Minimum message length

Minimum message length (MML) is a Bayesian information-theoretic method for statistical model comparison and selection.[1] It provides a formal information theory restatement of Occam's Razor: even when models are equal in their measure of fit-accuracy to the observed data, the one generating the most concise explanation of data is more likely to be correct (where the explanation consists of the statement of the model, followed by the lossless encoding of the data using the stated model). MML was invented by Chris Wallace, first appearing in the seminal paper "An information measure for classification".[2] MML is intended not just as a theoretical construct, but as a technique that may be deployed in practice.[3] It differs from the related concept of Kolmogorov complexity in that it does not require use of a Turing-complete language to model data.[4]

Shannon's A Mathematical Theory of Communication (1948) states that in an optimal code, the message length (in binary) of an event $E$ with probability $P(E)$ is given by $\operatorname{length}(E) = -\log_2(P(E))$.
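
As a minimal illustration (a sketch of my own, not from the article; the function name is invented), the optimal code length can be computed directly from an event's probability, with rarer events receiving longer codewords:

```python
import math

def code_length_bits(p: float) -> float:
    """Shannon-optimal code length, in bits, for an event of probability p."""
    return -math.log2(p)

print(code_length_bits(0.5))    # 1.0 bit: a fair coin flip
print(code_length_bits(1 / 8))  # 3.0 bits
print(code_length_bits(0.01))   # ~6.64 bits: rare events cost more to encode
```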

Bayes's theorem states that the probability of a (variable) hypothesis $H$ given fixed evidence $E$ is proportional to $P(E|H)\,P(H)$, which, by the definition of conditional probability, is equal to $P(H \land E)$. We want the model (hypothesis) with the highest such posterior probability. Suppose we encode a message which represents (describes) both model and data jointly. Since $\operatorname{length}(H \land E) = -\log_2(P(H \land E))$, the most probable model will have the shortest such message. The message breaks into two parts: $-\log_2(P(H \land E)) = -\log_2(P(H)) - \log_2(P(E|H))$. The first part encodes the model itself. The second part contains information (e.g., values of parameters, or initial conditions, etc.) that, when processed by the model, outputs the observed data.
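
To make the equivalence concrete, here is a small sketch (the hypothesis space, priors, and likelihoods below are invented for illustration): minimising the two-part message length selects exactly the hypothesis that maximises the posterior $P(E|H)\,P(H)$.

```python
import math

def message_length_bits(prior: float, likelihood: float) -> float:
    """Two-part message length in bits: -log2 P(H) - log2 P(E|H)."""
    return -math.log2(prior) - math.log2(likelihood)

# Invented priors P(H) and likelihoods P(E|H) for three rival hypotheses.
hypotheses = {
    "H1": (0.50, 0.01),
    "H2": (0.25, 0.10),
    "H3": (0.25, 0.05),
}

for name, (prior, lik) in hypotheses.items():
    print(name, round(message_length_bits(prior, lik), 2), "bits")

best = min(hypotheses, key=lambda h: message_length_bits(*hypotheses[h]))
print("shortest message (= MAP hypothesis):", best)  # H2
```

Here H2 wins: its prior costs 2 bits and its likelihood about 3.32 bits, for the shortest total of roughly 5.32 bits, matching the hypothesis with the largest product $P(E|H)\,P(H)$.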

MML naturally and precisely trades model complexity for goodness of fit. A more complicated model takes longer to state (a longer first part) but probably fits the data better (a shorter second part), so an MML metric will not choose a complicated model unless that model pays for itself.
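
As a worked illustration of this tradeoff (the encoding below is a deliberate simplification, and the 7-bit cost of stating the bias is an arbitrary placeholder; how to choose that precision is the subject of the next section), compare a fixed fair-coin model, which costs nothing to state, against a biased-coin model whose bias must be stated, on 100 flips with 80 heads:

```python
import math

def data_length_bits(heads: int, tails: int, p_heads: float) -> float:
    """Part two: code length of a flip sequence under a Bernoulli(p) model."""
    return -(heads * math.log2(p_heads) + tails * math.log2(1 - p_heads))

heads, tails = 80, 20

# Model A: a fair coin, fixed in advance -- nothing to state, part one is ~0 bits.
fair_total = data_length_bits(heads, tails, 0.5)

# Model B: a biased coin whose bias is stated to 7 bits of precision
# (an arbitrary placeholder for MML's principled choice of precision).
biased_total = 7 + data_length_bits(heads, tails, heads / (heads + tails))

print(f"fair coin  : {fair_total:.1f} bits")    # 100.0 bits
print(f"biased coin: {biased_total:.1f} bits")  # ~79.2 bits

# The 7 extra bits spent stating the model pay for themselves here,
# so the more complex (biased) model yields the shorter total message.
```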

Continuous-valued parameters


One reason why a model might be longer would be simply because its various parameters are stated to greater precision, thus requiring transmission of more digits. Much of the power of MML derives from its handling of how accurately to state parameters in a model, and a variety of approximations that make this feasible in practice. This makes it possible to usefully compare, say, a model with many parameters imprecisely stated against a model with fewer parameters more accurately stated.
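
A minimal numeric sketch of this precision tradeoff, under simplifying assumptions of my own rather than the article's (a Bernoulli model whose bias is quantised to a uniform grid of $2^b$ values, each costing $b$ bits to name): each extra bit of precision lengthens the first part by one bit, and pays off only while it still meaningfully shortens the second part.

```python
import math

def total_length_bits(heads: int, tails: int, precision_bits: int) -> float:
    """Two-part length with the coin's bias quantised to a 2**b value grid.

    Part one: b bits name one of 2**b equally likely parameter values.
    Part two: the flips coded with the grid value nearest the empirical rate.
    """
    n_grid = 2 ** precision_bits
    mle = heads / (heads + tails)
    # Snap the maximum-likelihood estimate to the nearest grid midpoint.
    k = min(max(round(mle * n_grid - 0.5), 0), n_grid - 1)
    p = (k + 0.5) / n_grid
    part_two = -(heads * math.log2(p) + tails * math.log2(1 - p))
    return precision_bits + part_two

heads, tails = 90, 10
for b in range(1, 11):
    print(b, round(total_length_bits(heads, tails, b), 2))

# The total falls at first (a better-placed parameter shortens part two),
# reaches its minimum at a modest precision, then climbs roughly one bit per
# extra bit of precision that no longer improves the fit (with small wobbles
# from where the grid points happen to fall).
```

Full MML methods choose this precision analytically (e.g., via the Fisher information) rather than by search, but the shape of the tradeoff is the same.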

References

  1. ^ Wallace, C. S. (2005). Statistical and Inductive Inference by Minimum Message Length. New York: Springer. ISBN 9780387237954. OCLC 62889003.
  2. ^ Wallace, C. S.; Boulton, D. M. (1968-08-01). "An Information Measure for Classification". The Computer Journal. 11 (2): 185–194. doi:10.1093/comjnl/11.2.185. ISSN 0010-4620.
  3. ^ Allison, Lloyd (2019). Coding Ockham's Razor. Springer. ISBN 978-3030094881. OCLC 1083131091.
  4. ^ Wallace, C. S.; Dowe, D. L. (1999-01-01). "Minimum Message Length and Kolmogorov Complexity". The Computer Journal. 42 (4): 270–283. doi:10.1093/comjnl/42.4.270. ISSN 0010-4620.
