Yet Another Analysis of Dice Problems
Related papers
Proceedings of the IEEE, 2000
We show that a famous die experiment used by E. T. Jaynes as intuitive justification of the need for maximum entropy (ME) estimation admits, in fact, of solutions by classical Bayesian estimation. The Bayesian answers are the maximum a posteriori (MAP) and posterior mean solutions to the problem. These depart radically from the ME solution, and are also much more probable answers.
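As a concrete illustration of the ME side of this comparison, here is a minimal sketch (assuming the standard Brandeis setup of a six-sided die with prescribed mean 4.5; not the paper's code) that computes the maximum entropy distribution via the usual Lagrange dual. The Bayesian MAP and posterior mean answers the abstract refers to would instead be computed from an explicit prior and likelihood.

```python
# Minimal sketch: MaxEnt distribution over die faces 1..6 with mean 4.5,
# found by minimizing the convex Lagrange dual (log-partition function).
import numpy as np
from scipy.optimize import minimize_scalar

faces = np.arange(1, 7)
target_mean = 4.5  # the constraint value in Jaynes' example

def dual(lam):
    # ln Z(lam) - lam * target_mean; its minimizer gives the MaxEnt solution
    return np.log(np.exp(lam * faces).sum()) - lam * target_mean

lam = minimize_scalar(dual).x
p = np.exp(lam * faces)
p /= p.sum()

print("MaxEnt probabilities:", np.round(p, 4))
print("mean check:", (faces * p).sum())  # ~4.5
```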
Topics in Bayesian statistics and maximum entropy
1998
Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered the best justification of Bayesian analysis and the maximum entropy principle as applied in the natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making.
An upper bound for the entropy and its applications to the maximal entropy problem
A variational principle in which the Lagrange multipliers of a trial distribution are used as variational parameters is discussed as an efficient, practical route to the determination of the distribution of maximal entropy. The theoretical reason is that only linearly independent constraints provide a basis for a unique expansion.
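The bound underlying this variational principle can be stated in a standard form (the notation here is assumed, not taken from the paper):

```latex
% For any trial multipliers \lambda_1,\dots,\lambda_m and any distribution
% p satisfying the moment constraints \langle A_r\rangle = \sum_x p(x)A_r(x),
% Gibbs' inequality gives an upper bound on the entropy:
\[
  S[p] = -\sum_x p(x)\ln p(x)
  \;\le\; \lambda_0(\lambda) + \sum_{r=1}^{m}\lambda_r\langle A_r\rangle,
  \qquad
  \lambda_0(\lambda) = \ln\sum_x \exp\!\Big(-\sum_{r=1}^{m}\lambda_r A_r(x)\Big),
\]
% with equality iff p is the canonical distribution with those multipliers.
% Minimizing the right-hand side over the trial multipliers therefore
% yields both the maximal entropy and the optimal multipliers.
```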
Bayesian Inference and Maximum Entropy Methods in Science and Engineering
Springer proceedings in mathematics & statistics, 2018
This book series features volumes composed of selected contributions from workshops and conferences in all areas of current research in mathematics and statistics, including operations research and optimization. In addition to an overall evaluation by the publisher of the interest, scientific quality, and timeliness of each proposal, individual contributions are all refereed to the high quality standards of leading journals in the field. Thus, this series provides the research community with well-edited, authoritative reports on developments in the most exciting areas of mathematical and statistical research today.
Entropy, 2001
In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function.
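For orientation, the Code Length Game can be sketched as follows (standard notation, assumed rather than quoted from the paper):

```latex
% Nature chooses a distribution P from a model class \mathcal{P}; the
% observer chooses idealized code lengths \kappa(x) = -\ln q(x) for some
% distribution q. The observer's cost is the expected code length:
\[
  \langle\kappa, P\rangle = \sum_x P(x)\,\kappa(x)
  = H(P) + D(P\,\|\,q) \;\ge\; H(P),
\]
% so the best reply to a known P is q = P, with value H(P). Equilibrium
% of the game then ties the observer's minimax code to Nature's maximum
% entropy distribution:
\[
  \inf_{q}\,\sup_{P\in\mathcal{P}} \langle\kappa, P\rangle
  \;=\; \sup_{P\in\mathcal{P}} H(P)
\]
% (under the regularity conditions studied in the paper).
```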
On The Relationship between Bayesian and Maximum Entropy Inference
AIP Conference Proceedings, 2004
We investigate Bayesian and Maximum Entropy methods for doing inference under uncertainty. This investigation is primarily through concrete examples that have been previously investigated in the literature. We find that it is possible to do Bayesian and MaxEnt inference using the same information, despite claims to the contrary, and that they lead to different results. We find that these differences are due to the Bayesian inference not assuming anything beyond the given prior probabilities and the data, whereas MaxEnt implicitly makes strong independence assumptions, and assumes that the given constraints are the only ones operating. We also show that maximum likelihood and maximum a posteriori estimators give different and misleading estimates in our examples compared to posterior mean estimates. We generalize the classic method of maximum entropy inference to allow for uncertainty in the constraint values. This generalized MaxEnt (GME) makes MaxEnt inference applicable to a much wider range of problems, and makes direct comparison between Bayesian and MaxEnt inference possible. Also, we show that MaxEnt is a generalized principle of independence, and this property is what makes it the preferred inference method in many cases.
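The MAP-versus-posterior-mean discrepancy is easy to see in a toy case (a hypothetical example, not taken from the paper):

```python
# Illustrative toy case: for a Beta(a, b) posterior over a probability
# theta, the MAP (mode) and posterior mean differ whenever the posterior
# is skewed (a != b).
a, b = 2.0, 5.0                        # e.g. 1 success, 4 failures, uniform prior

map_estimate = (a - 1) / (a + b - 2)   # posterior mode: 0.200
posterior_mean = a / (a + b)           # posterior mean: ~0.286

print(f"MAP:            {map_estimate:.3f}")
print(f"Posterior mean: {posterior_mean:.3f}")
# For skewed posteriors the mode can sit far from where most of the
# probability mass lies, which is one sense in which point estimates can
# be misleading relative to the posterior mean.
```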
A New Bound for Discrete Distributions Based on Maximum Entropy
In this work we re-examine some classical bounds for nonnegative integer-valued random variables by means of information theoretic or maxentropic techniques using fractional moments as constraints. The new bound is able to capture optimally all the information content provided by the sequence of given moments or by the moment generating function, summarized by few fractional moments. The bound improvement is not trivial.
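For context, here is a sketch of the kind of classical fractional-moment bound being improved upon (the maxentropic refinement itself is not reproduced; the Poisson example and all parameters are assumptions):

```python
# Markov's inequality with a fractional moment: for nonnegative X and
# a > 0,  P(X >= a) <= E[X^s] / a^s  for every s > 0; one may then
# optimize over s. Example: X ~ Poisson(3), a = 8.
import numpy as np
from scipy.stats import poisson

lam, a = 3.0, 8
k = np.arange(0, 200)            # truncated support; the tail is negligible
pmf = poisson.pmf(k, lam)

def frac_moment_bound(s):
    return (pmf * k.astype(float) ** s).sum() / a ** s

best = min(frac_moment_bound(s) for s in np.linspace(0.1, 10, 200))
exact = 1 - poisson.cdf(a - 1, lam)

print(f"optimized fractional-moment bound: {best:.5f}")
print(f"exact tail probability:            {exact:.5f}")
```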
Nucleation and Atmospheric Aerosols, 2005
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
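A minimal numerical sketch of this idea, assuming a die-like support and a Gaussian density over a single mean constraint (illustrative parameters, not the paper's calculation):

```python
# Treat the classic MaxEnt solution as a function of the constraint value,
# then push a density over that value through it (Monte Carlo).
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
faces = np.arange(1, 7)

def maxent_probs(mean_value):
    # classic MaxEnt point probabilities for a given (exact) mean
    dual = lambda lam: np.log(np.exp(lam * faces).sum()) - lam * mean_value
    lam = minimize_scalar(dual).x
    p = np.exp(lam * faces)
    return p / p.sum()

# uncertain constraint: observed mean 4.5 with std 0.2 (hypothetical)
samples = rng.normal(4.5, 0.2, size=2000)
samples = samples[(samples > 1) & (samples < 6)]   # keep feasible means
probs = np.array([maxent_probs(m) for m in samples])

print("mean of MaxEnt probabilities:", np.round(probs.mean(axis=0), 4))
print("std  of MaxEnt probabilities:", np.round(probs.std(axis=0), 4))
```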
Binomial and Poisson distributions as maximum entropy distributions
IEEE Transactions on Information Theory, 2001
The binomial and the Poisson distributions are shown to be maximum entropy distributions of suitably defined sets. Poisson's law is considered as a case of entropy maximization, and convergence in information divergence is also established.
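A quick numerical check of the binomial claim (illustrative only, assuming the class of sums of independent Bernoulli variables with a fixed mean, within which the i.i.d. binomial choice maximizes entropy):

```python
# Compare the entropy of Binomial(4, 0.5) with that of a Bernoulli-sum
# distribution having the same mean but unequal success probabilities.
import numpy as np

def bernoulli_sum_pmf(ps):
    # distribution of a sum of independent Bernoulli(p_i) by convolution
    pmf = np.array([1.0])
    for p in ps:
        pmf = np.convolve(pmf, [1 - p, p])
    return pmf

def entropy(pmf):
    pmf = pmf[pmf > 0]
    return -(pmf * np.log(pmf)).sum()

n, mean = 4, 2.0
binom = bernoulli_sum_pmf([mean / n] * n)          # Binomial(4, 0.5)
other = bernoulli_sum_pmf([0.3, 0.7, 0.4, 0.6])    # same mean, unequal p_i

print(f"H(binomial)       = {entropy(binom):.4f}")
print(f"H(unequal-p sum)  = {entropy(other):.4f}")  # strictly smaller
```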