Maximizable informational entropy as measure of probabilistic uncertainty
Related papers
A non-negative informational entropy for continuous probability distribution
HAL (Le Centre pour la Communication Scientifique Directe), 2022
In this work, we propose to use varentropy, an information measure defined from a generalization of thermodynamic entropy, for the calculation of informational entropy, in order to avoid the negative entropy that can arise for continuous probability distributions.
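For context, a short reminder of the problem being addressed: the differential (Shannon) entropy of a continuous density can be negative. For example, for X uniform on $[0, 1/2]$ with density $f(x) = 2$,
$$ h(X) = -\int_0^{1/2} f(x)\,\ln f(x)\, dx = -\int_0^{1/2} 2\ln 2\, dx = -\ln 2 < 0 . $$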
Generalized measure of uncertainty and the maximizable entropy
Modern Physics Letters B, 2010
For a random variable x we can define a variational relationship with practical physical meaning as $dI = d\bar{x} - \overline{dx}$, where I is called the uncertainty measure. With the help of a generalized definition of expectation, ...
Probability distribution and entropy as a measure of uncertainty
Journal of Physics A: Mathematical and Theoretical, 2008
The relationship between three probability distributions and their maximizable entropy forms is discussed without postulating entropy properties. For this purpose, the entropy I is defined as a measure of uncertainty of the probability distribution of a random variable x by the variational relationship $dI = d\bar{x} - \overline{dx}$, a definition underlying the maximization of entropy for the corresponding distribution.
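For orientation, a minimal worked sketch of how this variational definition leads to a familiar entropy form, assuming the ordinary linear expectation $\bar{x} = \sum_i p_i x_i$ (the generalized case is treated in the papers listed here): since $\overline{dx} = \sum_i p_i\, dx_i$,
$$ dI = d\bar{x} - \overline{dx} = \sum_i x_i\, dp_i . $$
If the distribution is exponential, $p_i = e^{-\beta x_i}/Z$, then $x_i = -(\ln p_i + \ln Z)/\beta$ and, using $\sum_i dp_i = 0$,
$$ \beta\, dI = -\sum_i \ln p_i\, dp_i = d\Big(-\sum_i p_i \ln p_i\Big) , $$
so the maximizable entropy attached to this distribution is, up to the constant $\beta$, the Shannon-Gibbs form $I = -\sum_i p_i \ln p_i$.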
Some invariant probability and entropy as a measure of uncertainty
arXiv preprint cond-mat/0612076, 2007
The relationship between some probability distributions and their invariant property is discussed. A measure I of uncertainty (informational entropy) of the probability distribution p(x) is defined in a variational way by $dI = d\bar{x} - \overline{dx}$, which makes it possible to derive three ...
Generalized measurement of uncertainty and the maximizable entropy
arXiv preprint arXiv: …, 2008
Abstract: For a random variable we can define a variational relationship with practical physical meaning as $dI = d\bar{x} - \overline{dx}$, where I is called the uncertainty measure. With the help of a generalized definition of expectation, $\bar{x} = \sum_i g(p_i)\, x_i$, and ...
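A minimal numerical sketch of such a generalized expectation; the weight function g(p) = p**q used below is only an illustrative assumption, not necessarily the choice made in the paper:

import numpy as np

def generalized_expectation(x, p, g):
    # bar(x) = sum_i g(p_i) * x_i for a chosen weight function g
    x, p = np.asarray(x, float), np.asarray(p, float)
    return float(np.sum(g(p) * x))

x = np.array([1.0, 2.0, 3.0, 4.0])
p = np.array([0.4, 0.3, 0.2, 0.1])

print(generalized_expectation(x, p, lambda p: p))     # g(p) = p recovers the ordinary mean: 2.0
print(generalized_expectation(x, p, lambda p: p**2))  # g(p) = p^q with q = 2, an escort-like weighting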
IEEE Transactions on Information Theory
Entropy and relative or cross entropy measures are two very fundamental concepts in information theory and are also widely used for statistical inference across disciplines. The related optimization problems, in particular the maximization of the entropy and the minimization of the cross entropy or relative entropy (divergence), are essential for general logical inference in our physical world. In this paper, we discuss a two parameter generalization of the popular Rényi entropy and associated optimization problems. We derive the desired entropic characteristics of the new generalized entropy measure including its positivity, expandability, extensivity and generalized (sub-)additivity. More importantly, when considered over the class of sub-probabilities, our new family turns out to be scale-invariant; this property does not hold for most existing generalized entropy measures. We also propose the corresponding cross entropy and relative entropy measures and discuss their geometric properties including generalized Pythagorean results over β-convex sets. The maximization of the new entropy and the minimization of the corresponding cross or relative entropy measures are carried out explicitly under the non-extensive ('third choice') constraints given by Tsallis' normalized q-expectations, which also correspond to the β-linear family of probability distributions. Important properties of the associated forward and reverse projection rules are discussed along with their existence and uniqueness. In this context, we have come up with, for the first time, a class of entropy measures, a subfamily of our two-parameter generalization, that leads to the classical (extensive) exponential family of MaxEnt distributions under the non-extensive constraints; this discovery has been illustrated through the useful concept of escort distributions and can potentially be important for future research in information theory. Other members of the new entropy family, however, lead to the power-law type generalized q-exponential MaxEnt distributions, in conformity with Tsallis' nonextensive theory. Therefore, our new family indeed provides a wide range of entropy and associated measures combining both the extensive and nonextensive MaxEnt theories under one umbrella.
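As a concrete reference point for the building blocks this abstract generalizes, a small numerical sketch of the classical Rényi entropy and of an escort distribution (the two-parameter family itself is not reproduced here):

import numpy as np

def renyi_entropy(p, alpha):
    # classical Rényi entropy H_alpha(p) = ln(sum_i p_i^alpha) / (1 - alpha)
    p = np.asarray(p, float)
    if np.isclose(alpha, 1.0):               # alpha -> 1 recovers Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def escort(p, q):
    # escort distribution P_i = p_i^q / sum_j p_j^q, as used in nonextensive (Tsallis) MaxEnt
    p = np.asarray(p, float)
    w = p ** q
    return w / w.sum()

p = np.array([0.5, 0.25, 0.25])
print(renyi_entropy(p, 2.0))   # collision entropy -ln(sum p_i^2)
print(escort(p, 2.0))          # escort weights emphasize the more probable outcomes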
Nucleation and Atmospheric Aerosols, 2005
A long standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
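A minimal sketch of the idea for the textbook dice example, assuming a mean-value constraint and a Gaussian density on the constraint value; the numbers and the Monte Carlo propagation below are illustrative assumptions, not the paper's exact construction:

import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)                        # outcomes of a six-sided die

def maxent_probs(mean):
    # classic MaxEnt solution p_i proportional to exp(-lam * x_i) matching a given mean
    def mean_gap(lam):
        w = np.exp(-lam * x)
        return np.sum(x * w) / np.sum(w) - mean
    lam = brentq(mean_gap, -50.0, 50.0)    # solve for the Lagrange multiplier
    w = np.exp(-lam * x)
    return w / w.sum()

# Treat the constraint value as uncertain: a Gaussian around 4.5 (illustrative choice),
# propagated through the MaxEnt map to give a spread over the MaxEnt probabilities.
rng = np.random.default_rng(0)
samples = np.array([maxent_probs(m) for m in rng.normal(4.5, 0.2, size=500)])
print(samples.mean(axis=0))                # mean MaxEnt probabilities
print(samples.std(axis=0))                 # induced uncertainty on each p_i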
Information and entropy of continuous random variables
IEEE Transactions on Information Theory, 1997
The mean value of the square of a generalized score function is shown to be interpretable as information associated with a continuous random variable. This information is in particular cases equal to the Fisher information of the corresponding distribution.
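In the classical special case this reduces to a familiar identity: with score function $u_\theta(x) = \partial_\theta \ln f(x;\theta)$, the Fisher information is the mean square of the score, $I(\theta) = \mathbb{E}[u_\theta(X)^2]$. For the Gaussian location family $f(x;\theta) = (2\pi\sigma^2)^{-1/2} e^{-(x-\theta)^2/2\sigma^2}$, the score is $u_\theta(x) = (x-\theta)/\sigma^2$, so $I(\theta) = \mathbb{E}[(X-\theta)^2]/\sigma^4 = 1/\sigma^2$. The paper's generalized score function extends this interpretation beyond the classical case.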