logentropy — SciPy v1.15.3 Manual


Uniform.logentropy(*, method=None)

Logarithm of the differential entropy

In terms of probability density function \(f(x)\) and support \(\chi\), the differential entropy (or simply “entropy”) of a random variable \(X\) is:

\[h(X) = - \int_{\chi} f(x) \log f(x) dx\]

logentropy computes the logarithm of the differential entropy (“log-entropy”), \(\log(h(X))\). This is mathematically equivalent to computing \(h(X)\) and then taking the logarithm, but it may be numerically favorable compared to that naive implementation.
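For contrast, here is a minimal sketch of the naive implementation (the helper naive_logentropy is our own, not part of SciPy):

>>> import numpy as np
>>> from scipy import stats
>>> def naive_logentropy(X):
...     # Compute h(X) first, then take the logarithm. np.log returns
...     # nan (with a warning) whenever h(X) is negative, whereas
...     # logentropy expresses the same result as a complex number.
...     return np.log(X.entropy())

See the Notes and Examples below for how negative entropies are handled.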

Parameters:

method : {None, 'formula', 'logexp', 'quadrature'}

The strategy used to evaluate the log-entropy. By default (None), the infrastructure chooses between the following options, listed in order of precedence.

- 'formula': use a formula for the log-entropy itself
- 'logexp': evaluate the entropy and take the logarithm
- 'quadrature': numerically log-integrate the logarithm of the entropy integrand

Not all method options are available for all distributions. If the selected method is not available, a NotImplementedError will be raised. Passing method explicitly is demonstrated in the final example below.

Returns:

out : array

The log-entropy.

Notes

If the entropy of a distribution is negative, then the log-entropy is complex with imaginary part \(\pi\). For consistency, the result of this function always has complex dtype, regardless of the value of the imaginary part.
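For example, when the entropy is known to be positive, the imaginary part is zero and the real part may be extracted directly (a brief sketch using the uniform distribution from the Examples below):

>>> import numpy as np
>>> from scipy import stats
>>> res = stats.Uniform(a=-1., b=1.).logentropy()
>>> bool(res.imag == 0)  # positive entropy, so the imaginary part is zero
True
>>> float(res.real)  # recover a real-valued log-entropy
-0.3665129205816642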


Examples

Instantiate a distribution with the desired parameters:

>>> import numpy as np
>>> from scipy import stats
>>> X = stats.Uniform(a=-1., b=1.)

Evaluate the log-entropy:

>>> X.logentropy()
(-0.3665129205816642+0j)
>>> np.allclose(np.exp(X.logentropy()), X.entropy())
True

For a random variable with negative entropy, the log-entropy has an imaginary part equal to np.pi.

>>> X = stats.Uniform(a=-.1, b=.1)
>>> X.entropy(), X.logentropy()
(-1.6094379124341007, (0.4758849953271105+3.141592653589793j))
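An evaluation strategy can also be requested explicitly via method. Which strategies are available depends on the distribution; assuming 'logexp' and 'quadrature' are both implemented here, their results should agree:

>>> X = stats.Uniform(a=-1., b=1.)
>>> np.allclose(X.logentropy(method='logexp'),
...             X.logentropy(method='quadrature'))
True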