pymc.logp — PyMC 5.22.0 documentation
pymc.logp(rv, value, warn_rvs=None, **kwargs)
Create a graph for the log-probability of a random variable.
Parameters:
rv : TensorVariable
value : tensor_like
Should be the same type (shape and dtype) as the rv.
warn_rvs : bool, optional
Warn if RVs were found in the logp graph. This can happen when a variable has other random variables as inputs. In that case, those random variables should be replaced by their respective values. pymc.logprob.conditional_logp can also be used as an alternative (see the sketch at the end of the Examples section).
Returns:
logp : TensorVariable
Raises:
If the logp cannot be derived.
Examples
Create a compiled function that evaluates the logp of a variable
import pymc as pm
import pytensor.tensor as pt

mu = pt.scalar("mu")
rv = pm.Normal.dist(mu, 1.0)
value = pt.scalar("value")
rv_logp = pm.logp(rv, value)
# Use .eval() for debugging
print(rv_logp.eval({value: 0.9, mu: 0.0}))  # -1.32393853
# Compile a function for repeated evaluations
rv_logp_fn = pm.compile_pymc([value, mu], rv_logp)
print(rv_logp_fn(value=0.9, mu=0.0))  # -1.32393853
Derive the graph for a transformation of a RandomVariable. The derived logp includes the change-of-variables (Jacobian) correction, so here it matches the logp of a LogNormal.
import pymc as pm
import pytensor.tensor as pt

mu = pt.scalar("mu")
rv = pm.Normal.dist(mu, 1.0)
exp_rv = pt.exp(rv)
value = pt.scalar("value")
exp_rv_logp = pm.logp(exp_rv, value)
# Use .eval() for debugging
print(exp_rv_logp.eval({value: 0.9, mu: 0.0}))  # -0.81912844
# Compile a function for repeated evaluations
exp_rv_logp_fn = pm.compile_pymc([value, mu], exp_rv_logp)
print(exp_rv_logp_fn(value=0.9, mu=0.0))  # -0.81912844
Define a CustomDist logp
import pymc as pm
import pytensor.tensor as pt

def normal_logp(value, mu, sigma):
    return pm.logp(pm.Normal.dist(mu, sigma), value)

with pm.Model() as model:
    mu = pm.Normal("mu")
    sigma = pm.HalfNormal("sigma")
    pm.CustomDist("x", mu, sigma, logp=normal_logp)
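The joint logp of this model, which uses normal_logp for x, can be checked with the standard Model methods. A minimal usage sketch, assuming the model defined above (not part of the original example):

# Sketch: evaluate the joint model logp at the model's initial point.
logp_fn = model.compile_logp()
point = model.initial_point()  # dict of values for mu, sigma (transformed), and x
print(logp_fn(point))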
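Sketch of the warn_rvs situation mentioned in the Parameters section (added here for illustration): when the requested variable depends on another random variable that has not been given a value, pm.logp warns. Providing values for all involved variables via pymc.logprob.conditional_logp avoids this; the variable names below and the assumption that the returned dict is keyed by the value variables follow conditional_logp's documented interface.

import pymc as pm
import pytensor.tensor as pt

sigma = pm.HalfNormal.dist(1.0)
x = pm.Normal.dist(0.0, sigma)

x_value = pt.scalar("x_value")
sigma_value = pt.scalar("sigma_value")

# pm.logp(x, x_value) would warn: the random variable `sigma` is still an
# input of the resulting logp graph. conditional_logp instead conditions on
# values for both variables and returns a dict keyed by the value variables.
logp_terms = pm.logprob.conditional_logp({sigma: sigma_value, x: x_value})
x_logp = logp_terms[x_value]
print(x_logp.eval({sigma_value: 1.0, x_value: 0.9}))  # -1.32393853, as in the first example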