Random Walk Metropolis is a gradient-free Markov chain Monte Carlo (MCMC) algorithm. The algorithm involves a proposal-generating step, proposal_state = current_state + perturb, where perturb is drawn from a symmetric perturbation distribution, followed by a Metropolis-Hastings accept/reject step. For more details see Section 2.1 of Roberts and Rosenthal (2004).
This class implements RWM with normal and uniform proposals. Alternatively, the user can supply any custom proposal-generating function.
The function one_step can update multiple chains in parallel. It assumes that all leftmost dimensions of current_state index independent chain states (and are therefore updated independently). The output of target_log_prob_fn(*current_state) should sum log-probabilities across all event dimensions. Slices along the rightmost dimensions may have different target distributions; for example, current_state[0, :] could have a different target distribution from current_state[1, :]. These semantics are governed by target_log_prob_fn(*current_state). (The number of independent chains is tf.size(target_log_prob_fn(*current_state)).)
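These batch semantics can be illustrated with a minimal NumPy sketch of the algorithm (this is not the TFP implementation; rwm_step, target_log_prob, and the scale parameter are illustrative names chosen here):

```python
import numpy as np

def rwm_step(current_state, target_log_prob_fn, scale, rng):
    """One Random Walk Metropolis step, vectorized over independent chains.

    current_state has shape [num_chains, event_dim]; target_log_prob_fn
    sums log-probabilities across the event dimension, returning shape
    [num_chains] -- one value per independent chain.
    """
    # Symmetric normal proposal centered at the current state.
    proposal = current_state + scale * rng.standard_normal(current_state.shape)
    # Because the proposal is symmetric, the Hastings correction cancels
    # and only the ratio of target densities remains.
    log_accept_ratio = target_log_prob_fn(proposal) - target_log_prob_fn(current_state)
    accept = np.log(rng.uniform(size=log_accept_ratio.shape)) < log_accept_ratio
    # Each chain accepts or rejects independently.
    return np.where(accept[:, None], proposal, current_state)

# Toy target: independent standard normals, log-prob summed over event dims.
def target_log_prob(x):
    return -0.5 * np.sum(x * x, axis=-1)

rng = np.random.default_rng(42)
state = np.zeros((4, 2))  # 4 independent chains, each with a 2-dim state
samples = []
for _ in range(2000):
    state = rwm_step(state, target_log_prob, scale=1.0, rng=rng)
    samples.append(state)
samples = np.array(samples[500:])  # discard burn-in
```

Note how the accept/reject decision is made per chain (shape [num_chains]), while the proposal perturbs every event dimension of every chain.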
Args
target_log_prob_fn
Python callable which takes an argument like current_state (or *current_state if it's a list) and returns its (possibly unnormalized) log-density under the target distribution.
new_state_fn
Python callable which takes a list of state parts and a seed; returns a same-type list of Tensors, each being a perturbation of the input state parts. The perturbation distribution is assumed to be a symmetric distribution centered at the input state part. Default value: None which is mapped to tfp.mcmc.random_walk_normal_fn().
experimental_shard_axis_names
A structure of string names indicating how members of the state are sharded.
name
Python str name prefixed to Ops created by this function. Default value: None (i.e., 'rwm_kernel').
Raises
ValueError
if scale is neither a single value nor a list with the same length as current_state.
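The new_state_fn contract (a list of state parts and a seed in, a same-length list of symmetrically perturbed parts out) can be sketched with NumPy; cauchy_new_state_fn is a hypothetical example, not part of the TFP API, and stands in for any custom symmetric proposal:

```python
import numpy as np

def cauchy_new_state_fn(scale=0.5):
    """Returns a hypothetical new_state_fn mimicking the documented
    contract: it accepts a list of state parts and a seed, and returns
    a same-length list of perturbed parts. The Cauchy perturbation is
    symmetric about the current state, as the kernel requires."""
    def new_state_fn(state_parts, seed):
        rng = np.random.default_rng(seed)
        return [part + scale * rng.standard_cauchy(np.shape(part))
                for part in state_parts]
    return new_state_fn

fn = cauchy_new_state_fn(scale=0.5)
parts = [np.zeros(3), np.zeros((2, 2))]   # a two-part state
perturbed = fn(parts, seed=7)
```

A heavy-tailed proposal like this occasionally makes large jumps, which can help escape local modes, at the cost of a lower acceptance rate.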
Attributes
experimental_shard_axis_names
The shard axis names for members of the state.
is_calibrated
Returns True if the Markov chain converges to the specified distribution. TransitionKernels which are "uncalibrated" are often calibrated by composing them with the tfp.mcmc.MetropolisHastings TransitionKernel.
name
new_state_fn
parameters
Return dict of __init__ arguments and their values.
copy
Non-destructively creates a deep copy of the kernel.
Args
**override_parameter_kwargs
Python String/value dictionary of initialization arguments to override with new values.
Returns
new_kernel
TransitionKernel object of same type as self, initialized with the union of self.parameters and override_parameter_kwargs, with any shared keys overridden by the value of override_parameter_kwargs, i.e., dict(self.parameters, **override_parameter_kwargs).
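The override semantics reduce to a plain dict merge, where shared keys take the overridden values. A small stand-alone sketch (the parameter values are illustrative, not a real kernel's):

```python
# Hypothetical parameters dict, as self.parameters might return it.
parameters = {
    'target_log_prob_fn': None,
    'new_state_fn': None,
    'name': 'rwm_kernel',
}
override_parameter_kwargs = {'name': 'rwm_copy'}

# dict(a, **b): keys from b win; keys only in a are kept unchanged.
merged = dict(parameters, **override_parameter_kwargs)
```

The original parameters dict is left untouched, which is what makes the copy non-destructive.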
one_step
Runs one iteration of Random Walk Metropolis with normal proposal.
Args
current_state
Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s). The first r dimensions index independent chains, r = tf.rank(target_log_prob_fn(*current_state)).
previous_kernel_results
collections.namedtuple containing Tensors representing values from previous calls to this function (or from the bootstrap_results function).
Returns
next_state
Tensor or Python list of Tensors representing the state(s) of the Markov chain(s) after taking exactly one step. Has same type and shape as current_state.
kernel_results
collections.namedtuple of internal calculations used to advance the chain.
Raises
ValueError
if scale is neither a single value nor a list with the same length as current_state.
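The one_step/bootstrap_results calling pattern above can be sketched with a pure-Python stand-in; ToyRWM below is a hypothetical minimal kernel (not the TFP class) that exposes the same contract of threading kernel_results between calls:

```python
import collections
import math
import random

# Stand-in for the namedtuple of internal calculations the kernel threads
# from one call to the next.
KernelResults = collections.namedtuple('KernelResults',
                                       ['target_log_prob', 'is_accepted'])

class ToyRWM:
    """Hypothetical 1-D kernel exposing a bootstrap_results/one_step
    contract like the one documented above."""

    def __init__(self, target_log_prob_fn, scale=1.0):
        self.target_log_prob_fn = target_log_prob_fn
        self.scale = scale

    def bootstrap_results(self, current_state):
        # Seed the results with the log-prob of the initial state.
        return KernelResults(self.target_log_prob_fn(current_state), False)

    def one_step(self, current_state, previous_kernel_results):
        proposal = current_state + random.gauss(0.0, self.scale)
        new_tlp = self.target_log_prob_fn(proposal)
        # Cached log-prob avoids re-evaluating the target at current_state.
        log_ratio = new_tlp - previous_kernel_results.target_log_prob
        if math.log(random.random()) < log_ratio:
            return proposal, KernelResults(new_tlp, True)
        return current_state, previous_kernel_results._replace(is_accepted=False)

random.seed(0)
kernel = ToyRWM(lambda x: -0.5 * x * x)  # standard normal target
state = 0.0
results = kernel.bootstrap_results(state)
draws = []
for _ in range(3000):
    state, results = kernel.one_step(state, results)
    draws.append(state)
```

Caching target_log_prob in the results tuple is the point of threading previous_kernel_results: each iteration evaluates the target only at the proposal.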