A versatile MCMC strategy for sampling posterior distributions of analytically intractable models

Advances in Bayesian Modelling and Computation: Spatio-Temporal Processes, Model Assessment and Adaptive MCMC

2009

The modelling and analysis of complex stochastic systems with increasingly large data sets, state-spaces and parameters provide major stimulus to research in Bayesian nonparametric methods and Bayesian computation. This dissertation presents advances in both nonparametric modelling and statistical computation stimulated by challenging problems of analysis in complex spatio-temporal systems and core computational issues in model fitting and model assessment. The first part of the thesis, represented by chapters 2 to 4, concerns novel, nonparametric Bayesian mixture models for spatial point processes, with advances in modelling and computation. … In sampling-based marginal likelihood computation, the proposed adaptive Monte Carlo method and sequential learning approach can facilitate improved accuracy. The approaches are exemplified both in synthetic data examples and in a real data analysis arising in astro-statistics. Finally, chapter 7 summarizes the dissertation and discusses possible extensions of the specific modelling and computational innovations, as well as potential future work.

A Monte Carlo Metropolis-Hastings Algorithm for Sampling from Distributions with Intractable Normalizing Constants

Neural Computation, 2013

Simulating from distributions with intractable normalizing constants has been a long-standing problem in machine learning. In this letter, we propose a new algorithm, the Monte Carlo Metropolis-Hastings (MCMH) algorithm, for tackling this problem. The MCMH algorithm is a Monte Carlo version of the Metropolis-Hastings algorithm. It replaces the unknown normalizing constant ratio by a Monte Carlo estimate in simulations, while still converging, as shown in the letter, to the desired target distribution under mild conditions. The MCMH algorithm is illustrated with spatial autologistic models and exponential random graph models. Unlike other auxiliary variable Markov chain Monte Carlo (MCMC) algorithms, such as the Møller and exchange algorithms, the MCMH algorithm avoids the requirement for perfect sampling, and thus can be applied to many statistical models for which perfect sampling is not available or very expensive. The MCMH algorithm can also be applied to Bayesian inference for ra...
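The core move of MCMH can be sketched in a few lines. The following is an illustrative toy, not the paper's implementation: the model `f(x; theta) = exp(theta*x - x^2/2)` is a N(theta, 1) likelihood whose normalizer we pretend is unknown, so exact auxiliary draws are available for checking; in practice the auxiliary samples would come from a short MCMC run. The function name `mcmh` and all tuning values are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized density f(x; theta) = exp(theta*x - x^2/2),
# i.e. a N(theta, 1) likelihood whose normalizer Z(theta) we pretend is unknown.
def log_f(x, theta):
    return theta * x - 0.5 * x * x

def mcmh(data, n_iter=5000, m=50, step=0.5):
    """Monte Carlo Metropolis-Hastings sketch: the intractable ratio
    Z(prop)/Z(theta) is replaced by an importance-sampling estimate
    built from m auxiliary draws at the current parameter."""
    theta = 0.0
    chain = []
    n = len(data)
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Auxiliary draws y ~ p(. | theta); exact here, MCMC in general.
        y = theta + rng.standard_normal(m)
        # r_hat estimates Z(prop)/Z(theta) = E_theta[f(y; prop)/f(y; theta)]
        r_hat = np.mean(np.exp(log_f(y, prop) - log_f(y, theta)))
        # MH log-ratio under a flat prior and symmetric proposal; the
        # normalizer enters once per data point, hence the factor n.
        log_alpha = np.sum(log_f(data, prop) - log_f(data, theta)) - n * np.log(r_hat)
        if np.log(rng.random()) < log_alpha:
            theta = prop
        chain.append(theta)
    return np.array(chain)

data = rng.normal(1.5, 1.0, size=20)
chain = mcmh(data)
print(chain[1000:].mean())  # should sit near the sample mean of the data
```

As the letter notes, the estimate `r_hat` introduces noise into the acceptance ratio, and the convergence result is what justifies using a finite `m`.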

Maximum likelihood estimation for spatial models by Markov chain Monte Carlo stochastic approximation

2001

Summary. We propose a two-stage algorithm for computing maximum likelihood estimates for a class of spatial models. The algorithm combines Markov chain Monte Carlo methods such as the Metropolis-Hastings-Green algorithm and the Gibbs sampler, and stochastic approximation methods such as the off-line average and adaptive search direction. A new criterion is built into the algorithm so that it stops automatically once the desired precision has been reached.

On general sampling schemes for Particle Markov chain Monte Carlo methods

Particle Markov Chain Monte Carlo methods are used to carry out inference in non-linear and non-Gaussian state space models, where the posterior density of the states is approximated using particles. Current approaches have usually carried out Bayesian inference using a particle Metropolis-Hastings algorithm or a particle Gibbs sampler. In this paper, we give a general approach to constructing sampling schemes that converge to the target distributions given in Andrieu et al. [2010] and Olsson and Ryden [2011]. We describe our methods as a particle Metropolis within Gibbs sampler (PMwG). The advantage of our general approach is that the sampling scheme can be tailored to obtain good results for different applications. We investigate the properties of the general sampling scheme, including conditions for uniform convergence to the posterior. We illustrate our methods with examples of state space models where one group of parameters can be generated in a straightforward manner in a particle Gibbs step by conditioning on the states, but where it is cumbersome and inefficient to generate such parameters when the states are integrated out. Conversely, it may be necessary to generate a second group of parameters without conditioning on the states because of the high dependence between such parameters and the states. Our examples include state space models with diffuse initial conditions, where we introduce two methods to deal with the initial conditions.

MCMC for doubly-intractable distributions

2006

Markov Chain Monte Carlo (MCMC) algorithms are routinely used to draw samples from distributions with intractable normalization constants. However, standard MCMC algorithms do not apply to doubly-intractable distributions in which there are additional parameter-dependent normalization terms; for example, the posterior over parameters of an undirected graphical model. An ingenious auxiliary-variable scheme offers a solution: exact sampling (Propp and Wilson, 1996) is used to sample from a Metropolis-Hastings proposal for which the acceptance probability is tractable. Unfortunately the acceptance probability of these expensive updates can be low. This paper provides a generalization of this scheme and a new MCMC algorithm, which obtains better acceptance probabilities for the same amount of exact sampling, and removes the need to estimate model parameters before sampling begins.
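The exchange algorithm that this paper generalizes can be sketched on the same kind of toy used above: a model where exact sampling happens to be easy, so the auxiliary-variable trick can be seen in isolation. The model, the function name `exchange`, and the tuning values are all illustrative, not from the paper. One exact draw from the model at the proposed parameter makes both unknown normalizers cancel from the acceptance ratio.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy unnormalized likelihood f(x; theta) = exp(theta*x - x^2/2): a N(theta, 1)
# model whose normalizer Z(theta) is treated as unknown.
def log_f(x, theta):
    return theta * x - 0.5 * x * x

def exchange(data, n_iter=5000, step=0.5):
    """Exchange-algorithm sketch: one exact auxiliary dataset w ~ p(. | prop)
    per iteration; the Z(theta) and Z(prop) terms cancel exactly."""
    theta = 0.0
    chain = []
    n = len(data)
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        # Exact sampling from the model at the proposed parameter
        # (this is where Propp-Wilson coupling would be needed in general).
        w = prop + rng.standard_normal(n)
        log_alpha = (np.sum(log_f(data, prop) - log_f(data, theta))
                     + np.sum(log_f(w, theta) - log_f(w, prop)))
        if np.log(rng.random()) < log_alpha:
            theta = prop
        chain.append(theta)
    return np.array(chain)

data = rng.normal(0.8, 1.0, size=30)
chain = exchange(data)
print(chain[1000:].mean())  # near the sample mean of the data
```

The paper's contribution is to spend the same exact-sampling budget more effectively, raising the acceptance probability of these expensive updates.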

Posterior sampling with improved efficiency

Storage and Retrieval for Image and Video Databases, 2000

The Markov Chain Monte Carlo (MCMC) technique provides a means to generate a random sequence of model realizations that sample the posterior probability distribution of a Bayesian analysis. That sequence may be used to make inferences about the model uncertainties that derive from measurement uncertainties. This paper presents an approach to improving the efficiency of the Metropolis approach to MCMC.

Markov chain Monte Carlo in approximate Dirichlet and beta two-parameter process hierarchical models

Biometrika, 2000

We present some easy-to-construct random probability measures which approximate the Dirichlet process and an extension which we will call the beta two-parameter process. The nature of these constructions makes it simple to implement Markov chain Monte Carlo algorithms for fitting nonparametric hierarchical models and mixtures of nonparametric hierarchical models. For the Dirichlet process, we consider a truncation approximation as well as a weak limit approximation based on a mixture of Dirichlet processes. The same type of truncation approximation can also be applied to the beta two-parameter process. Both methods lead to posteriors which can be fitted using Markov chain Monte Carlo algorithms that take advantage of blocked coordinate updates. These algorithms promote rapid mixing of the Markov chain and can be readily applied to normal mean mixture models and to density estimation problems. We prefer the truncation approximations, since a simple device for monitoring the adequacy of the approximation can be easily computed from the output of the Gibbs sampler. Furthermore, for the Dirichlet process, the truncation approximation offers an exponentially higher degree of accuracy over the weak limit approximation for the same computational effort. We also find that a certain beta two-parameter process may be suitable for finite mixture modelling because the distinct number of sampled values from this process tends to match closely the number of components of the underlying mixture distribution.
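The truncation approximation the authors prefer is simple enough to sketch directly: a finite stick-breaking construction in which the last stick is forced to absorb the remaining mass, so the weights sum to one exactly. The function name `truncated_dp` and the choice of base measure are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

def truncated_dp(alpha, K, base_sampler):
    """Truncated stick-breaking approximation to a Dirichlet process:
    K-1 Beta(1, alpha) sticks, with the final weight absorbing the rest."""
    v = rng.beta(1.0, alpha, K - 1)
    v = np.append(v, 1.0)                 # truncation: V_K = 1 forces sum-to-one
    log_rem = np.concatenate([[0.0], np.cumsum(np.log1p(-v[:-1]))])
    w = v * np.exp(log_rem)               # w_k = v_k * prod_{j<k} (1 - v_j)
    atoms = base_sampler(K)               # iid draws from the base measure G0
    return w, atoms

# Example with a standard normal base measure as G0 (an assumption for the demo).
w, atoms = truncated_dp(alpha=2.0, K=50,
                        base_sampler=lambda k: rng.standard_normal(k))
print(w.sum())  # sums to 1 by construction
```

Because the random measure is finite-dimensional, the blocked Gibbs updates described in the abstract can update all weights and atoms jointly, which is what promotes the rapid mixing the authors report.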

An approach for improving the sampling efficiency in the Bayesian calibration of computationally expensive simulation models

Water Resources Research, 2009

In recent years, interest in the Bayesian approach for the calibration of hydrological and environmental simulation (HES) models has been growing. To extract useful information on unknown parameters produced in a Bayesian calibration, it is often necessary to rely on samples drawn from the posterior distribution. Sampling a posterior distribution requires a large number of evaluations of the simulation model, and the total computational costs could be prohibitively high when the simulation model is computationally expensive. A new computing strategy is proposed in this paper to alleviate this computational difficulty by making better use of the information generated in a costly run of the HES model by using multiple evaluations of the posterior density in the less computationally expensive subspace of error model parameters. A multiple-try Markov chain Monte Carlo (MCMC) algorithm is designed to implement this idea and is benchmarked with the Metropolis-Hastings algorithm, a basic recipe for MCMC sampling. The results show that the proposed strategy has potential for improving the computational efficiency of posterior sampling and easing its implementation in the Bayesian calibration of computationally expensive HES models.
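The multiple-try Metropolis mechanism underlying the proposed strategy can be sketched on a cheap stand-in target; this toy is not the paper's HES calibration scheme, and the bimodal density, the function name `mtm`, and the tuning values are all invented for illustration. Several candidates are scored per iteration, one is selected in proportion to its weight, and a matching reverse-candidate set preserves detailed balance.

```python
import numpy as np

rng = np.random.default_rng(5)

def log_post(x):
    # Stand-in for an expensive posterior: a bimodal mixture with modes at +/-2.
    return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

def mtm(n_iter=5000, k=5, step=2.0):
    """Multiple-try Metropolis with symmetric Gaussian proposals: k candidate
    evaluations per iteration buy a better-directed move."""
    x = 0.0
    chain = []
    for _ in range(n_iter):
        ys = x + step * rng.standard_normal(k)        # forward candidates
        wy = np.exp(log_post(ys) - log_post(ys).max())
        y = ys[rng.choice(k, p=wy / wy.sum())]        # select one by weight
        xs = y + step * rng.standard_normal(k - 1)    # reverse candidates
        xs = np.append(xs, x)                         # current state included
        num = np.sum(np.exp(log_post(ys)))
        den = np.sum(np.exp(log_post(xs)))
        if rng.random() < min(1.0, num / den):        # MTM acceptance ratio
            x = y
        chain.append(x)
    return np.array(chain)

chain = mtm()
print(np.mean(np.abs(chain[500:])))  # mass concentrates near the two modes
```

The paper's refinement is to make the extra tries cheap: the multiple evaluations are confined to the inexpensive subspace of error-model parameters, so each costly HES run is exploited several times.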