Coupling and Ergodicity of Adaptive Markov Chain Monte Carlo Algorithms | Journal of Applied Probability | Cambridge Core

Abstract


We consider basic ergodicity properties of adaptive Markov chain Monte Carlo algorithms under minimal assumptions, using coupling constructions. We prove convergence in distribution and a weak law of large numbers. We also give counterexamples to demonstrate that the assumptions we make are not redundant.
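To make the setting concrete, here is a minimal, hypothetical sketch (not taken from the paper) of the kind of algorithm the abstract refers to: a one-dimensional random-walk Metropolis sampler whose proposal scale adapts via a Robbins–Monro step (cf. Robbins and Monro (1951); Haario et al. (2001)). The decaying step size illustrates the "diminishing adaptation" idea under which ergodicity results of this type are typically proved; the function names and the 0.44 target acceptance rate are illustrative assumptions, not the paper's construction.

```python
import math
import random

def adaptive_rwm(log_target, x0, n_iter, seed=0):
    """Random-walk Metropolis with an adapting proposal scale (illustrative).

    After each step, the log proposal scale is nudged toward a ~0.44
    acceptance rate by a Robbins-Monro update whose step size decays,
    so the amount of adaptation vanishes as n grows.
    """
    rng = random.Random(seed)
    x, log_sigma = x0, 0.0
    samples = []
    for n in range(1, n_iter + 1):
        # Gaussian random-walk proposal with current scale exp(log_sigma).
        y = x + math.exp(log_sigma) * rng.gauss(0.0, 1.0)
        accept_prob = min(1.0, math.exp(log_target(y) - log_target(x)))
        if rng.random() < accept_prob:
            x = y
        # Diminishing adaptation: step size n^{-0.6} -> 0 as n -> infinity.
        log_sigma += n ** -0.6 * (accept_prob - 0.44)
        samples.append(x)
    return samples

# Example: target a standard normal (log density up to a constant).
samples = adaptive_rwm(lambda x: -0.5 * x * x, x0=0.0, n_iter=20000)
```

The weak law of large numbers proved in the paper concerns precisely such ergodic averages as `sum(samples) / len(samples)` for adaptive chains of this general kind.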

References

Andrieu, C. and Moulines, E. (2006). On the ergodicity properties of some adaptive Markov chain Monte Carlo algorithms. Ann. Appl. Prob. 16, 1462–1505.

Andrieu, C. and Robert, C. P. (2002). Controlled MCMC for optimal sampling. Preprint.

Atchadé, Y. F. and Rosenthal, J. S. (2005). On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11, 815–828.

Baxendale, P. H. (2005). Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Prob. 15, 700–738.

Bédard, M. (2006). On the robustness of optimal scaling for Metropolis–Hastings algorithms. University of Toronto.

Brockwell, A. E. and Kadane, J. B. (2005). Identification of regeneration times in MCMC simulation, with application to adaptive schemes. J. Comput. Graph. Statist. 14, 436–458.

Fort, G. and Moulines, E. (2003). Polynomial ergodicity of Markov transition kernels. Stoch. Process. Appl. 103, 57–99.

Fristedt, B. and Gray, L. (1997). A Modern Approach to Probability Theory. Birkhäuser, Boston, MA.

Gilks, W. R., Roberts, G. O. and Sahu, S. K. (1998). Adaptive Markov chain Monte Carlo. J. Amer. Statist. Assoc. 93, 1045–1054.

Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7, 223–242.

Häggström, O. (2001). A note on disagreement percolation. Random Structures Algorithms 18, 267–278.

Jarner, S. F. and Roberts, G. O. (2002). Polynomial convergence rates of Markov chains. Ann. Appl. Prob. 12, 224–247.

Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.

Meyn, S. P. and Tweedie, R. L. (1994). Computable bounds for convergence rates of Markov chains. Ann. Appl. Prob. 4, 981–1011.

Pasarica, C. and Gelman, A. (2003). Adaptively scaling the Metropolis algorithm using the average squared jumped distance. Preprint.

Pemantle, R. and Rosenthal, J. S. (1999). Moment conditions for a sequence with negative drift to be uniformly bounded in Lr. Stoch. Process. Appl. 82, 143–155.

Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statist. 22, 400–407.

Roberts, G. O. and Rosenthal, J. S. (2001). Optimal scaling for various Metropolis–Hastings algorithms. Statist. Sci. 16, 351–367.

Roberts, G. O. and Rosenthal, J. S. (2002). One-shot coupling for certain stochastic recursive sequences. Stoch. Process. Appl. 99, 195–208.

Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Prob. Surveys 1, 20–71.

Roberts, G. O. and Tweedie, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stoch. Process. Appl. 80, 211–229. (Correction: 91 (2001), 337–338.)

Roberts, G. O., Gelman, A. and Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Prob. 7, 110–120.

Roberts, G. O., Rosenthal, J. S. and Schwartz, P. O. (1998). Convergence properties of perturbed Markov chains. J. Appl. Prob. 35, 1–11.

Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90, 558–566.

Rosenthal, J. S. (1997). Faithful couplings of Markov chains: now equals forever. Adv. Appl. Math. 18, 372–381.

Rosenthal, J. S. (2000). A First Look at Rigorous Probability Theory. World Scientific, Singapore.

Rosenthal, J. S. (2002). Quantitative convergence rates of Markov chains: a simple account. Electron. Commun. Prob. 7, 123–128.

Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22, 1701–1762.