Erratum to: Extremal indices, geometric ergodicity of Markov chains, and MCMC

Extremal indices, geometric ergodicity of Markov chains, and MCMC

Extremes, 2006

We investigate the connections between extremal indices on the one hand and stability of Markov chains on the other hand. Both theories relate to the tail behaviour of stochastic processes, and we find a close link between the extremal index and geometric ergodicity. Our results are illustrated throughout with examples from simple MCMC chains.

Computing the extremal index of special Markov chains and queues

Stochastic Processes and their Applications, 1996

We consider the extremal behaviour of Markov chains. Rootzén [18] gives conditions for stationary, regenerative sequences so that the normalized process of level exceedances converges in distribution to a compound Poisson process. He also provides expressions for the extremal index and the compounding probabilities, though in general these are not easy to evaluate. We show how in a number of instances Markov chains can be coupled with two random walks which, in terms of extremal behaviour, bound the chain from above and below. Using a limiting argument, it is shown that the lower bound converges to the upper one, yielding the extremal index and the compounding probabilities of the Markov chain. Fluctuation properties of random walks can be characterised accurately using Grübel's FFT technique [9]. His algorithm for the stationary distribution of a G/G/1 queue is adapted for the extremal index; it yields approximate, but very accurate, results. Compounding probabilities are calculated explicitly in a similar fashion. The technique is applied to: (i) the G/G/1 queue; (ii) G/M/c queues; (iii) autoregressive conditional heteroscedastic (ARCH) processes, whose extremal behaviour de Haan et al. [6] characterized using simulation.
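As a rough complement to the FFT-based method described above (which the sketch below does not reproduce), a minimal simulation can illustrate the quantity at stake: the Lindley recursion simulates an M/M/1 waiting-time chain and the standard blocks estimator is applied to its exceedances. The rates, threshold and block length are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative sketch only (not the FFT algorithm from the paper): simulate
# the M/M/1 waiting-time chain via the Lindley recursion and apply the
# standard blocks estimator of the extremal index.

rng = np.random.default_rng(0)
lam, mu, n = 0.8, 1.0, 200_000            # arrival rate, service rate, sample size

A = rng.exponential(1.0 / lam, n)         # interarrival times
S = rng.exponential(1.0 / mu, n)          # service times

W = np.empty(n)
W[0] = 0.0
for i in range(1, n):                     # Lindley: W_i = max(W_{i-1} + S_{i-1} - A_i, 0)
    W[i] = max(W[i - 1] + S[i - 1] - A[i], 0.0)

u = np.quantile(W, 0.999)                 # high threshold
block = 1000                              # block length (200 blocks in total)
exceed = W > u

n_exceed = exceed.sum()                   # total number of exceedances
blocks_hit = sum(exceed[i:i + block].any() for i in range(0, n, block))

theta_hat = blocks_hit / n_exceed         # blocks estimator of the extremal index
print(f"estimated extremal index: {theta_hat:.3f}")
```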

Exponential bounds and stopping rules for MCMC and general Markov chains

2006

We develop explicit, general bounds for the probability that the empirical sample averages of a function of a Markov chain on a general alphabet will exceed the steady-state mean of that function by a given amount. Our bounds combine simple information-theoretic ideas together with techniques from optimization and some fairly elementary tools from analysis. In one direction, motivated by central problems in simulation, we develop bounds for the general class of "geometrically ergodic" Markov chains. These bounds take a form that is particularly suited to simulation problems, and they naturally lead to a new class of sampling criteria. These are illustrated by several examples. In another direction, we obtain a new bound for the important special class of Doeblin chains; this bound is optimal, in the sense that in the special case of independent and identically distributed random variables it essentially reduces to the classical Hoeffding bound.
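The i.i.d. special case mentioned at the end of the abstract can be checked numerically. The minimal sketch below compares a Monte Carlo estimate of the deviation probability with the classical Hoeffding bound $\exp(-2n\varepsilon^2)$ for variables in $[0,1]$; the sample size, deviation level and distribution are arbitrary choices, not anything from the paper.

```python
import numpy as np

# Minimal numerical check of the classical Hoeffding bound, the i.i.d.
# special case the abstract refers to: for i.i.d. X_i in [0, 1],
#   P( (1/n) * sum X_i - E[X] >= eps ) <= exp(-2 * n * eps**2).

rng = np.random.default_rng(1)
n, eps, trials = 200, 0.05, 20_000

X = rng.uniform(0.0, 1.0, size=(trials, n))     # i.i.d. Uniform(0,1), mean 1/2
deviations = X.mean(axis=1) - 0.5

empirical = np.mean(deviations >= eps)          # Monte Carlo deviation probability
hoeffding = np.exp(-2 * n * eps**2)             # Hoeffding's upper bound

print(f"empirical P(deviation >= {eps}): {empirical:.4f}")
print(f"Hoeffding bound:                 {hoeffding:.4f}")
```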

Strongly ergodic Markov chains and rates of convergence using spectral conditions

Stochastic Processes and their Applications, 1978

For finite Markov chains the eigenvalues of $P$ can be used to characterize the chain and also determine the geometric rate at which $P^n$ converges to $Q$ in case $P$ is ergodic. For infinite Markov chains the spectrum of $P$ plays the analogous role. It follows from Theorem 3.1 that $\|P^n - Q\| \le C\beta^n$ if and only if $P$ is strongly ergodic. The best possible rate for $\beta$ is the spectral radius of $P - Q$, which in this case is the same as $\sup\{|\lambda| : \lambda \in \sigma(P), \lambda \ne 1\}$. The question of when this best rate equals $\delta(P)$ is considered for both discrete and continuous time chains. Two characterizations of strong ergodicity are given using spectral properties of $P - Q$ (Theorem 3.5) and spectral properties of a submatrix of $P$ (Theorem 3.16).
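For finite chains the rate in the theorem can be seen directly. The sketch below, a hypothetical three-state example rather than anything from the paper, computes the spectral radius of $P - Q$ and compares it with the observed decay of $\|P^n - Q\|$.

```python
import numpy as np

# Hypothetical three-state example (not from the paper): the decay of
# ||P^n - Q|| is governed by the spectral radius of P - Q, where Q is the
# limit matrix whose rows all equal the stationary distribution pi.

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.isclose(evals, 1.0)][:, 0])
pi /= pi.sum()                                # stationary distribution
Q = np.tile(pi, (3, 1))                       # every row of Q equals pi

rho = max(abs(np.linalg.eigvals(P - Q)))      # spectral radius of P - Q
print(f"spectral radius of P - Q: {rho:.4f}")

Pn = np.eye(3)
for n in range(1, 6):
    Pn = Pn @ P                               # note (P - Q)^n = P^n - Q here
    err = np.abs(Pn - Q).sum(axis=1).max()    # induced infinity norm
    print(f"n={n}: ||P^n - Q|| = {err:.5f}, rho^n = {rho**n:.5f}")
```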

On the Geometric Ergodicity of Two-Variable Gibbs Samplers

2016

A Markov chain is geometrically ergodic if it converges to its invariant distribution at a geometric rate in total variation norm. We study geometric ergodicity of deterministic and random scan versions of the two-variable Gibbs sampler. We give a sufficient condition which simultaneously guarantees both versions are geometrically ergodic. We also develop a method for simultaneously establishing that both versions are subgeometrically ergodic. These general results allow us to characterize the convergence rate of two-variable Gibbs samplers in a particular family of discrete bivariate distributions. Suppose the current state of the deterministic-scan chain is $(X_n, Y_n) = (x, y)$; the next state is obtained as follows. 1. Draw $X_{n+1} \sim \varpi_{X|Y}(\cdot \mid y)$, and call the observed value $x'$. 2. Draw $Y_{n+1} \sim \varpi_{Y|X}(\cdot \mid x')$. An alternative TGS is the random scan Gibbs sampler (RGS). Fix $p \in (0, 1)$ and suppose the current state of the RGS chain is $(X_n, Y_n) = (x, y)$. Then the next state, $(X_{n+1}, Y_{n+1})$, is obtained by updating the $X$-coordinate with probability $p$ and the $Y$-coordinate with probability $1 - p$.
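A minimal sketch of the two samplers, assuming a toy bivariate normal target with correlation $r$ (a hypothetical example, not one of the paper's discrete distributions), where both full conditionals are explicit:

```python
import numpy as np

# Toy target: bivariate normal with correlation r, so the full conditionals
# are  X | Y=y ~ N(r*y, 1-r^2)  and  Y | X=x ~ N(r*x, 1-r^2).

rng = np.random.default_rng(2)
r = 0.9
sd = np.sqrt(1 - r**2)

def dgs_step(x, y):
    """Deterministic scan: draw X_{n+1} given y, then Y_{n+1} given the new x'."""
    x_new = rng.normal(r * y, sd)
    y_new = rng.normal(r * x_new, sd)
    return x_new, y_new

def rgs_step(x, y, p=0.5):
    """Random scan: with probability p update the X-coordinate, else the Y-coordinate."""
    if rng.random() < p:
        return rng.normal(r * y, sd), y
    return x, rng.normal(r * x, sd)

for step, name in [(dgs_step, "DGS"), (rgs_step, "RGS")]:
    x, y = 5.0, 5.0                      # start far from the mode
    for _ in range(10_000):
        x, y = step(x, y)
    print(f"{name} final state: ({x:+.2f}, {y:+.2f})")
```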

On the maximum entropy principle for uniformly ergodic Markov chains

Stochastic Processes and their Applications, 1989

For strongly ergodic discrete time Markov chains we discuss the possible limits as $n \to \infty$ of probability measures on the path space of the form $\exp(nH(L_n))\,\mathrm{d}P / Z_n$. Here $L_n$ is the empirical measure (or sojourn measure) of the process, $H$ is a real-valued function (possibly attaining $-\infty$) on the space of probability measures on the state space of the chain, and $Z_n$ is the appropriate norming constant. The class of these transformations also includes conditional laws given $L_n$ belongs to some set. The possible limit laws are mixtures of Markov chains minimizing a certain free energy. The method of proof strongly relies on large deviation techniques.

A new proof of convergence of MCMC via the ergodic theorem

Statistics & Probability Letters, 2011

A key result underlying the theory of MCMC is that any η-irreducible Markov chain having a transition density with respect to η and possessing a stationary distribution π is automatically positive Harris recurrent. This paper provides a short self-contained proof of this fact using the ergodic theorem in its standard form as the most advanced tool.

A Sufficiency Property Arising from the Characterization of Extremes of Markov Chains

Bernoulli, 2000

At extreme levels, it is known that for a particular choice of marginal distribution, transitions of a Markov chain behave like a random walk. For a broad class of Markov chains, we give a characterization for the step length density of the limiting random walk, which leads to an interesting sufficiency property. This representation also leads us to propose a new technique for kernel density estimation for this class of models.
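The random-walk behaviour at extreme levels is exact for one simple chain, the Lindley recursion, which makes for an easy illustration. The sketch below recovers the step-length density from high-level transitions using an off-the-shelf Gaussian KDE; this is not the estimation technique proposed in the paper, and the chain and threshold are arbitrary choices.

```python
import numpy as np
from scipy.stats import gaussian_kde

# The Lindley chain X_{t+1} = max(X_t + S_t, 0) moves exactly like a random
# walk with step S_t whenever X_t is high, so the step-length density can be
# read off from transitions above a high threshold.

rng = np.random.default_rng(4)
n = 200_000
S = rng.normal(-0.3, 1.0, n)                    # steps with negative drift

X = np.empty(n)
X[0] = 0.0
for t in range(n - 1):
    X[t + 1] = max(X[t] + S[t], 0.0)

u = np.quantile(X, 0.95)                        # high threshold
mask = X[:-1] > u
steps_at_height = X[1:][mask] - X[:-1][mask]    # observed step lengths

kde = gaussian_kde(steps_at_height)             # ordinary Gaussian KDE
for g in np.linspace(-2, 2, 5):
    true = np.exp(-(g + 0.3) ** 2 / 2) / np.sqrt(2 * np.pi)
    print(f"step {g:+.1f}: kde {kde(g)[0]:.3f}, true N(-0.3,1) density {true:.3f}")
```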

Sharp bounds for the tail of functionals of Markov chains

Theory of Probability and its Applications, 2013

This paper is devoted to establishing sharp bounds for deviation probabilities of partial sums $\sum_{i=1}^n f(X_i)$, where $X = (X_n)_{n \in \mathbb{N}}$ is a positive recurrent Markov chain and $f$ is a real-valued function defined on its state space. Combining the regenerative method with the Esscher transformation, these estimates are shown in particular to generalize probability inequalities proved in the i.i.d. case to the Markovian setting for (not necessarily uniformly) geometrically ergodic chains.
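The regenerative method the abstract refers to can be illustrated on a toy chain with an atom. In the sketch below (an illustration, not the paper's bounds), visits of a reflected random walk to state 0 split the path into i.i.d. cycles, whose block sums are then amenable to i.i.d.-type Esscher/Chernoff arguments.

```python
import numpy as np

# Regenerative splitting on a toy chain: state 0 is an atom of the reflected
# random walk below, so the excursions between successive visits to 0 are
# i.i.d., and so are the sums of f over those cycles.

rng = np.random.default_rng(3)
n = 100_000
steps = rng.choice([-1, 1], size=n, p=[0.6, 0.4])   # negative drift -> recurrent

X = np.empty(n, dtype=int)
X[0] = 0
for i in range(1, n):
    X[i] = max(X[i - 1] + steps[i], 0)              # reflected at 0

f = lambda x: x                                     # functional of interest
regen = np.flatnonzero(X == 0)                      # regeneration times (atom visits)
blocks = [f(X[a:b]).sum() for a, b in zip(regen[:-1], regen[1:])]

print(f"{len(blocks)} i.i.d. cycles; mean block sum = {np.mean(blocks):.3f}, "
      f"sd = {np.std(blocks):.3f}")
```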

Computable Strongly Ergodic Rates of Convergence for Continuous-Time Markov Chains

The ANZIAM Journal, 2008

In this paper, we investigate computable lower bounds for the best strongly ergodic rate of convergence of the transient probability distribution to the stationary distribution for stochastically monotone continuous-time Markov chains and reversible continuous-time Markov chains, using a drift function and the expectation of the first hitting time on some state. We apply these results to birth–death processes, branching processes and population processes.
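For a finite birth–death chain the key ingredient, the expectation of the first hitting time on some state, is directly computable. The sketch below (a generic illustration with hypothetical rates, not the paper's bounds) solves the standard linear system $\tilde{Q}u = -\mathbf{1}$ for $u_i = E_i[\tau_0]$ on a truncated state space.

```python
import numpy as np

# Expected hitting times on state 0 for a truncated birth-death CTMC on
# {0,...,N}: u_i = E_i[tau_0] solves Q~ u = -1, where Q~ is the generator
# restricted to the states {1,...,N} (the column for state 0 is dropped
# because u_0 = 0).

N = 50
lam = np.full(N + 1, 1.0)     # birth rates lambda_i
mu = np.full(N + 1, 2.0)      # death rates mu_i (mu > lambda: drift toward 0)

Q = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    if i < N:
        Q[i, i + 1] = lam[i]
    if i > 0:
        Q[i, i - 1] = mu[i]
    Q[i, i] = -Q[i].sum()

Qr = Q[1:, 1:]                                # generator restricted to {1,...,N}
u = np.linalg.solve(Qr, -np.ones(N))          # u[i-1] = E_i[tau_0]
print(f"E_1[tau_0] = {u[0]:.4f}, E_10[tau_0] = {u[9]:.4f}")
```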