Uniform Markov renewal theory and ruin probabilities in Markov random walks

Asymptotic expansions in multidimensional Markov renewal theory and first passage times for Markov random walks

Advances in Applied Probability, 2001

We prove a d-dimensional renewal theorem, with an estimate on the rate of convergence, for Markov random walks. This result is applied to a variety of boundary crossing problems for a Markov random walk (Xn, Sn), n ≥ 0, in which Xn takes values in a general state space and Sn takes values in ℝd. In particular, for the case d = 1, we use this result to derive an asymptotic formula for the variance of the first passage time when Sn exceeds a high threshold b, generalizing Smith's classical formula in the case of i.i.d. positive increments for Sn. For d > 1, we apply this result to derive an asymptotic expansion of the distribution of (XT, ST), where T = inf{n : Sn,1 > b} and Sn,1 denotes the first component of Sn.
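The first passage time T and its variance are easy to explore numerically. The following is a minimal Monte Carlo sketch, not taken from the paper: a symmetric two-state driving chain with state-dependent Gaussian increments (all parameter values are illustrative assumptions) is used to estimate the mean and variance of T = inf{n : Sn > b} and the overshoot ST − b.

```python
import random

def simulate_T(b, p_stay=0.8, means=(0.5, 1.5), sigma=1.0, rng=random):
    """Simulate one path of a Markov random walk (X_n, S_n) and return
    the first passage time T = inf{n : S_n > b} and the overshoot S_T - b.
    X_n is a two-state chain (stay probability p_stay); given X_n = i,
    the increment of S_n is Gaussian with mean means[i] and s.d. sigma."""
    x, s, n = 0, 0.0, 0
    while s <= b:
        s += rng.gauss(means[x], sigma)
        n += 1
        if rng.random() > p_stay:
            x = 1 - x  # driving chain switches state
    return n, s - b

# Monte Carlo estimate of the mean and variance of T for threshold b
random.seed(0)
b = 50.0
samples = [simulate_T(b)[0] for _ in range(2000)]
mean_T = sum(samples) / len(samples)
var_T = sum((t - mean_T) ** 2 for t in samples) / (len(samples) - 1)
```

With the stationary drift here equal to 1.0, mean_T comes out close to b, as renewal-theoretic first-order asymptotics predict.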

Asymptotic expansions on moments of the first ladder height in Markov random walks with small drift

Advances in Applied Probability, 2007

Let {(Xn, Sn), n ≥ 0} be a Markov random walk in which Xn takes values in a general state space and Sn takes values in the real line ℝ. In this paper we present some results that are useful in the study of asymptotic approximations of boundary crossing problems for Markov random walks. The main results are asymptotic expansions for the moments of the first ladder height in Markov random walks with small positive drift. To establish the asymptotic expansions we study a uniform Markov renewal theorem, which relates to the rate of convergence for the distribution of the overshoot, and present an analysis of the covariance between the first passage time and the overshoot.
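The first ladder quantities in this abstract can be sampled directly. Below is a small sketch under simplifying assumptions of my own (a plain random walk with i.i.d. Gaussian increments stands in for the Markov-modulated case, and the drift value is illustrative): it draws the first strict ascending ladder epoch τ+ = inf{n ≥ 1 : Sn > 0} and the ladder height Sτ+ for a small positive drift.

```python
import random

def first_ladder(drift, sigma=1.0, rng=random):
    """First strict ascending ladder epoch and ladder height of a random
    walk with i.i.d. Gaussian(drift, sigma) increments:
    tau_+ = inf{n >= 1 : S_n > 0}, height = S_{tau_+}."""
    s, n = 0.0, 0
    while True:
        n += 1
        s += rng.gauss(drift, sigma)
        if s > 0:
            return n, s

# sample ladder epochs and heights for a small positive drift
random.seed(1)
pairs = [first_ladder(0.1) for _ in range(1000)]
mean_height = sum(h for _, h in pairs) / len(pairs)
mean_epoch = sum(n for n, _ in pairs) / len(pairs)
```

Moment estimates like mean_height are exactly the quantities whose small-drift expansions the paper derives (in the more general Markov-modulated setting).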

Saddlepoint approximations and nonlinear boundary crossing probabilities of Markov random walks

The Annals of Applied Probability, 2003

Saddlepoint approximations are developed for Markov random walks S n and are used to evaluate the probability that (j − i)g((S j − S i)/(j − i)) exceeds a threshold value for certain sets of (i, j). The special case g(x) = x reduces to the usual scan statistic in change-point detection problems, and many generalized likelihood ratio detection schemes are also of this form with suitably chosen g. We make use of this boundary crossing probability to derive both the asymptotic Gumbel-type distribution of scan statistics and the asymptotic exponential distribution of the waiting time to false alarm in sequential change-point detection. Combining these saddlepoint approximations with truncation arguments and geometric integration theory also yields asymptotic formulas for other nonlinear boundary crossing probabilities of Markov random walks satisfying certain minorization conditions.
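For the special case g(x) = x named in the abstract, the statistic (j − i)g((Sj − Si)/(j − i)) is just the window sum Sj − Si, and the scan statistic is its maximum over admissible windows. A minimal sketch (window-length bounds are illustrative parameters, not from the paper):

```python
def scan_statistic(increments, min_len=1, max_len=None):
    """Scan statistic for g(x) = x: the maximum over windows i < j
    (with min_len <= j - i <= max_len) of
    (j - i) * g((S_j - S_i)/(j - i)) = S_j - S_i,
    i.e. the maximal window sum of the walk."""
    n = len(increments)
    if max_len is None:
        max_len = n
    S = [0.0]
    for z in increments:          # prefix sums S_0, ..., S_n
        S.append(S[-1] + z)
    best = float("-inf")
    for i in range(n):
        for j in range(i + min_len, min(i + max_len, n) + 1):
            best = max(best, S[j] - S[i])
    return best
```

The boundary crossing probability studied in the paper is P(scan_statistic ≥ threshold); other detection schemes replace the maximal window sum by a nonlinear g.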

The Markov Renewal Theorem and Related Results

We give a new probabilistic proof of the Markov renewal theorem for Markov random walks with positive drift and Harris recurrent driving chain. It forms an alternative to the one recently given in [1] and follows more closely the probabilistic proofs provided for Blackwell's theorem in the literature by making use of ladder variables, the stationary Markov delay distribution and a coupling argument. A major advantage is that the arguments can be refined to yield convergence rate results.

First Passage with Restart in Discrete Time: with applications to biased random walks on the half-line

2021

In recent years, it has been well established that adding a restart mechanism can alter the first passage statistics of a stochastic process in useful and interesting ways. Various restart mechanisms have been investigated; here we derive a probability generating function for a discrete-time first passage process under restart and use it to examine two examples, including a biased random walk on the non-negative integers.
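The biased-walk example can be simulated directly. The sketch below is an illustration under my own conventions, not the paper's generating-function derivation: a walk on {0, 1, 2, ...} biased away from the origin, with a geometric restart that resets the walker to its starting point with probability r at each step; all parameter values are assumptions.

```python
import random

def fpt_with_restart(x0=5, p=0.6, r=0.05, rng=random):
    """One first-passage time to the origin for a biased walk on
    {0, 1, 2, ...} started at x0.  At each time step, with probability r
    the walker is reset to x0 (the restart consumes the step); otherwise
    it moves +1 with probability p and -1 with probability 1 - p."""
    x, t = x0, 0
    while x > 0:
        t += 1
        if rng.random() < r:
            x = x0          # restart event
        elif rng.random() < p:
            x += 1          # biased step away from the origin
        else:
            x -= 1
    return t

# without restart this upward-biased walk reaches 0 only with
# probability ((1-p)/p)^x0 < 1; restart makes the passage time finite
random.seed(2)
times = [fpt_with_restart() for _ in range(300)]
mean_fpt = sum(times) / len(times)
```

The probability generating function of this passage time is exactly the kind of object the paper computes in closed form.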

Recurrence Theorems for Markov Random Walks

Let (Mn, Sn), n ≥ 0, be a Markov random walk whose driving chain (Mn), n ≥ 0, with general state space is ergodic with unique stationary distribution ξ. Provided that n−1Sn → 0 in probability under Pξ, it is shown that the recurrence set of (Sn − γ(M0) + γ(Mn)), n ≥ 0, forms a closed subgroup of ℝ depending on the lattice-type of (Mn, Sn), n ≥ 0. The so-called shift function γ is bounded and appears in that lattice-type condition. The recurrence set of (Sn), n ≥ 0, itself is also given but may look more complicated depending on γ. The results extend the classical recurrence theorem for random walks with i.i.d. increments and further sharpen results by Berbee, Dekking and others on the recurrence behavior of random walks with stationary increments. AMS 1991 Subject Classifications: 60J05, 60J15, 60K05, 60K15.

Cutoff for samples of Markov chains

ESAIM: Probability and Statistics, 1999

We study the convergence to equilibrium of n-samples of independent Markov chains in discrete and continuous time. They are defined as Markov chains on the n-fold Cartesian product of the initial state space with itself, and they converge to the direct product of n copies of the initial stationary distribution. Sharp estimates for the convergence speed are given in terms of the spectrum of the initial chain. A cutoff phenomenon occurs in the sense that as n tends to infinity, the total variation distance between the distribution of the chain and the asymptotic distribution tends to 1 or 0 at all times. As an application, an algorithm is proposed for producing an n-sample of the asymptotic distribution of the initial chain, with an explicit stopping test.
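For a two-state chain the cutoff can be computed exactly. The sketch below uses illustrative transition probabilities and is not the paper's algorithm: when all n copies start in state 0, both the product law at time t and the product stationary law depend only on the number of copies currently in state 0, so the total variation distance reduces to a distance between two binomial distributions.

```python
from math import comb

def tv_n_sample(n, t, a=0.3, b=0.2):
    """Exact total-variation distance to stationarity at time t for n
    independent copies of the two-state chain with P(0->1) = a and
    P(1->0) = b, all copies started in state 0."""
    pi0 = b / (a + b)                    # stationary mass of state 0
    lam = 1.0 - a - b                    # second eigenvalue of the chain
    q = pi0 + (1.0 - pi0) * lam ** t     # P(one chain in state 0 at time t)
    # both product laws assign mass depending only on the count of copies
    # in state 0, so TV reduces to TV(Binomial(n, q), Binomial(n, pi0))
    return 0.5 * sum(
        comb(n, k) * abs(q ** k * (1 - q) ** (n - k)
                         - pi0 ** k * (1 - pi0) ** (n - k))
        for k in range(n + 1)
    )
```

Evaluating tv_n_sample(1000, t) for increasing t shows the cutoff shape: the distance stays near 1 for small t and drops to near 0 shortly after, over a window that is narrow relative to the cutoff time.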

Couplings of Markov chains by randomized stopping times

Probability Theory and Related Fields, 1987

We consider a Markov chain on (E, N) generated by a Markov kernel P. We study the question of when, for two initial distributions ν and μ, one can find randomized stopping times T of (νXn), n ∈ ℕ, and S of (μXn), n ∈ ℕ, such that the distribution of νXT equals that of μXS and T, S are both finite.

On the Moments of Markov Renewal Processes

Advances in Applied Probability, 1969

Recently Kshirsagar and Gupta [5] obtained expressions for the asymptotic values of the first two moments of a Markov renewal process. The method they employed involved formal inversion of matrices of Laplace-Stieltjes transforms.