Couplings of Markov chains by randomized stopping times
Related papers
Probability Theory and Related Fields, 1987
We consider a Markov chain on a measurable space (E, ℰ) generated by a Markov kernel P. We study the question of when, for two initial distributions ν and μ, one can find randomized stopping times T of (X_n^ν)_{n∈ℕ} and S of (X_n^μ)_{n∈ℕ} such that the distribution of X_T^ν equals that of X_S^μ and T, S are both finite.
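As a toy illustration of the phenomenon (not the paper's construction): for a two-state chain one can exhibit a single randomized stopping time, independent of the trajectory, under which both initial distributions reach the same law. The kernel and the weights below are hand-picked for the example; a minimal sketch in Python:

```python
import numpy as np

# Two-state kernel with second eigenvalue 1 - a - b = -0.5 and stationary
# distribution pi = (1/2, 1/2).  All values are illustrative choices.
P = np.array([[0.25, 0.75],
              [0.75, 0.25]])
nu = np.array([1.0, 0.0])   # chain started in state 0
mu = np.array([0.0, 1.0])   # chain started in state 1

# Randomized stopping time independent of the chain: T = 0 w.p. 1/3, T = 1 w.p. 2/3.
# Then law(X_T) = sum_k P(T = k) * (init @ P^k).
w = {0: 1/3, 1: 2/3}

def law(init):
    return sum(p * init @ np.linalg.matrix_power(P, k) for k, p in w.items())

print(law(nu))  # [0.5 0.5]
print(law(mu))  # [0.5 0.5] -- the two laws coincide, so the pair couples nu and mu
```

The weights are chosen so that the mixture 1/3 + 2/3·(−0.5) of the second eigenvalue's powers vanishes, which kills the dependence on the initial distribution exactly rather than only asymptotically.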
Coupling and harmonic functions in the case of continuous time Markov processes
Stochastic Processes and their Applications, 1995
Consider two transient Markov processes (X_t^ν)_{t≥0} and (X_t^μ)_{t≥0} with the same transition semigroup and initial distributions ν and μ. The probability spaces supporting the processes are each also assumed to support an exponentially distributed random variable independent of the process. We show that there exist (randomized) stopping times S for (X_t^ν) and T for (X_t^μ) with a common final distribution.
On the Existence of Moments of Stopped Sums in Markov Renewal Theory
Probability and Mathematical Statistics
Let (M_n)_{n≥0} be an ergodic Markov chain on a general state space X with stationary distribution π and let g : X → [0, ∞) be a measurable function. Define S_0(g) := 0 and S_n(g) := g(M_1) + ... + g(M_n) for n ≥ 1. Given any stopping time T for (M_n)_{n≥0} and any initial distribution ν for (M_n)_{n≥0}, the purpose of this paper is to provide suitable conditions for the finiteness of E_ν S_T(g)^p for p > 1. A typical result states that E_ν S_T(g)^p ≤ C_1 (E_ν S_T(g^p) + E_ν T^p) + C_2 for suitable finite constants C_1, C_2. Our analysis is based to a large extent on martingale decompositions for S_n(g) and on drift conditions for the function g and the transition kernel P of the chain. Some of the results are stated under the stronger assumption that (M_n)_{n≥0} be positive Harris recurrent, in which case stopping times T which are regeneration epochs for the chain are of particular interest. The important special case where T = T(t) := inf{n ≥ 1 : S_n(g) > t} for t ≥ 0 is also treated.
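The level-crossing time T(t) is easy to simulate directly. The following sketch estimates E_ν S_T(g) and E_ν T(t) by Monte Carlo for an illustrative two-state chain; the kernel, the function g, and the level t are all invented for the example, not taken from the paper:

```python
import random

# Monte Carlo for the stopped sum S_T(g) with T = T(t) = inf{n >= 1 : S_n(g) > t}.
P = {0: [(0, 0.5), (1, 0.5)], 1: [(0, 0.3), (1, 0.7)]}  # transition lists (state, prob)
g = {0: 0.5, 1: 2.0}                                     # illustrative reward function
t = 10.0

def stopped_sum(start, rng):
    """Run the chain from `start` until S_n(g) > t; return (S_T(g), T(t))."""
    x, s, n = start, 0.0, 0
    while True:
        u, acc = rng.random(), 0.0   # one step of the chain
        for y, p in P[x]:
            acc += p
            if u <= acc:
                x = y
                break
        s += g[x]
        n += 1
        if s > t:
            return s, n

rng = random.Random(0)
samples = [stopped_sum(0, rng) for _ in range(10_000)]
mean_S = sum(s for s, _ in samples) / len(samples)
mean_T = sum(n for _, n in samples) / len(samples)
print(mean_S, mean_T)
```

Since g is bounded here by 2, the overshoot is at most 2, so S_T(g) always lands in (t, t + 2]; the paper's bounds concern how such moments behave in much greater generality.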
A zero-one law for Markov chains
Stochastics
We prove an analog of the classical Zero-One Law for both homogeneous and nonhomogeneous Markov chains (MC). Its almost precise formulation is simple: given any event A from the tail σ-algebra of the MC (Z_n), for large n, with probability near one, the trajectories of the MC are in states i where P(A | Z_n = i) is either near 0 or near 1. A similar statement holds for the entrance σ-algebra, when n tends to −∞. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by ℤ_− or ℤ in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion, we note an interesting dichotomy between two commonly used definitions of MCs.
2013
This paper is devoted to establishing sharp bounds for deviation probabilities of partial sums Σ_{i=1}^n f(X_i), where X = (X_n)_{n∈ℕ} is a positive recurrent Markov chain and f is a real-valued function defined on its state space. Combining the regenerative method with the Esscher transformation, these estimates are shown in particular to generalize probability inequalities proved in the i.i.d. case to the Markovian setting for (not necessarily uniformly) geometrically ergodic chains.
Hitting time and inverse problems for Markov chains
Journal of Applied Probability, 2008
Let W_n be a simple Markov chain on the integers. Suppose that X_n is a simple Markov chain on the integers whose transition probabilities coincide with those of W_n off a finite set. We prove that there is an M > 0 such that the Markov chain W_n and the joint distributions of first hitting time and first hitting place of X_n started at the origin for the sets {−M, M} and {−(M + 1), M + 1} algorithmically determine the transition probabilities of X_n.
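The "hitting data" the theorem refers to is a concrete, computable object. The sketch below computes the joint law of (first hitting time, first hitting place) of {−M, M} for a symmetric simple random walk started at 0, by making the boundary absorbing and tracking the mass newly absorbed at each step; the walk and M = 2 are illustrative choices, not the paper's setup:

```python
import numpy as np

# Joint law of (hitting time, hitting place) of {-M, M} for a simple random walk
# started at 0, via iteration of the kernel with an absorbing boundary.
M = 2
states = list(range(-M, M + 1))            # -2, -1, 0, 1, 2
idx = {s: i for i, s in enumerate(states)}
P = np.zeros((len(states), len(states)))
for s in states:
    if abs(s) == M:
        P[idx[s], idx[s]] = 1.0            # absorbing boundary
    else:
        P[idx[s], idx[s - 1]] = 0.5
        P[idx[s], idx[s + 1]] = 0.5

dist = np.zeros(len(states)); dist[idx[0]] = 1.0
joint = {}                                  # (n, place) -> probability
for n in range(1, 21):
    new = dist @ P
    for place in (-M, M):
        gain = new[idx[place]] - dist[idx[place]]  # mass newly absorbed at step n
        if gain > 1e-12:
            joint[(n, place)] = gain
    dist = new

print(joint[(2, 2)])   # P(hit {-2, 2} at time 2, at place 2) = 1/4
```

By symmetry the mass splits evenly between the two boundary points, and only even hitting times carry mass when starting from 0.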
Joint densities of hitting times for finite state Markov processes
TURKISH JOURNAL OF MATHEMATICS, 2018
For a finite state Markov process X and a finite collection {Γ_k, k ∈ K} of subsets of its state space, let τ_k be the first time the process visits the set Γ_k. In general, X may enter some of the Γ_k at the same time, and therefore the vector τ := (τ_k, k ∈ K) may put nonzero mass on lower-dimensional regions of ℝ^{|K|}.
On Markov chains and filtrations
1997
In this paper we rederive some well-known results for continuous time Markov processes that live on a finite state space. Martingale techniques are used throughout the paper. Special attention is paid to the construction of a continuous time Markov process when we start from a discrete time Markov chain. The Markov property here holds with respect to filtrations that need not be minimal.
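One standard construction of the kind described (a sketch of the general idea, not necessarily the paper's) runs a discrete-time chain (Y_k) at the arrival times of an independent rate-λ Poisson process (N_t) and sets X_t = Y_{N_t}; the resulting process is a continuous-time Markov chain with generator λ(P − I). The kernel and rate below are illustrative choices:

```python
import random

# Build a continuous-time Markov process from a discrete-time chain (Y_k) by
# sampling it at the arrival times of an independent Poisson process of rate lam:
# X_t = Y_{N_t}.  Self-transitions of P simply leave the state unchanged.
P = {0: [(0, 0.5), (1, 0.5)], 1: [(0, 0.3), (1, 0.7)]}
lam = 2.0

def sample_path(horizon, rng):
    """Return the (time, state) pairs of X on [0, horizon], starting from state 0."""
    t, y = 0.0, 0
    path = [(0.0, y)]
    while True:
        t += rng.expovariate(lam)      # exponential inter-arrival time
        if t > horizon:
            return path
        u, acc = rng.random(), 0.0     # one step of the discrete chain
        for z, p in P[y]:
            acc += p
            if u <= acc:
                y = z
                break
        path.append((t, y))

rng = random.Random(1)
path = sample_path(5.0, rng)
```

Because the Poisson clock is independent of (Y_k), the filtration generated by (X_t) together with the clock is strictly larger than the minimal one, which is the kind of non-minimal filtration the abstract alludes to.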