Introduction to Stochastic Processes: Markov Chains and Renewal Theory

Continuous-time Markov Chains

Definition: A stochastic process {X(t), t ≥ 0} with countable state space S is said to be a continuous-time Markov chain if, for all s, t ≥ 0 and all states i, j ∈ S, P(X(t+s) = j | X(s) = i, X(u) = x(u) for 0 ≤ u < s) = P(X(t+s) = j | X(s) = i). In words, conditional on the present state, the future of the process is independent of its past.
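The definition can be made concrete by simulating a chain through its jump-chain/holding-time description: in state i the chain waits an exponential time with rate q_i = -Q[i][i], then jumps to j ≠ i with probability q_ij / q_i. A minimal sketch in Python (the 3-state generator below is an arbitrary illustrative example, not taken from the text):

```python
import random

# Illustrative 3-state generator matrix Q (off-diagonal rates, rows sum to zero).
Q = [[-2.0,  1.0,  1.0],
     [ 0.5, -1.0,  0.5],
     [ 1.0,  1.0, -2.0]]

def simulate_ctmc(Q, start, t_end, rng):
    """Simulate {X(t)} via exponential holding times and the embedded jump chain."""
    path = [(0.0, start)]               # (time of entry, state)
    t, i = 0.0, start
    while True:
        rate = -Q[i][i]                 # total exit rate q_i from state i
        t += rng.expovariate(rate)      # Exp(q_i) holding time
        if t >= t_end:
            return path
        # Jump to j != i with probability q_ij / q_i.
        targets = [j for j in range(len(Q)) if j != i]
        weights = [Q[i][j] for j in targets]
        i = rng.choices(targets, weights=weights)[0]
        path.append((t, i))

rng = random.Random(42)
path = simulate_ctmc(Q, start=0, t_end=10.0, rng=rng)
```

Each entry of `path` records when the chain entered a state; consecutive entries always differ in state, since self-jumps are excluded from the jump chain.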

On Markov chains and filtrations

1997

In this paper we rederive some well-known results for continuous-time Markov processes that live on a finite state space. Martingale techniques are used throughout the paper. Special attention is paid to the construction of a continuous-time Markov process starting from a discrete-time Markov chain. The Markov property here holds with respect to filtrations that need not be minimal.
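One standard bridge between discrete and continuous time (a classical device, not necessarily the martingale construction used in the paper) is uniformization: given a generator Q and any λ ≥ max_i q_i, the matrix P = I + Q/λ is a discrete-time transition matrix and P(t) = Σ_k e^{-λt}(λt)^k/k! · P^k. A sketch for a two-state chain, checked against the closed form p_01(t) = a(1 − e^{−(a+b)t})/(a+b):

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def uniformized_pt(Q, t, terms=60):
    """P(t) = sum_k e^{-lam t} (lam t)^k / k! * P^k, with P = I + Q/lam."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n))
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    Pk = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # P^0
    out = [[0.0] * n for _ in range(n)]
    weight = math.exp(-lam * t)          # Poisson(k; lam*t) mass, k = 0
    for k in range(terms):
        for i in range(n):
            for j in range(n):
                out[i][j] += weight * Pk[i][j]
        Pk = mat_mul(Pk, P)
        weight *= lam * t / (k + 1)      # advance Poisson mass to k + 1
    return out

# Two-state generator with rates a = 1.0 (0 -> 1) and b = 2.0 (1 -> 0).
a, b = 1.0, 2.0
Pt = uniformized_pt([[-a, a], [b, -b]], t=0.5)
exact = a * (1 - math.exp(-(a + b) * 0.5)) / (a + b)
```

With λt = 1, sixty terms of the Poisson mixture reproduce the analytic transition probability to well below 1e-9.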

ANALYSIS OF THE REACHABILITY MATRIX AND ABSORPTION PROBABILITIES FOR CLOSE AND OPEN CLASSIFICATION GROUPS OF STATES IN THE MARKOV CHAIN

Nigerian Journal of Scientific Research (NJSR), Ahmadu Bello University Zaria, Nigeria, 2021

The physical or mathematical behaviour of any system may be represented by describing all the different states it may occupy and by indicating how it moves among these states. In this study, the computation of the elements of the reachability matrix is separated into categories depending on the classification of the initial and terminal states: both states recurrent and in the same closed communicating class; both states recurrent but in different closed communicating classes; the initial state recurrent and the terminal state transient; and both states transient. These cases are investigated in order to provide insight into performance measures and absorption probabilities for closed and open classification groups of states in a Markov chain. Our aim is to obtain the probabilities of moving from transient states to one or more of the closed communicating classes, and also to extract useful information in the context of Markov chains that have no transient states. Matrix operations and principles, together with existing equations and formulas for Markov chains, are used to combine all states in an irreducible recurrent set into a single absorbing state and to compute the probability of entering this state from a transient state. Performance measures such as the probability that one state is reached before another given a specified starting state, and the matrix of absorption probabilities, are obtained.
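The reduction described above, collapsing each closed recurrent class to an absorbing state, is the classical fundamental-matrix computation: with Q the transition matrix restricted to transient states and R the transient-to-absorbing block, the absorption probabilities are B = (I − Q)^{-1} R. A sketch on a symmetric gambler's-ruin chain (an illustrative example, not one of the paper's models):

```python
def solve(A, B):
    """Solve A X = B by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [A[i][:] + B[i][:] for i in range(n)]   # augmented matrix [A | B]
    w = len(M[0])
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [M[r][k] - f * M[c][k] for k in range(w)]
    return [[M[i][k] / M[i][i] for k in range(n, w)] for i in range(n)]

# Gambler's ruin on {0, 1, 2, 3} with p = 1/2; states 0 and 3 are absorbing.
# Transient block Q (states 1, 2) and transient-to-absorbing block R.
Q = [[0.0, 0.5],
     [0.5, 0.0]]
R = [[0.5, 0.0],    # from state 1: to absorbing state 0, to absorbing state 3
     [0.0, 0.5]]
I_minus_Q = [[(1.0 if i == j else 0.0) - Q[i][j] for j in range(2)]
             for i in range(2)]
B = solve(I_minus_Q, R)   # B[i][k]: P(absorbed in class k | start in transient i)
```

From state 1 the walk is absorbed at 0 with probability 2/3 and at 3 with probability 1/3, as the distance-proportional ruin formula predicts.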

Uniqueness criteria for continuous-time Markov chains with general transition structures

Advances in Applied Probability, 2005

We derive necessary and sufficient conditions for the existence of bounded or summable solutions to systems of linear equations associated with Markov chains. This substantially extends a famous result of Reuter, which provides a convenient means of checking various uniqueness criteria for birth-death processes. Our result allows chains with much more general transition structures to be accommodated. One application is to give a new proof of an important result of Chen [9] concerning upwardly skip-free processes. We then use our generalization of Reuter's lemma to prove new results for downwardly skip-free chains, such as the Markov branching process and several of its many generalizations. This permits us to establish uniqueness criteria for several models, including the general birth, death and catastrophe process, extended branching processes and asymptotic birth-death processes, the latter being neither upwardly skip-free nor downwardly skip-free.
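Uniqueness criteria of this kind are tied to explosion. The simplest illustrative case (far simpler than the paper's general structures) is the pure birth process, which is explosive if and only if Σ_n 1/λ_n < ∞; partial sums of 1/λ_n therefore give a quick numerical check:

```python
def partial_sums_inv_rates(rates, upto):
    """Partial sums of sum_n 1/lambda_n; a bounded sum signals explosion."""
    s, out = 0.0, []
    for n in range(1, upto + 1):
        s += 1.0 / rates(n)
        out.append(s)
    return out

linear = partial_sums_inv_rates(lambda n: float(n), 10_000)         # diverges
quadratic = partial_sums_inv_rates(lambda n: float(n * n), 10_000)  # converges
# The harmonic sums keep growing, so the linear-rate process is non-explosive;
# the quadratic-rate sums stay below pi^2/6, so that process is explosive.
```

For birth-death and skip-free chains the criteria involve weighted sums over the stationary measure rather than 1/λ_n alone, which is where Reuter's lemma and its generalization enter.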

Mixing times with applications to perturbed Markov chains

Linear Algebra and Its Applications, 2006

A measure of the "mixing time" or "time to stationarity" in a finite irreducible discrete-time Markov chain is considered. The statistic η_i = Σ_{j=1}^{m} π_j m_{ij}, where {π_j} is the stationary distribution and m_{ij} is the mean first passage time from state i to state j of the Markov chain, is shown to be independent of the state i that the chain starts in (so that η_i = η for all i), is minimal in the case of a periodic chain, yet can be arbitrarily large in a variety of situations. An application concerning the effect that perturbations of transition probabilities have on the stationary distributions of Markov chains leads to a new bound, involving η, for the 1-norm of the difference between the stationary probability vectors of the original and the perturbed chain. When η is large, the stationary distribution of the Markov chain is very sensitive to perturbations of the transition probabilities.
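The independence of η_i from the starting state (η is closely related to the Kemeny constant) is easy to check by hand in the two-state case: with switch probabilities a and b, π = (b, a)/(a+b), m_12 = 1/a, m_21 = 1/b, and m_jj = 1/π_j (mean return time), so both η_1 and η_2 reduce to 1 + 1/(a+b). A sketch with illustrative values:

```python
a, b = 0.3, 0.5   # illustrative switching probabilities, 1 -> 2 and 2 -> 1

# Stationary distribution of the two-state chain.
pi1, pi2 = b / (a + b), a / (a + b)

# Mean first passage times: m_jj = 1/pi_j (mean return), m_12 = 1/a, m_21 = 1/b.
m11, m22 = 1.0 / pi1, 1.0 / pi2
m12, m21 = 1.0 / a, 1.0 / b

# eta_i = sum_j pi_j * m_ij, evaluated from each starting state.
eta1 = pi1 * m11 + pi2 * m12
eta2 = pi1 * m21 + pi2 * m22
# Both equal 1 + 1/(a + b) = 2.25, independent of the starting state.
```

Note that π_1 m_11 = 1 always, so η_1 = 1 + π_2/a = 1 + 1/(a+b); the symmetric computation gives the same value for η_2.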
