Chains and specifications

Chains with Complete Connections and One-Dimensional Gibbs Measures

Electronic Journal of Probability, 2004

We discuss the relationship between one-dimensional Gibbs measures and discrete-time processes (chains). We consider finite-alphabet (finite-spin) systems, possibly with a grammar (exclusion rule). We establish conditions for a stochastic process to define a Gibbs measure and vice versa. Our conditions generalize well-known equivalence results between ergodic Markov chains and fields, as well as the known Gibbsian character of processes with exponential continuity rate. Our arguments are purely probabilistic; they are based on the study of regular systems of conditional probabilities (specifications). Furthermore, we discuss the equivalence of uniqueness criteria for chains and fields and we establish bounds for the continuity rates of the respective systems of finite-volume conditional probabilities. As an auxiliary result we prove a (re)construction theorem for specifications starting from single-site conditioning, which applies in a more general setting (general spin space, specifications not necessarily Gibbsian).
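The equivalence between ergodic Markov chains and one-dimensional Markov fields admits a concrete finite computation: the two-sided single-site conditional probabilities of a stationary finite-state Markov chain are determined by its transition matrix alone. A minimal sketch, with an invented two-state transition matrix (not taken from the paper):

```python
# Sketch: two-sided (field-style) conditional probabilities of a two-state
# Markov chain, recovered from its transition matrix alone.  The chain and
# its numbers are illustrative assumptions.

P = [[0.9, 0.1],
     [0.4, 0.6]]  # P[a][b] = Prob(next state = b | current state = a)

def two_step(a, c):
    """(P^2)[a][c] = sum over b of P[a][b] * P[b][c]."""
    return sum(P[a][b] * P[b][c] for b in range(2))

def field_conditional(a, b, c):
    """Prob(x_0 = b | x_{-1} = a, x_1 = c).  By the Markov property this
    reduces to P[a][b] * P[b][c] / (P^2)[a][c]."""
    return P[a][b] * P[b][c] / two_step(a, c)

# Sanity check: at each fixed pair (a, c), the conditionals sum to one.
for a in range(2):
    for c in range(2):
        total = sum(field_conditional(a, b, c) for b in range(2))
        assert abs(total - 1.0) < 1e-12
```

Note that the answer depends on both neighbours, which is exactly the Markov-field (single-site specification) point of view on the same process.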

A zero-one law for Markov chains

Stochastics

We prove an analog of the classical zero-one law for both homogeneous and nonhomogeneous Markov chains (MC). Its almost precise formulation is simple: given any event A from the tail σ-algebra of a MC (Z_n), for large n, with probability near one, the trajectories of the MC are in states i where P(A | Z_n = i) is either near 0 or near 1. A similar statement holds for the entrance σ-algebra, when n tends to −∞. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by Z₋ or Z in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion, we note an interesting dichotomy between two commonly used definitions of MCs.
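The statement can be illustrated on a toy chain. A minimal simulation sketch, assuming a fair gambler's-ruin walk on {0, ..., N} (an invented example, not from the paper), where the tail event A = "eventual absorption at N" satisfies P(A | Z_n = i) = i/N:

```python
import random

random.seed(1)

# Sketch of the zero-one law on a gambler's-ruin chain on {0, ..., N}.
# For the tail event A = "the walk is eventually absorbed at N", the fair
# walk has Prob(A | Z_n = i) = i / N.  The law says that for large n the
# chain sits, with high probability, in states i where this value is near
# 0 or near 1 -- here, exactly the absorbing states 0 and N.

N = 10

def h(i):
    """Prob(absorbed at N | currently at i) for the fair walk."""
    return i / N

def run(steps=500):
    i = N // 2
    for _ in range(steps):
        if i in (0, N):              # absorbed
            return i
        i += random.choice((-1, 1))
    return i

final_states = [run() for _ in range(1000)]
# After many steps, h(Z_n) is (almost always) near 0 or 1:
near_01 = sum(1 for i in final_states if h(i) < 0.05 or h(i) > 0.95)
```

With 500 steps the expected absorption time (25 from the midpoint) is long past, so essentially every trajectory ends in a state where h is exactly 0 or 1.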

Ergodic Properties of Markov Processes

Lecture Notes in Mathematics, 2006

2 Elements of probability theory

Recall that a probability space (Ω, F, P) consists of a set Ω endowed with a σ-algebra F and a probability measure P. We have

Definition 2.1 A σ-algebra F over a set Ω is a collection of subsets of Ω with the properties that ∅ ∈ F, that A ∈ F implies A^c ∈ F, and that if {A_n}_{n>0} is a countable collection of elements of F, then ⋃_{n>0} A_n ∈ F.

Note that if G is any collection of subsets of a set Ω, then there always exists a smallest σ-algebra containing G. (Show that this is indeed the case.) We denote it by σG and call it the σ-algebra generated by G.

Definition 2.2 A probability measure P on the measurable space (Ω, F) is a map P: F → [0, 1] with the properties
• P(∅) = 0 and P(Ω) = 1;
• if {A_n}_{n>0} is a countable collection of pairwise disjoint elements of F, then P(⋃_{n>0} A_n) = Σ_{n>0} P(A_n).

Throughout this course, we will always consider the case of discrete time. We therefore give the following definition of a stochastic process.

Definition 2.3 A stochastic process x with state space X is a collection {x_n}_{n≥0} of X-valued random variables on some probability space (Ω, F, P). Given n, we refer to x_n as the value of the process at time n. We will sometimes consider processes for which time can take negative values, i.e. {x_n}_{n∈Z}.

Note that we did not say anything about the state space X. For the moment, all we need is that the notion of an X-valued random variable makes sense. For this, we need X to be a measurable space, so that an X-valued random variable is a measurable map from Ω to X. We will however always assume that X is a complete separable metric space, so that for example Fubini's theorem holds. We will impose more structure on X further on. Typical examples are:
• a finite set, X = {1, …, n};
• X = R^n or X = Z^n;
• some manifold, for example X = S^n, the n-dimensional sphere, or X = T, the torus;
• a Hilbert space, X = L²([0, 1]) or X = ℓ².

We will always denote by B(X) the Borel σ-algebra on X, i.e. the smallest σ-algebra which contains every open set. We will call a function f between two topological spaces measurable if f⁻¹(A) is a Borel set for every Borel set A. If f: Ω → X, we call f a random variable, provided that f⁻¹(A) ∈ F for every Borel set A. One actually has:

Proposition 2.4 Let f: Ω → X and suppose that f⁻¹(A) ∈ F for every open set A. Then f⁻¹(A) ∈ F for every Borel set A.

Proof. Define G₀ = {f⁻¹(A) | A open} and G = {f⁻¹(A) | A Borel}. Since G is a σ-algebra and G₀ ⊂ G, one has σG₀ ⊂ G. Define now F₀ = {A ∈ B(X) | f⁻¹(A) ∈ σG₀}. It is straightforward to check that F₀ is a σ-algebra and that it contains all open sets. Since B(X) is the smallest σ-algebra containing all open sets, B(X) ⊂ F₀; as G₀ ⊂ F by assumption and F is a σ-algebra, σG₀ ⊂ F, so f⁻¹(A) ∈ F for every Borel set A.
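On a finite Ω, the axioms of Definition 2.1 can be checked mechanically. A toy sketch (not part of the notes), using the fact that for finite Ω closure under countable unions reduces to closure under pairwise unions:

```python
# Sketch: mechanically checking the sigma-algebra axioms of Definition 2.1
# on a finite set.  Sets are frozensets; the examples are invented.

def is_sigma_algebra(omega, F):
    """Check: empty set in F, closure under complement, and closure under
    unions (pairwise suffices here, since omega is finite)."""
    if frozenset() not in F:
        return False
    if any(omega - A not in F for A in F):          # complements
        return False
    if any(A | B not in F for A in F for B in F):   # unions
        return False
    return True

omega = frozenset({1, 2, 3, 4})

# The four-element sigma-algebra generated by {1, 2}:
F = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}

# Not a sigma-algebra: the complement of {1} is missing.
G = {frozenset(), frozenset({1}), omega}
```

Here `is_sigma_algebra(omega, F)` holds while `is_sigma_algebra(omega, G)` fails, matching the exercise of identifying σG, the smallest σ-algebra containing G.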

Chuaqui's definition of probability in some stochastic processes

Revista Colombiana De Matematicas, 2012

Chains, Random Walks and Brownian Motion are constructed in the framework of Chuaqui's definition of probability. Chuaqui (1980 and 1981) explains how a semantical definition of probability can be applied to random experiments that give rise to compound outcomes. In order to do this, he introduces what he calls "compound probability structures" (CPS). These CPS are based on causal trees of the form (T, R), where T is a nonempty set and R is a partial order on T which reflects the causal dependence relation between the simple outcomes which make up the compound outcome. In the applications we are interested in, the elements of T are time moments and R is the natural linear order ≤. A compound outcome is a function f with domain T for which f(t) is an outcome in a simple probability structure (SPS) (see Chuaqui 1977 and 1981). Starting with known probability measures on these SPS, he defines a probability measure on the set of compound outcomes (see Chuaqui 1980). In what follows we show how this definition works for some known stochastic processes.

Markov and almost Markov properties in one, two or more directions

2020

In this review-type paper, written on the occasion of the Oberwolfach workshop "One-sided vs. Two-sided stochastic processes" (February 22-29, 2020), we discuss and compare Markov properties and generalisations thereof in one or more directions, as well as weaker forms of conditional dependence, again either in one or more directions. In particular, we discuss in both contexts various extensions of Markov chains and Markov fields and their properties, such as g-measures, Variable Length Markov Chains, Variable Neighbourhood Markov Fields, Variable Neighbourhood (Parsimonious) Random Fields, and Generalized Gibbs Measures.

First results for a mathematical theory of possibilistic Markov processes

We provide basic results for the development of a theory of possibilistic Markov processes. We define and study possibilistic Markov processes and possibilistic Markov chains, and derive a possibilistic analogue of the Chapman-Kolmogorov equation. We also show how possibilistic Markov processes can be constructed using one-step transition possibilities.
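One common formalisation of such a possibilistic Chapman-Kolmogorov equation (not necessarily the exact one in the paper) replaces the sum-product of probability theory by sup-min composition, which over a finite alphabet is max-min. A sketch with invented one-step transition possibilities:

```python
# Sketch: a possibilistic Chapman-Kolmogorov equation via max-min
# composition (the finite-alphabet case of sup-min).  The one-step
# transition possibilities below are invented for illustration.

STATES = range(3)

# pi1[i][j] = possibility of moving from state i to state j in one step;
# each row attains the value 1, i.e. each row is a normalized
# possibility distribution.
pi1 = [[1.0, 0.5, 0.2],
       [0.3, 1.0, 0.7],
       [0.0, 0.4, 1.0]]

def compose(p, q):
    """Max-min composition: (p o q)[i][j] = max over k of min(p[i][k], q[k][j])."""
    return [[max(min(p[i][k], q[k][j]) for k in STATES) for j in STATES]
            for i in STATES]

# Chapman-Kolmogorov analogue: pi_{m+n} = pi_m o pi_n.  In particular the
# composition is associative, so multi-step possibilities are well defined:
pi2 = compose(pi1, pi1)
assert compose(pi2, pi1) == compose(pi1, pi2)
```

Because max and min only select among existing entries, the identities above hold exactly even in floating point.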

Domain Theory in Stochastic Processes

We establish domain-theoretic models of finite-state discrete stochastic processes, Markov processes and vector recurrent iterated function systems. In each case, we show that the distribution of the stochastic process is canonically obtained as the least upper bound of an increasing chain of simple valuations in a probabilistic power domain associated to the process. This leads to various formulas and algorithms to compute the expected values of functions which are continuous almost everywhere with respect to the distribution of the stochastic process. We prove the existence and uniqueness of the invariant distribution of a vector recurrent iterated function system which is used in fractal image compression. We also present a finite algorithm to decode the image.
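The idea of approximating a distribution by an increasing chain of simple valuations can be caricatured in a few lines: for an iterated function system, pushing a point mass through all compositions of a given depth yields finitely supported measures that converge weakly to the invariant distribution. A sketch using the middle-thirds Cantor IFS (an illustrative choice, not the paper's algorithm):

```python
# Sketch: approximating an expected value under the invariant distribution
# of an IFS by a sequence of simple (finitely supported) measures -- a
# finite-dimensional stand-in for an increasing chain of simple valuations.
# The IFS (the two maps of the middle-thirds Cantor set, weights 1/2)
# and the test function are illustrative assumptions.

maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]
weights = [0.5, 0.5]

def approximate_expectation(f, depth):
    """Push a point mass at 0 through every composition of length `depth`;
    the resulting weighted point masses form a simple measure converging
    weakly to the invariant (Cantor) distribution as depth grows."""
    points = [(0.0, 1.0)]                       # pairs (point, weight)
    for _ in range(depth):
        points = [(m(x), w * p)
                  for (x, w) in points
                  for m, p in zip(maps, weights)]
    return sum(w * f(x) for x, w in points)

# The mean of the Cantor measure is 1/2 (by symmetry); the successive
# approximations increase towards it.
for d in (2, 5, 10):
    print(d, approximate_expectation(lambda x: x, d))
```

For the identity function the depth-d approximation works out to (1/2)(1 - 3^(-d)), visibly an increasing sequence with limit 1/2, mirroring the least-upper-bound construction in the abstract.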