Chains with Complete Connections and One-Dimensional Gibbs Measures
Related papers
A zero-one law for Markov chains
Stochastics
We prove an analog of the classical Zero-One Law for both homogeneous and nonhomogeneous Markov chains (MC). An almost precise formulation is simple: given any event A from the tail σ-algebra of the MC (Z_n), for large n, with probability near one, the trajectories of the MC are in states i where P(A | Z_n = i) is either near 0 or near 1. A similar statement holds for the entrance σ-algebra, as n tends to −∞. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by Z_− or Z, in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion we note an interesting dichotomy between two commonly used definitions of MCs.
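The statement sketched in this abstract can be rendered informally as follows; the notation (Z_n) and the event A are as above, while the exact regularity hypotheses are those of the paper, not reproduced here:

```latex
% Informal rendering of the zero-one law sketched in the abstract.
% A is an event in the tail sigma-algebra of the chain (Z_n); the
% precise hypotheses on the chain are those stated in the paper.
\forall \varepsilon > 0:\qquad
\mathbb{P}\Bigl( Z_n \in \bigl\{\, i \;:\;
  \mathbb{P}(A \mid Z_n = i) \le \varepsilon
  \ \text{ or }\
  \mathbb{P}(A \mid Z_n = i) \ge 1 - \varepsilon \,\bigr\} \Bigr)
  \;\xrightarrow[\,n \to \infty\,]{}\; 1 .
```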
2004
We review four types of results combining or relating the theories of discrete-time stochastic processes and of one-dimensional specifications. First we list some general properties of stochastic processes which are extremal among those consistent with a given set of transition probabilities. They include: triviality on the tail field, short-range correlations, realization via infinite-volume limits, and ergodicity. Second we detail two new uniqueness criteria for stochastic processes and discuss corresponding mixing bounds. These criteria are analogous to those obtained by Dobrushin and Georgii for Gibbs measures. Third, we discuss conditions for a stochastic process to define a Gibbs measure and vice versa, which generalize well-known equivalence results between ergodic Markov chains and fields. Finally we state a (re)construction theorem for specifications starting from single-site conditioning, which applies in a rather general setting.
Regular g-measures are not always Gibbsian
Electronic Communications in Probability, 2011
Regular g-measures are discrete-time processes determined by conditional expectations with respect to the past. One-dimensional Gibbs measures, on the other hand, are fields determined by simultaneous conditioning on past and future. For the Markovian and exponentially continuous cases the two theories are known to be equivalent. Their equivalence in more general cases was an open problem. We present a simple example settling this issue in a negative way: there exist g-measures that are continuous and non-null but are not Gibbsian. Our example belongs, in fact, to a well-studied family of processes with rather nice attributes: it is a chain with variable-length memory, characterized by the absence of phase coexistence and the existence of a visible renewal scheme.
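The one-sided versus two-sided conditioning at issue here can be contrasted informally as follows (a sketch only; x denotes a configuration and the continuity and non-nullness conditions are those of the paper):

```latex
% g-measure: the law of x_0 given the PAST is prescribed by a
% continuous function g on one-sided sequences:
\mu\bigl( x_0 \,\big|\, x_{-1}, x_{-2}, \dots \bigr)
  = g\bigl( x_0, x_{-1}, x_{-2}, \dots \bigr).

% One-dimensional Gibbs measure: x_0 is conditioned simultaneously
% on past AND future, via a specification kernel gamma:
\mu\bigl( x_0 \,\big|\, (x_n)_{n \neq 0} \bigr)
  = \gamma\bigl( x_0 \,\big|\, (x_n)_{n \neq 0} \bigr).
```

The paper's counterexample shows that consistency with a continuous, non-null g of the first kind does not force the existence of a continuous two-sided specification of the second kind.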
Projection of Markov measures may be Gibbsian
2003
We study the induced measure obtained from a 1-step Markov measure, supported by a topological Markov chain, after the mapping of the original alphabet onto another one. We give sufficient conditions for the induced measure to be a Gibbs measure (in the sense of Bowen) when the factor system is again a topological Markov chain. This amounts to constructing, when it does exist, the induced potential and proving its Hölder continuity. This is achieved through a matrix method. We provide examples and counterexamples to illustrate our results.
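Gibbsianness "in the sense of Bowen", as invoked in this abstract, can be sketched as follows; here ν denotes the induced measure, σ the shift, and ψ the induced potential whose existence and Hölder continuity the paper's matrix method addresses:

```latex
% Bowen's Gibbs property for the induced measure nu on the factor
% topological Markov chain: there exist a Holder-continuous potential
% psi and a constant P (the pressure of psi) such that, uniformly in
% n and in points y, the measure of cylinders satisfies
\nu\,[\, y_0, y_1, \dots, y_{n-1} \,]
  \;\asymp\;
  \exp\!\Bigl( -nP + \sum_{k=0}^{n-1} \psi(\sigma^k y) \Bigr),
% i.e. the two sides agree up to multiplicative constants
% independent of n and y.
```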
Combinatorial criteria for uniqueness of Gibbs measures
Random Structures and Algorithms, 2005
Among their many uses, growth processes (probabilistic amplification) were used for constructing reliable networks from unreliable components and for deriving complexity bounds on various classes of functions. Hence, determining the initial conditions for such processes is an important and challenging problem. In this paper we characterize growth processes by their initial conditions and derive conditions under which results such as Valiant's [Val84] hold. First, we completely characterize growth processes that use linear connectives. Second, by extending Savický's [Sav90] analysis via "Restriction Lemmas", we characterize growth processes that use monotone connectives, and show that our technique is applicable to growth processes that use other connectives as well. Additionally, we obtain explicit bounds on the convergence rates of several growth processes, including the growth process studied by Savický (1990).
Conditional Markov Chains Revisited Part I: Construction and properties
In this paper we continue the study of conditional Markov chains (CMCs) with finite state spaces that we initiated in Bielecki, Jakubowski and Niewęgłowski (2014a), in an effort to enrich the theory of CMCs that originated in Bielecki and Rutkowski (2004). We provide an alternative definition of a CMC and an alternative construction of a CMC via a change of probability measure. It turns out that our construction produces CMCs that are also doubly stochastic Markov chains (DSMCs), which allows for the study of several properties of CMCs using tools available for DSMCs.
Markov and almost Markov properties in one, two or more directions
2020
In this review-type paper, written on the occasion of the Oberwolfach workshop "One-sided vs. Two-sided stochastic processes" (February 22–29, 2020), we discuss and compare Markov properties, and generalisations thereof, in more than one direction, as well as weaker forms of conditional dependence, again either in one or more directions. In particular, we discuss in both contexts various extensions of Markov chains and Markov fields and their properties, such as g-measures, Variable Length Markov Chains, Variable Neighbourhood Markov Fields, Variable Neighbourhood (Parsimonious) Random Fields, and Generalized Gibbs Measures.