Markov and almost Markov properties in one, two or more directions

Conditional Markov Chains Revisited Part I: Construction and properties

In this paper we continue the study of conditional Markov chains (CMCs) with finite state spaces that we initiated in Bielecki, Jakubowski and Niewęgłowski (2014a), in an effort to enrich the theory of CMCs originated in Bielecki and Rutkowski (2004). We provide an alternative definition of a CMC and an alternative construction of a CMC via a change of probability measure. It turns out that our construction produces CMCs that are also doubly stochastic Markov chains (DSMCs), which allows for the study of several properties of CMCs using tools available for DSMCs.
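The key object above, a doubly stochastic Markov chain, can be illustrated with a minimal simulation sketch: a chain whose one-step transition matrix is itself driven by an exogenous random factor, so that the chain is Markov only conditionally on the factor path. All parameters here (two states, a uniform factor) are illustrative choices, not the construction from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_doubly_stochastic_chain(n_steps, n_states=2):
    """Toy doubly stochastic Markov chain: the transition matrix at each
    step depends on an exogenous i.i.d. factor Y_t, so transitions are
    Markov only conditionally on the factor path (illustrative only)."""
    y = rng.uniform(0.1, 0.9, size=n_steps)      # exogenous factor path
    x = np.empty(n_steps + 1, dtype=int)
    x[0] = 0
    for t in range(n_steps):
        p_stay = y[t]                            # factor-modulated transition prob.
        P = np.array([[p_stay, 1 - p_stay],
                      [1 - p_stay, p_stay]])     # row-stochastic for every y[t]
        x[t + 1] = rng.choice(n_states, p=P[x[t]])
    return y, x

y, x = simulate_doubly_stochastic_chain(1000)
```

Averaging over the factor path destroys the (unconditional) Markov property in general, which is what makes tools for DSMCs useful for studying CMCs.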

Chains with Complete Connections and One-Dimensional Gibbs Measures

Electronic Journal of Probability, 2004

We discuss the relationship between one-dimensional Gibbs measures and discrete-time processes (chains). We consider finite-alphabet (finite-spin) systems, possibly with a grammar (exclusion rule). We establish conditions for a stochastic process to define a Gibbs measure and vice versa. Our conditions generalize well-known equivalence results between ergodic Markov chains and fields, as well as the known Gibbsian character of processes with exponential continuity rate. Our arguments are purely probabilistic; they are based on the study of regular systems of conditional probabilities (specifications). Furthermore, we discuss the equivalence of uniqueness criteria for chains and fields and we establish bounds for the continuity rates of the respective systems of finite-volume conditional probabilities. As an auxiliary result we prove a (re)construction theorem for specifications starting from single-site conditioning, which applies in a more general setting (general spin space, specifications not necessarily Gibbsian).

Abstract Markov Property and Local Operators

Journal of Functional Analysis, 1996

The aim of this paper is to discuss in detail whether linear maps preserve certain Markov properties of generalized random fields. Since the work of Lévy [24] and McKean [26] on Brownian motion with multidimensional time, random fields and generalized random fields with Markov properties have been intensively studied, e.g. by Mandrekar, Molčan, Nelson, Rozanov, Urbanik, Wong and many others, see [27–30, 34, 35, 41, 42]. It turned out that the appropriate Markov property to be treated depends delicately upon the chosen index set. For the theory of generalized fields indexed by smooth functions with compact support we refer to Rozanov's book [33]. Nelson has pointed out the importance of generalized random fields indexed by Schwartz distributions for the construction of quantum fields, see [30]. Generalized random fields indexed by measures with bounded energy have first been studied by Albeverio and Høegh-Krohn [1] and by Dynkin [9]. As for Gaussian random fields the situation is fairly well understood, see for instance the work of Iwata and Schäfer [17a], Kolsrud [19], Kotani [20] and Röckner [31, 32]. For non-Gaussian fields the situation is much more complicated, see e.g. the review of Albeverio and Zegarlinski [4] and references therein. One method to construct new Markovian random fields, not necessarily Gaussian, is to "pull" them back via inverses of local operators. This is our task here. The different index spaces require different notions of Markov property. We mention here only the so-called "sharp" Markov property of Nelson.

A note on Gibbs and Markov random fields with constraints and their moments

Mathematics and Mechanics of Complex Systems, 2016

This paper focuses on the relation between Gibbs and Markov random fields, one instance of the close relation between abstract and applied mathematics so often stressed by Lucio Russo in his scientific work. We start by proving a more explicit version, based on spin products, of the Hammersley-Clifford theorem, a classic result which identifies Gibbs and Markov fields under finite energy. Then we argue that the celebrated counterexample of Moussouris, intended to show that there is no complete coincidence between Markov and Gibbs random fields in the presence of hard-core constraints, is not really such. In fact, the notion of a constrained Gibbs random field used in the example and in the subsequent literature makes the unnatural assumption that the constraints are infinite energy Gibbs interactions on the same graph. Here we consider the more natural extended version of the equivalence problem, in which constraints are more generally based on a possibly larger graph, and solve it. The bearing of the more natural approach is shown by considering identifiability of discrete random fields from support, conditional independencies and corresponding moments. In fact, by means of our previous results, we show identifiability for a large class of problems, and also examples with no identifiability. Various open questions surface along the way.

Personal acknowledgment. One of us (Gandolfi) learned about the theory of Gibbs and Markov random fields from Lucio Russo in a course based on [Ruelle 1978]. He is indebted to Lucio for his inspirational lectures and for many other things, such as an interest in percolation theory and statistical physics, a deep conviction of the close relation between abstract and applied mathematics, and an involvement in questions about the history of science.
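The Gibbs-Markov equivalence at the heart of the Hammersley-Clifford theorem can be checked directly on a tiny example: for a Gibbs measure with pairwise potentials on a three-spin path graph 1 - 2 - 3, the end spins are conditionally independent given the middle one. The coupling value below is an arbitrary illustrative choice.

```python
import itertools
import numpy as np

J = 0.7  # illustrative pairwise coupling; any finite value works

def weight(s):
    """Gibbs weight for a 3-spin path graph 1 - 2 - 3 with pair potentials only."""
    s1, s2, s3 = s
    return np.exp(J * (s1 * s2 + s2 * s3))

states = list(itertools.product([-1, 1], repeat=3))
Z = sum(weight(s) for s in states)
p = {s: weight(s) / Z for s in states}

def cond_indep_gap():
    """Max |P(x1,x3|x2) - P(x1|x2) P(x3|x2)| over all configurations:
    zero iff X1 and X3 are conditionally independent given X2."""
    gap = 0.0
    for s2 in (-1, 1):
        p2 = sum(p[s] for s in states if s[1] == s2)
        for s1 in (-1, 1):
            for s3 in (-1, 1):
                joint = p[(s1, s2, s3)] / p2
                m1 = sum(p[s] for s in states if s[0] == s1 and s[1] == s2) / p2
                m3 = sum(p[s] for s in states if s[2] == s3 and s[1] == s2) / p2
                gap = max(gap, abs(joint - m1 * m3))
    return gap
```

Since the joint density factorizes over the edges {1,2} and {2,3}, the gap is zero up to floating-point error; hard-core constraints, as in Moussouris's example, are exactly what can break this clean factorization argument.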

A class of measure-valued Markov chains and Bayesian nonparametrics

Bernoulli, 2012

Measure-valued Markov chains have attracted interest in Bayesian nonparametrics since the seminal paper in Math. Proc. Cambridge Philos. Soc. 105 (1989) 579-585, where a Markov chain having the law of the Dirichlet process as its unique invariant measure was introduced. In the present paper, we propose and investigate a new class of measure-valued Markov chains defined via exchangeable sequences of random variables. Asymptotic properties for this new class are derived, and applications related to Bayesian nonparametric mixture modeling, and to a generalization of the Markov chain proposed in Math. Proc. Cambridge Philos. Soc. 105 (1989) 579-585, are discussed. These results and their applications highlight once again the interplay between Bayesian nonparametrics and the theory of measure-valued Markov chains.

Uniqueness of Markov random fields with higher-order dependencies

arXiv (Cornell University), 2023

Markov random fields on a countable set V are studied. They are canonically determined by a specification γ, whose dependence structure is defined by a pre-modification (h_e)_{e∈E}, a consistent family of functions h_e : S^e → [0, +∞), where S is a standard Borel space and E is an infinite collection of finite subsets e ⊂ V. Different e may contain distinct numbers of elements, which means, in particular, that the dependence graph H = (V, E) is a hypergraph. Given e ∈ E, let δ(e) be the logarithmic oscillation of h_e. The result of this work is the assertion that the set of all fields G(γ) is a singleton whenever δ(e) satisfies a condition, a particular version of which is δ(e) ≤ κ g(n_L(e)), holding for all e and some H-specific κ ∈ (0, 1). Here g is an increasing function, e.g. g(n) = a + log n, and n_L(e) is the degree of e in the line graph L(H), which may grow ad infinitum. This uniqueness condition is essentially less restrictive than those based on the classical Dobrushin uniqueness theorem, according to which each of |e|, n_L(e) and δ(e) should be globally bounded. We also prove that its fulfilment implies that the unique element of G(γ) is globally Markov.
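The quantities in the condition δ(e) ≤ κ g(n_L(e)) are all elementary to compute on a finite example. The sketch below does so for a toy hypergraph; the values of κ, a, and the pre-modification h_e are illustrative choices, not taken from the paper.

```python
import math
import numpy as np

# Toy check of the oscillation-type condition delta(e) <= kappa * g(n_L(e)).
# kappa, a, and the potentials below are illustrative, not from the paper.
kappa, a = 0.5, 1.0
g = lambda n: a + math.log(n)   # the increasing function g from the abstract

# hyperedges of a toy hypergraph on V = {0, 1, 2, 3}
edges = [frozenset({0, 1}), frozenset({1, 2}), frozenset({1, 2, 3})]

def line_graph_degree(e, edges):
    """Degree n_L(e) of hyperedge e in the line graph L(H): the number of
    other hyperedges sharing at least one vertex with e."""
    return sum(1 for f in edges if f != e and e & f)

def log_oscillation(values):
    """delta(e) = sup log h_e - inf log h_e over configurations on e."""
    logs = np.log(values)
    return float(logs.max() - logs.min())

# toy positive pre-modification values h_e over configurations on each edge
h = {edges[0]: np.array([1.0, 1.2, 0.9, 1.1]),
     edges[1]: np.array([1.0, 1.3, 1.1, 0.8]),
     edges[2]: np.array([1.0, 1.1, 0.9, 1.2, 1.05, 0.95, 1.15, 1.0])}

satisfied = all(log_oscillation(h[e]) <= kappa * g(line_graph_degree(e, edges))
                for e in edges)
```

Because g grows with n_L(e), larger oscillations are tolerated on more connected hyperedges, which is what makes the condition weaker than a Dobrushin-style global bound.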

A zero-one law for Markov chains

Stochastics

We prove an analog of the classical Zero-One Law for both homogeneous and nonhomogeneous Markov chains (MC). Its almost precise formulation is simple: given any event A from the tail σ-algebra of the MC (Z_n), for large n, with probability near one, the trajectories of the MC are in states i where P(A | Z_n = i) is either near 0 or near 1. A similar statement holds for the entrance σ-algebra, when n tends to −∞. To formulate this second result, we give detailed results on the existence of nonhomogeneous Markov chains indexed by Z_− or Z in both the finite and countable cases. This extends a well-known result due to Kolmogorov. Further, in our discussion, we note an interesting dichotomy between two commonly used definitions of MCs.
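The statement can be made concrete on a gambler's-ruin chain, where the tail event and the conditional probabilities P(A | Z_n = i) are explicit. Below, A = "absorbed at 4" for a symmetric walk on {0, ..., 4}, so P(A | Z_n = i) = i/4 is near 0 or 1 exactly at the absorbing states, and for large n almost every trajectory sits in one of them. The simulation parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric random walk on {0,...,4} absorbed at 0 and 4.  The tail event
# A = "absorbed at 4" has P(A | Z_n = i) = i/4 (gambler's ruin), which is
# near 0 or 1 only at the absorbing states; the zero-one law predicts that
# for large n the chain sits, with high probability, in such states.
def walk(start=2, n_steps=200):
    x = start
    for _ in range(n_steps):
        if x in (0, 4):
            return x
        x += rng.choice([-1, 1])
    return x

n_paths = 2000
finals = np.array([walk() for _ in range(n_paths)])
h = finals / 4.0                                  # P(A | Z_n = i) at the final state
frac_near_01 = np.mean((h < 0.05) | (h > 0.95))   # fraction of paths with h near 0 or 1
```

Absorption from state 2 within 200 steps is all but certain here, so frac_near_01 is essentially 1, matching the law's prediction.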

Two types of Markov property

Probability and Mathematical Statistics

The paper gives some insight into the relations between two types of Markov processes – in the strict sense and in the wide sense – as well as into two aspects of periodicity. It concerns Markov processes with finite state space, the elements of which are complex numbers. First it is shown that under some assumptions this space can be transformed in such a way that the resulting Markov process is also Markov in the wide sense. Next, sufficient conditions are given under which a periodic homogeneous Markov chain is a periodically correlated process.