A note regarding "Loopy Belief Propagation" convergence in probabilistic and possibilistic networks
Related papers
An Approximate Propagation Algorithm for Product-Based Possibilistic Networks
2008
Product-based possibilistic networks are important tools for efficiently and compactly representing possibility distributions. The inference process is a crucial task: it propagates information through the network when new pieces of information, called evidence, are observed. However, this inference process is known to be a hard task, especially for multiply connected networks. In this paper, we propose an approximate algorithm for product-based possibilistic networks. More precisely, we propose an adaptation of the probabilistic "Loopy Belief Propagation" (LBP) approach to possibilistic networks.
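For concreteness, the core change LBP undergoes in the product-based possibilistic setting can be illustrated by a single message update: probabilistic sum-product messages become max-product messages, since possibilistic marginalization is a maximum rather than a sum. A minimal sketch in Python follows; the names (cond_poss, pi_parent, message_to_child) are illustrative conventions, not taken from the paper.

    # Minimal sketch of one product-based possibilistic message update.
    # Probabilistic sum-product messages become max-product messages:
    # marginalization is a max, combination is a product.
    import numpy as np

    def message_to_child(cond_poss: np.ndarray, pi_parent: np.ndarray) -> np.ndarray:
        """Possibilistic message a parent A sends to a child B.

        cond_poss[b, a] = Pi(B = b | A = a), a conditional possibility table.
        pi_parent[a]    = current possibility estimate for parent state a.
        Returns pi_child[b] = max_a Pi(b | a) * pi_parent[a].
        """
        return (cond_poss * pi_parent[None, :]).max(axis=1)

    # Toy example: binary parent and child.
    cond = np.array([[1.0, 0.4],    # Pi(B=0 | A=0), Pi(B=0 | A=1)
                     [0.7, 1.0]])   # Pi(B=1 | A=0), Pi(B=1 | A=1)
    pi_a = np.array([1.0, 0.6])     # normalized: the maximum equals 1
    print(message_to_child(cond, pi_a))  # -> [1.0, 0.7]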
Loopy Belief Propagation in Bayesian Networks: Origin and possibilistic perspectives
In this paper we present a synthesis of the work performed on two inference algorithms: Pearl's belief propagation (BP) algorithm, applied to Bayesian networks without loops (i.e. polytrees), and the Loopy Belief Propagation (LBP) algorithm (inspired by BP), which is applied to networks containing undirected cycles. It is known that the BP algorithm, applied to Bayesian networks with loops, gives incorrect numerical results, i.e. incorrect posterior probabilities. Murphy et al. found that the LBP algorithm converges on several networks and that, when this occurs, LBP gives a good approximation of the exact posterior probabilities. However, this algorithm exhibits oscillatory behaviour when applied to the QMR (Quick Medical Reference) network. This phenomenon prevents the LBP algorithm from converging towards a good approximation of the posterior probabilities. We believe that translating the inference problem from the probabilistic framework to the possibilistic framework will improve the performance of the LBP algorithm. We hope that an adaptation of this algorithm to possibilistic causal networks will improve the convergence of LBP.
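For reference, the polytree update rules of Pearl's BP, which LBP applies unchanged on graphs with cycles, are, for a node $X$ with parents $U_1,\dots,U_n$ and children $Y_1,\dots,Y_m$ (this is the standard formulation from Pearl, 1988, stated here for context):

    \begin{align*}
    \mathrm{BEL}(x) &= \alpha\,\lambda(x)\,\pi(x),\\
    \lambda(x) &= \prod_{j=1}^{m} \lambda_{Y_j}(x),\\
    \pi(x) &= \sum_{u_1,\dots,u_n} P(x \mid u_1,\dots,u_n)\,\prod_{i=1}^{n} \pi_X(u_i),
    \end{align*}

where $\alpha$ is a normalizing constant and $\lambda_{Y_j}$, $\pi_X(u_i)$ are the messages received from child $Y_j$ and parent $U_i$ respectively.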
Approximate inference in dynamic possibilistic networks
2007 IEEE Congress on Evolutionary Computation, 2007
This paper describes an approximate algorithm for inference in dynamic possibilistic networks (DPNs). DPNs provide a succinct and expressive graphical language for representing sequential data and factoring joint possibility distributions, using only the concepts of random variables and conditional possibilities. The proposed algorithm for inference in such networks is approximate and is based mainly on the standard Boyen-Koller (BK) algorithm, well defined for dynamic probabilistic networks. The possibilistic framework proposed in this paper is notable because it gives a counterpart of the traditional probabilistic framework generally used to represent uncertainty in sequential data. The possibilistic BK algorithm is based on the junction tree technique, where inference is done via interface clusters that reduce the dynamic network to a structure amenable to a very simple form of inference. We present this algorithm in terms of two possibilistic conditionings: the product-based and the min-based one.
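The two conditionings the abstract refers to are the standard possibilistic ones, in the Dubois-Prade form. A minimal sketch, assuming finite distributions encoded as NumPy arrays and a boolean mask selecting the models of the evidence (both conventions are ours, not the paper's):

    import numpy as np

    def product_conditioning(pi: np.ndarray, evidence_mask: np.ndarray) -> np.ndarray:
        """pi(w | phi) = pi(w) / Pi(phi) on models of phi, 0 elsewhere,
        where Pi(phi) = max of pi over the models of phi."""
        Pi_phi = pi[evidence_mask].max()
        out = np.zeros_like(pi)
        out[evidence_mask] = pi[evidence_mask] / Pi_phi
        return out

    def min_conditioning(pi: np.ndarray, evidence_mask: np.ndarray) -> np.ndarray:
        """pi(w | phi) = 1 where pi(w) reaches Pi(phi); pi(w) where it is
        below Pi(phi); 0 outside the models of phi."""
        Pi_phi = pi[evidence_mask].max()
        out = np.zeros_like(pi)
        out[evidence_mask] = np.where(pi[evidence_mask] == Pi_phi, 1.0, pi[evidence_mask])
        return out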
Symbolic Probabilistic Inference in Belief Networks
National Conference on Artificial Intelligence, 1990
The Symbolic Probabilistic Inference (SPI) algorithm (D'Ambrosio, 1989) provides an efficient framework for resolving general queries on a belief network. It applies the concept of dependency-directed backward search to probabilistic inference, and is incremental with respect to both queries and observations. Unlike most belief network algorithms, SPI is goal directed, performing only those calculations that are required to respond to queries.
Hybrid possibilistic networks
Proceedings of the National …, 2005
Possibilistic networks are important tools for dealing with uncertain pieces of information. For multiply-connected networks, it is well known that the inference process is a hard problem. This paper studies a new representation of possibilistic networks, called hybrid possibilistic networks. The uncertainty is no longer represented by local conditional possibility distributions, but by their compact representations, which are possibilistic knowledge bases. We show that the inference algorithm in hybrid networks is strictly more efficient than the standard propagation algorithm.
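The compact representation mentioned here is the standard semantics of possibilistic logic, stated below for context rather than taken from the paper: a possibilistic knowledge base $\Sigma = \{(\varphi_i, \alpha_i)\}$ induces the least specific possibility distribution compatible with its constraints,

    \[
    \pi_{\Sigma}(\omega) =
    \begin{cases}
    1 & \text{if } \omega \models \varphi_i \text{ for every } (\varphi_i,\alpha_i) \in \Sigma,\\
    1 - \max\{\alpha_i : (\varphi_i,\alpha_i) \in \Sigma,\ \omega \not\models \varphi_i\} & \text{otherwise,}
    \end{cases}
    \]

so a world is penalized by the weight of the most certain formula it violates.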
Belief propagation in Qualitative Probabilistic Networks
1993
Qualitative probabilistic networks (QPNs) are an abstraction of influence diagrams and Bayesian belief networks, replacing numerical relations by qualitative influences and synergies. To reason in a QPN is to find the effect of a decision or new evidence on a variable of interest in terms of the sign of the change in belief (increase or decrease). We review our work on qualitative belief propagation, a computationally efficient reasoning scheme based on local sign propagation in QPNs. Qualitative belief propagation, unlike the existing graph-reduction algorithm, preserves the network structure and determines the effect of evidence on all nodes in the network. We show how this supports meta-level reasoning about the model and automatic generation of intuitive explanations of probabilistic reasoning.
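The local sign propagation described here rests on a small sign algebra over {+, -, 0, ?}: signs compose along a path by a sign product and combine across parallel paths by a sign sum. A minimal sketch of the standard QPN sign algebra (Wellman, 1990), not code from the paper:

    # Sign algebra used by qualitative belief propagation.
    def sign_product(a: str, b: str) -> str:
        """Compose signs along a chain of influences."""
        if a == "0" or b == "0":
            return "0"          # a zero influence blocks the path
        if a == "?" or b == "?":
            return "?"          # ambiguity propagates
        return "+" if a == b else "-"

    def sign_sum(a: str, b: str) -> str:
        """Combine the signs of parallel paths into one net influence."""
        if a == "0":
            return b            # zero is the identity
        if b == "0":
            return a
        if a == "?" or b == "?" or a != b:
            return "?"          # conflicting paths yield an ambiguous sign
        return a

    assert sign_product("+", "-") == "-"
    assert sign_sum("+", "-") == "?"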
Simulation Approaches to General Probabilistic Inference on Belief Networks
Machine Intelligence and Pattern Recognition, 1990
Although a number of algorithms have been developed to solve probabilistic inference problems on belief networks, they can be divided into two main groups: exact techniques, which exploit the conditional independence revealed when the graph structure is relatively sparse, and probabilistic sampling techniques, which exploit the "conductance" of an embedded Markov chain when the conditional probabilities have non-extreme values. In this paper, we investigate a family of Monte Carlo sampling techniques similar to Logic Sampling [Henrion, 1988] which appear to perform well even in some multiply-connected networks with extreme conditional probabilities, and thus would be generally applicable. We consider several enhancements which reduce the posterior variance using this approach and propose a framework and criteria for choosing when to use those enhancements.
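For context, Logic Sampling [Henrion, 1988], the baseline these techniques resemble, forward-samples full instantiations in topological order and rejects those inconsistent with the evidence. A minimal sketch, assuming boolean variables and CPTs supplied as callables (these data-structure conventions are ours, not the paper's):

    import random

    def logic_sampling(nodes, parents, cpt, evidence, query, n_samples=10000):
        """Forward sampling with rejection.

        nodes:    variables listed in topological order
        parents:  dict node -> tuple of parent nodes (empty for roots)
        cpt:      dict node -> function(parent_values) -> P(node = True)
        evidence: dict node -> observed boolean value
        query:    node whose posterior P(query = True | evidence) we estimate
        """
        accepted = hits = 0
        for _ in range(n_samples):
            sample = {}
            for x in nodes:
                pvals = tuple(sample[p] for p in parents[x])
                sample[x] = random.random() < cpt[x](pvals)
            # Reject samples that disagree with the evidence.
            if all(sample[e] == v for e, v in evidence.items()):
                accepted += 1
                hits += sample[query]
        return hits / accepted if accepted else float("nan")

With extreme conditional probabilities almost every sample is rejected, which is exactly the regime the enhancements above are meant to address.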
Updating probabilities in multiply connected belief networks
… of Fourth Workshop on Uncertainty in …, 1988
This paper focuses on probability updates in multiply-connected belief networks. Pearl has designed the method of conditioning, which enables us to apply his algorithm for belief updates in singly-connected networks to multiply-connected belief networks by selecting a loop-cutset for the network and instantiating these loop-cutset nodes. We discuss conditions that need to be satisfied by the selected nodes. We present a heuristic algorithm for finding a loop-cutset that satisfies these conditions.
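Schematically, the method of conditioning sums singly-connected results over all instantiations of the loop-cutset: P(x | e) = sum over c of P(x | c, e) P(c | e). A minimal sketch in which polytree_posterior and joint_weight are hypothetical callbacks standing in for a polytree propagation pass (the paper's actual contribution, the cutset-selection heuristic, is not shown):

    from itertools import product

    def cutset_conditioning(cutset_domains, polytree_posterior, joint_weight):
        """Combine polytree answers over every cutset instantiation.

        cutset_domains:     dict cutset node -> iterable of its values
        polytree_posterior: callback(c) -> P(X = x | c, e), computed by
                            singly-connected belief propagation with the
                            instantiation c clamped as extra evidence
        joint_weight:       callback(c) -> unnormalized weight of c given
                            the evidence (normalized below)
        """
        names = list(cutset_domains)
        total, acc = 0.0, 0.0
        for values in product(*(cutset_domains[n] for n in names)):
            c = dict(zip(names, values))
            w = joint_weight(c)
            acc += w * polytree_posterior(c)
            total += w
        return acc / total

The cost grows exponentially with the cutset size, which is why a heuristic for finding a small, valid loop-cutset matters.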
Global Conditioning for Probabilistic Inference in Belief Networks
Uncertainty Proceedings 1994, 1994
In this paper we propose a new approach to probabilistic inference on belief networks, global conditioning, which is a simple generalization of Pearl's (1986b) method of loop-cutset conditioning. We show that global conditioning, as well as loop-cutset conditioning, can be thought of as a special case of the method of Lauritzen and Spiegelhalter (1988) as refined by Jensen et al. (1990a; 1990b). Nonetheless, this approach provides new opportunities for parallel processing and, in the case of sequential processing, a tradeoff of time for memory. We also show how a hybrid method (Suermondt et al., 1990) combining loop-cutset conditioning with Jensen's method can be viewed within our framework. By exploring the relationships between these methods, we develop a unifying framework in which the advantages of each approach can be combined successfully.
Approximating Probabilistic Inference in Bayesian Belief Networks
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1993
A belief network comprises a graphical representation of dependencies between variables of a domain and a set of conditional probabilities associated with each dependency. Unless P=NP, an efficient, exact algorithm does not exist to compute probabilistic inference in belief networks. Stochastic simulation methods, which often improve run times, provide an alternative to exact inference algorithms. We present such a stochastic simulation algorithm, D-BNRAS, which is a randomized approximation scheme. To analyze the run time, we parameterize belief networks by the dependence value D_E, which is a measure of the cumulative strengths of the belief network dependencies given background evidence E. This parameterization defines the class of f-dependence networks. The run time of D-BNRAS is polynomial when f is a polynomial function. Thus, the results of this paper prove the existence of a class of belief networks for which inference approximation is polynomial and, hence, provably faster than any exact algorithm.