A simple approach to Bayesian network computations

Factorisation of probability trees and its application to inference in Bayesian networks

2002

Bayesian networks can be seen as a factorisation of a joint probability distribution over a set of variables, based on the conditional independence relations amongst the variables. In this paper we show how it is possible to achieve a finer factorisation by decomposing the original factors when certain conditions hold. The new ideas can be applied to algorithms able to deal with factorised probabilistic potentials, such as Lazy Propagation and Lazy-Penniless, as well as Monte Carlo methods based on Importance Sampling.
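
To make the finer factorisation concrete, here is a minimal Python sketch (a hypothetical example, not the authors' code): when each slice of a potential phi(A, B, C) for a fixed value of A is an outer product, phi can be stored as two smaller factors f(A, B) and g(A, C), which is exactly the kind of decomposition a factorised-potential algorithm can exploit.

    from itertools import product

    # Assumed toy tables; phi is built so that it factorises per context A = a.
    f = {(a, b): v for (a, b), v in zip(product((0, 1), repeat=2), (1.0, 2.0, 3.0, 4.0))}
    g = {(a, c): v for (a, c), v in zip(product((0, 1), repeat=2), (0.5, 0.25, 0.1, 0.9))}
    phi = {(a, b, c): f[(a, b)] * g[(a, c)] for (a, b, c) in product((0, 1), repeat=3)}

    def splits_given(a):
        # Within the context A = a, phi factorises over B and C exactly when
        # the 2x2 slice is an outer product, i.e. its determinant vanishes.
        m = [[phi[(a, b, c)] for c in (0, 1)] for b in (0, 1)]
        return abs(m[0][0] * m[1][1] - m[0][1] * m[1][0]) < 1e-12

    assert splits_given(0) and splits_given(1)

When the test succeeds, an algorithm can keep the two smaller factors instead of the full table, shrinking the intermediate potentials handled during inference.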

An Algorithm for the Construction of Bayesian Network Structures from Data

Uncertainty in Artificial Intelligence, 1993

Previous algorithms for the construction of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches: CI tests are used to generate an ordering on the nodes from the database, which is then used to recover the underlying Bayesian network structure using a non-CI-based method. Results of a preliminary evaluation of the algorithm on two networks (ALARM and LED) are presented. We also discuss some algorithm performance issues and open problems.
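
The ordering-dependent second phase can be realised by a score-and-search procedure; below is a minimal Python sketch in the style of the Cooper-Herskovits K2 score (synthetic data and names are assumptions for illustration, and the CI-test phase that produces the ordering is omitted).

    import collections, math, random

    def k2_score(data, child, parents, r):
        # Log K2 marginal likelihood of `child` given a candidate parent set,
        # from counts in `data` (rows are tuples of variable values 0..r-1).
        cfg_counts = collections.Counter(tuple(row[p] for p in parents) for row in data)
        cell_counts = collections.Counter(
            (tuple(row[p] for p in parents), row[child]) for row in data)
        score = 0.0
        for cfg, n_j in cfg_counts.items():
            score += math.lgamma(r) - math.lgamma(n_j + r)
            for k in range(r):
                score += math.lgamma(cell_counts[(cfg, k)] + 1)
        return score

    def k2(data, order, r, max_parents=2):
        # Greedy parent search along a fixed node ordering.
        parents = {v: [] for v in order}
        for i, v in enumerate(order):
            best = k2_score(data, v, parents[v], r)
            improved = True
            while improved and len(parents[v]) < max_parents:
                improved = False
                for c in order[:i]:
                    if c in parents[v]:
                        continue
                    s = k2_score(data, v, parents[v] + [c], r)
                    if s > best:
                        best, best_c, improved = s, c, True
                if improved:
                    parents[v].append(best_c)
        return parents

    random.seed(0)
    data = []
    for _ in range(500):                 # synthetic data where B depends on A
        a = int(random.random() < 0.5)
        b = int(random.random() < (0.9 if a else 0.1))
        data.append((a, b))
    print(k2(data, order=[0, 1], r=2))   # expected: {0: [], 1: [0]}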

Exploiting causal independence in Bayesian network inference

arXiv preprint cs/9612101, 1996

A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as "or", "sum" or "max", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms.
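
For the "or" instance of such an operator, here is a minimal Python sketch (toy probabilities assumed, not from the paper) checking that a noisy-OR conditional equals the combination of one small factor per parent, which is the finer-grain factorisation the abstract describes.

    from itertools import product

    q = [0.8, 0.5, 0.3]   # assumed per-cause activation probabilities

    def noisy_or(causes):
        # Closed form: the effect is absent only if every active cause fails.
        fail = 1.0
        for qi, ci in zip(q, causes):
            if ci:
                fail *= 1.0 - qi
        return 1.0 - fail

    def via_contributions(causes):
        # The same value from one small factor per parent: hidden e_i is true
        # with probability q_i when cause c_i holds, and e = e_1 or e_2 or e_3.
        total = 0.0
        for es in product((0, 1), repeat=len(causes)):
            pr = 1.0
            for qi, ci, ei in zip(q, causes, es):
                p1 = qi if ci else 0.0
                pr *= p1 if ei else 1.0 - p1
            if any(es):
                total += pr
        return total

    for causes in product((0, 1), repeat=3):
        assert abs(noisy_or(causes) - via_contributions(causes)) < 1e-12

Because "or" is associative and commutative, an algorithm like VE can absorb the parents' contributions one at a time instead of ever building the full conditional table.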

A Hybrid Algorithm to Compute Marginal and Joint Beliefs in Bayesian Networks and Its Complexity

1998

There exist two general forms of exact algorithms for updating probabilities in Bayesian networks. The first approach involves using a structure, usually a clique tree, and performing local message-based calculation to extract the belief in each variable. The second general class of algorithm involves the use of non-serial dynamic programming techniques to extract the belief in some desired group of variables. In this paper we present a hybrid algorithm based on the latter approach yet possessing the ability to retrieve the belief in all single variables. The technique is advantageous in that it saves an NP-hard computation step over using one algorithm of each type. Furthermore, this technique reinforces a conjecture of Jensen and Jensen in that it still requires a single NP-hard step to set up the structure on which inference is performed, as we show by confirming Li and D'Ambrosio's [LD94] conjectured NP-hardness of OFP.

Construction of Bayesian network structures from data: A brief survey and an efficient algorithm

International Journal of Approximate Reasoning, 1995

Previous algorithms for the recovery of Bayesian belief network structures from data have been either highly dependent on conditional independence (CI) tests, or have required an ordering on the nodes to be supplied by the user. We present an algorithm that integrates these two approaches: CI tests are used to generate an ordering on the nodes from the database, which is then used to recover the underlying Bayesian network structure using a non-CI-test-based method. Results of the evaluation of the algorithm on a number of databases (e.g., ALARM, LED and SOYBEAN) are presented. We also discuss some algorithm performance issues and open problems.

Context-specific independence in Bayesian networks

Proceedings of the Twelfth …, 1996

Bayesian networks provide a language for qualitatively representing the conditional independence properties of a distribution. This allows a natural and compact representation of the distribution, eases knowledge acquisition, and supports effective inference algorithms. It is well-known, however, that there are certain independencies that we cannot capture qualitatively within the Bayesian network structure: independencies that hold only in certain contexts, i.e., given a specific assignment of values to certain variables. In this paper, we propose a formal notion of context-specific independence (CSI), based on regularities in the conditional probability tables (CPTs) at a node. We present a technique, analogous to (and based on) d-separation, for determining when such independence holds in a given network. We then focus on a particular qualitative representation scheme, tree-structured CPTs, for capturing CSI. We suggest ways in which this representation can be used to support effective inference algorithms. In particular, we present a structural decomposition of the resulting network which can improve the performance of clustering algorithms, and an alternative algorithm based on cutset conditioning.
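
A minimal Python sketch of a tree-structured CPT (toy numbers assumed, not from the paper): internal nodes test a variable, leaves store P(Y = 1) for an entire context, so a branch that never tests a variable encodes CSI directly.

    # Given A = 1, Y is independent of B (one shared leaf);
    # given A = 0, Y depends on B (two leaves).
    tree = ("A", {1: 0.9,
                  0: ("B", {0: 0.2, 1: 0.7})})

    def lookup(node, assignment):
        if not isinstance(node, tuple):
            return node                    # leaf: P(Y = 1) in this context
        var, branches = node
        return lookup(branches[assignment[var]], assignment)

    print(lookup(tree, {"A": 1, "B": 0}))  # 0.9; B is never consulted when A = 1
    print(lookup(tree, {"A": 0, "B": 1}))  # 0.7

Here the tree stores three leaves where the full table would store four rows; the gap widens exponentially as more context variables become irrelevant.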

Bayesian Network Inference Using Marginal Trees

Lecture Notes in Computer Science, 2014

Variable Elimination (VE) answers a query posed to a Bayesian network (BN) by manipulating the conditional probability tables of the BN. Each successive query is answered in the same manner. In this paper, we present an inference algorithm that is aimed at maximizing the reuse of past computation but does not involve precomputation. Compared to VE and a variant of VE incorporating precomputation, our approach fares favourably in preliminary experimental results.
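
The paper's marginal-tree structure is not reproduced here; as a rough illustration of reusing past computation across queries, the Python sketch below caches elimination messages so a later query can reuse them (the factor representation and names are assumptions, not the authors' design).

    from itertools import product

    def make_factor(name, vars_, values):
        # A factor over binary variables: (name, vars, table).
        table = dict(zip(product((0, 1), repeat=len(vars_)), values))
        return (name, tuple(vars_), table)

    def multiply(f1, f2):
        _, v1, t1 = f1
        _, v2, t2 = f2
        vs = tuple(dict.fromkeys(v1 + v2))          # union of variables
        table = {}
        for asg in product((0, 1), repeat=len(vs)):
            env = dict(zip(vs, asg))
            table[asg] = t1[tuple(env[v] for v in v1)] * t2[tuple(env[v] for v in v2)]
        return (f1[0] + "*" + f2[0], vs, table)

    def sum_out(f, var):
        name, vs, t = f
        keep = tuple(v for v in vs if v != var)
        table = {}
        for asg, val in t.items():
            env = dict(zip(vs, asg))
            key = tuple(env[v] for v in keep)
            table[key] = table.get(key, 0.0) + val
        return ("sum_" + var + "(" + name + ")", keep, table)

    cache = {}
    def eliminate(var, factors):
        # Reuse a previously computed message when the same factors were
        # already combined to eliminate this variable.
        key = (var, tuple(sorted(f[0] for f in factors)))
        if key not in cache:
            prod_f = factors[0]
            for f in factors[1:]:
                prod_f = multiply(prod_f, f)
            cache[key] = sum_out(prod_f, var)
        return cache[key]

    # Chain A -> B -> C with P(A), P(B|A), P(C|B).
    pA  = make_factor("pA", ["A"], [0.6, 0.4])
    pBA = make_factor("pBA", ["A", "B"], [0.7, 0.3, 0.2, 0.8])
    pCB = make_factor("pCB", ["B", "C"], [0.9, 0.1, 0.5, 0.5])

    mB  = eliminate("A", [pA, pBA])    # message over B, computed and cached
    mB2 = eliminate("A", [pBA, pA])    # a later query hits the cache instead
    assert mB is mB2
    pC = eliminate("B", [mB, pCB])     # finish the query P(C)
    print(pC[2])                       # approximately {(0,): 0.7, (1,): 0.3}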

Bayesian Network Inference with Simple Propagation

2016

Simple Propagation (SP) was recently proposed as a new join tree propagation algorithm for exact inference in discrete Bayesian networks, and empirically shown to be faster than Lazy Propagation (LP) when applied to optimal (or near-optimal) join trees built from real-world and benchmark Bayesian networks. This paper extends SP in two directions. First, we propose and empirically evaluate eight heuristics for determining elimination orderings in SP. Second, we show that the relevant potentials in SP are precisely those in LP.
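
The abstract does not list the eight heuristics; as an example of the kind of elimination-ordering heuristic involved, here is a minimal Python sketch of the standard min-fill rule (a generic illustration, not SP's implementation): at each step, eliminate the variable whose removal adds the fewest fill-in edges to the moral graph.

    def min_fill_order(adj):
        # adj maps each variable to the set of its neighbours in the moral graph.
        adj = {v: set(ns) for v, ns in adj.items()}
        order = []
        while adj:
            def fill(v):
                # Number of fill-in edges that eliminating v would add.
                ns = list(adj[v])
                return sum(1 for i in range(len(ns)) for j in range(i + 1, len(ns))
                           if ns[j] not in adj[ns[i]])
            v = min(adj, key=fill)
            ns = list(adj[v])
            for i in range(len(ns)):             # connect v's neighbours pairwise
                for j in range(i + 1, len(ns)):
                    adj[ns[i]].add(ns[j])
                    adj[ns[j]].add(ns[i])
            for n in ns:
                adj[n].discard(v)
            del adj[v]
            order.append(v)
        return order

    # Moral graph of the network A -> C <- B, C -> D (marriage edge A-B added).
    print(min_fill_order({"A": {"B", "C"}, "B": {"A", "C"},
                          "C": {"A", "B", "D"}, "D": {"C"}}))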