Pre-Processing Rules for Triangulation of Probabilistic Networks

Pre-processing for triangulation of probabilistic networks

2001

The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size. We provide a set of rules for stepwise reducing a graph. The reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by pre-processing; for other networks, huge reductions in size are obtained.
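
One well-known reduction rule of this kind is the simplicial rule: a vertex whose neighbours already form a clique can be eliminated without adding fill edges, so it can be stripped off before triangulating the remainder, and the removal steps can be reversed afterwards. The Python sketch below illustrates that single rule; the adjacency-set representation and the example graph are assumptions for illustration, not the paper's full rule set.

    def is_simplicial(adj, v):
        # True if the neighbours of v form a clique
        nbrs = list(adj[v])
        return all(b in adj[a]
                   for i, a in enumerate(nbrs) for b in nbrs[i + 1:])

    def reduce_graph(adj):
        # Repeatedly remove simplicial vertices; return (residual graph, order)
        adj = {v: set(ns) for v, ns in adj.items()}
        order = []
        changed = True
        while changed:
            changed = False
            for v in list(adj):
                if is_simplicial(adj, v):
                    for u in adj[v]:
                        adj[u].discard(v)
                    del adj[v]
                    order.append(v)
                    changed = True
        return adj, order

    # A triangle with a pendant vertex reduces to the empty graph, i.e. this
    # toy graph is triangulated optimally by pre-processing alone.
    g = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
    print(reduce_graph(g))  # ({}, ['a', 'b', 'c', 'd'])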

An extended depth-first search algorithm for optimal triangulation of Bayesian networks

International Journal of Approximate Reasoning, 2017

The junction tree algorithm is currently the most popular algorithm for exact inference on Bayesian networks. To improve the time complexity of the junction tree algorithm, we need to find a triangulation with the optimal total table size. For this purpose, Ottosen and Vomlel have proposed a depth-first search (DFS) algorithm. They also introduced several techniques to improve the DFS algorithm, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of that algorithm leave much room for improvement. First, the dynamic clique maintenance may recompute some cliques redundantly. Second, in the worst case, the DFS algorithm explores the search space of all elimination orders, which has size n!, where n is the number of variables in the Bayesian network. To mitigate these problems, we propose an extended depth-first search (EDFS) algorithm. The new EDFS algorithm introduces the following two techniques as improvements to the DFS algorithm: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach. This improvement can decrease the overhead cost of the DFS algorithm, and the pivot clique pruning reduces the size of the search space by a factor of O(n²). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does.
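
To make the first improvement concrete: every maximal clique containing an edge (u, v) is {u, v} together with a maximal clique of the subgraph induced by the common neighbours of u and v, so clique maintenance after adding a fill edge can be restricted to that subgraph. The sketch below illustrates this principle with a plain Bron-Kerbosch enumeration; the representation and example are assumptions, not the authors' exact maintenance algorithm.

    def bron_kerbosch(adj, r, p, x, out):
        # plain Bron-Kerbosch enumeration of maximal cliques
        if not p and not x:
            out.append(set(r))
            return
        for v in list(p):
            bron_kerbosch(adj, r | {v}, p & adj[v], x & adj[v], out)
            p.discard(v)
            x.add(v)

    def cliques_containing_edge(adj, u, v):
        # maximal cliques of the whole graph that contain the edge (u, v)
        common = adj[u] & adj[v]
        sub = {w: adj[w] & common for w in common}
        found = []
        bron_kerbosch(sub, set(), set(common), set(), found)
        return [c | {u, v} for c in found]

    g = {"a": {"b", "c"}, "b": {"a", "c", "d"},
         "c": {"a", "b", "d"}, "d": {"b", "c"}}
    # maximal cliques containing edge (b, c); order may vary
    print(cliques_containing_edge(g, "b", "c"))  # [{'a','b','c'}, {'b','c','d'}]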

Optimizing the triangulation of Dynamic Bayesian Networks

In this paper, we address the problem of finding good-quality elimination orders for triangulating dynamic Bayesian networks. In previous work, the authors proposed a model and an algorithm to compute such orders, but in exponential time. We show that this can be done in polynomial time by casting the problem as that of finding a minimum s-t cut in a graph. In this approach, we propose a formal definition of an interface (a set of nodes which makes the past independent of the future) and link the notion of an interface with the notion of a graph cut-set. We also propose an algorithm which computes the minimum interface of a dBN in polynomial time. Given this interface, we show how to obtain an elimination order which guarantees, both theoretically and experimentally, the quality of the triangulation.
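
A minimal illustration of this reduction, assuming a made-up moralized two-slice graph and node names, and using networkx's minimum_node_cut to compute the minimum s-t node cut:

    import networkx as nx

    G = nx.Graph()
    # moralized two-slice structure: past slice {x0, y0}, future slice {x1, y1}
    G.add_edges_from([("x0", "y0"), ("x0", "x1"), ("y0", "x1"), ("x1", "y1")])

    # attach artificial terminals so the cut may pass through any interior node
    G.add_edges_from([("s", "x0"), ("s", "y0"), ("x1", "t"), ("y1", "t")])

    interface = nx.minimum_node_cut(G, "s", "t")
    print(interface)  # {'x1'}: removing x1 makes the past independent of the future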

A Depth-First Search Algorithm for Optimal Triangulation of Bayesian Network

2012

Finding the triangulation of a Bayesian network with minimum total table size reduces the computational cost for probabilistic reasoning in the Bayesian network. This task can be done by conducting a search in the space of all possible elimination orders of the Bayesian network. However, such a search is known to be NP-hard. To address this problem, Ottosen and Vomlel (2010b) proposed a depth-first branch and bound algorithm, which reduces the computational complexity from Θ(β · |V|!) to O(β · |V|!), where β describes the overhead computation per node in the search space, and |V|! is the search space size. Nevertheless, this algorithm still entails a heavy computational cost. To mitigate this problem, this paper presents a proposal of an extended algorithm with the following features: (1) reduction of the search space to O((|V|-1)!) using Dirac's theorem, and (2) reduction of the computational cost β per node. Some simulation experiments show that the proposed method is consistently ...
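
The Python sketch below shows the shape of such a depth-first branch and bound over elimination orders: the cost of an order is the total table size (the sum over eliminations of the product of state counts in the clique formed), and a branch is pruned as soon as its partial cost reaches the best complete cost found so far. The toy network and all names are assumptions for illustration, not the paper's implementation.

    from math import prod

    def eliminate(adj, v):
        # connect v's neighbours (fill-in), remove v; return (graph, clique)
        nbrs = adj[v]
        new = {u: ((ns | nbrs) - {u, v}) if u in nbrs else (ns - {v})
               for u, ns in adj.items() if u != v}
        return new, nbrs | {v}

    def best_total_table_size(adj, card, partial=0, best=float("inf")):
        # DFS over elimination orders with branch-and-bound pruning
        if not adj:
            return partial
        for v in adj:
            new, clique = eliminate(adj, v)
            cost = partial + prod(card[u] for u in clique)
            if cost < best:  # prune branches that cannot improve on the best
                best = min(best, best_total_table_size(new, card, cost, best))
        return best

    adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c"}}
    card = {"a": 2, "b": 3, "c": 3, "d": 2}  # states per variable
    print(best_total_table_size(adj, card))  # 23 for this chain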

Comparing loop cutsets and clique trees in probabilistic inference

1997

More and more knowledge-based systems are being developed that employ the framework of Bayesian belief networks for reasoning with uncertainty. Such systems generally use for probabilistic inference either the algorithm of J. Pearl or the algorithm of S.L. Lauritzen and D.J. Spiegelhalter. These algorithms build on different graphical structures for their underlying computational architecture. By comparing these structures we examine the complexity properties of the two algorithms and show that Lauritzen and Spiegelhalter's algorithm has at most the same computational complexity as Pearl's algorithm.

Heuristic Algorithms for the Triangulation of Graphs

1994

Different uncertainty propagation algorithms in graphical structures can be viewed as a particular case of propagation in a joint tree, which can be obtained from different triangulations of the original graph. The complexity of the resulting propagation algorithms depends on the size of the resulting triangulated graph. The problem of obtaining an optimum graph triangulation is known to be NP-complete. Thus approximate algorithms which find a good triangulation in reasonable time are of particular interest. This work describes and compares several heuristic algorithms developed for this purpose.
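
One of the classic heuristics covered by such comparisons is min-fill, which at each step eliminates the vertex whose elimination adds the fewest fill edges. A minimal sketch, assuming an adjacency-set representation and a toy 4-cycle as the example graph:

    def fill_in(adj, v):
        # number of edges needed to make v's neighbourhood a clique
        nbrs = list(adj[v])
        return sum(1 for i, a in enumerate(nbrs) for b in nbrs[i + 1:]
                   if b not in adj[a])

    def min_fill_order(adj):
        adj = {v: set(ns) for v, ns in adj.items()}
        order = []
        while adj:
            v = min(adj, key=lambda u: fill_in(adj, u))
            for a in adj[v]:          # add the fill edges
                for b in adj[v]:
                    if a != b:
                        adj[a].add(b)
            for u in adj[v]:
                adj[u].discard(v)
            del adj[v]
            order.append(v)
        return order

    g = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
    print(min_fill_order(g))  # ['a', 'b', 'c', 'd'], adding one fill edge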

On Bayesian network approximation by edge deletion

Proceedings of the 21st Conference on …, 2005

We consider the problem of deleting edges from a Bayesian network for the purpose of simplifying models in probabilistic inference. In particular, we propose a new method for deleting network edges, which is based on the evidence at hand. We provide some interesting bounds on the KL-divergence between original and approximate networks, which highlight the impact of given evidence on the quality of approximation and shed some light on good and bad candidates for edge deletion. We finally demonstrate empirically the promise of the proposed edge deletion technique as a basis for approximate inference.
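
For reference, the quantity being bounded is the standard Kullback-Leibler divergence between the original distribution P and the approximation P'; the paper's specific bounds, which additionally account for the evidence at hand, are not reproduced here:

    \[
    \mathrm{KL}(P \,\|\, P') \;=\; \sum_{\mathbf{x}} P(\mathbf{x}) \, \log \frac{P(\mathbf{x})}{P'(\mathbf{x})}
    \]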

A simple approach to Bayesian network computations

Proceedings of the Biennial Conference …, 1994

The general problem of computing posterior probabilities in Bayesian networks is NP-hard (Cooper 1990). However, efficient algorithms are often possible for particular applications by exploiting problem structures. It is well understood that the key to the materialization of such a possibility is to make use of conditional independence and work with factorizations of joint probabilities rather than with joint probabilities themselves. Different exact approaches can be characterized in terms of their choices of factorizations. We propose a new approach which adopts a straightforward way of factorizing joint probabilities. In comparison with the clique tree propagation approach, our approach is very simple. It allows the pruning of irrelevant variables, it accommodates changes to the knowledge base more easily, and it is easier to implement. More importantly, it can be adapted to utilize both intercausal independence and conditional independence in one uniform framework. On the other hand, clique tree propagation is better in terms of facilitating precomputations.
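
A minimal sketch of the factorization idea: represent the joint distribution as a product of factors, multiply factors pointwise, and sum variables out one at a time. The factor representation, variable names, and toy numbers below are assumptions for illustration, not the paper's method.

    from itertools import product

    # A factor is (variables, table) where table maps value tuples to numbers.
    # All variables are binary here to keep the sketch short.

    def multiply(f, g):
        fv, ft = f
        gv, gt = g
        out_vars = tuple(dict.fromkeys(fv + gv))   # union, order preserved
        table = {}
        for vals in product((0, 1), repeat=len(out_vars)):
            asg = dict(zip(out_vars, vals))
            table[vals] = (ft[tuple(asg[v] for v in fv)] *
                           gt[tuple(asg[v] for v in gv)])
        return out_vars, table

    def sum_out(f, var):
        fv, ft = f
        keep = tuple(v for v in fv if v != var)
        table = {}
        for vals, p in ft.items():
            key = tuple(x for v, x in zip(fv, vals) if v != var)
            table[key] = table.get(key, 0.0) + p
        return keep, table

    # P(a) and P(b | a) for a tiny two-node network a -> b
    pa = (("a",), {(0,): 0.6, (1,): 0.4})
    pb = (("a", "b"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})

    joint = multiply(pa, pb)
    print(sum_out(joint, "a"))  # marginal P(b) ≈ {(0,): 0.62, (1,): 0.38}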

Incremental Compilation of Bayesian Networks Based on Maximal Prime Subgraphs

International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2011

When a Bayesian network (BN) is modified, for example by adding or deleting a node or changing the probability distributions, we will usually need a total recompilation of the model, despite feeling that a partial (re)compilation could have been enough. Especially when considering dynamic models, in which variables are added and removed very frequently, these recompilations are quite resource-consuming. Furthermore, for the task of building a model, which is in many occasions an iterative process, there is a clear lack of flexibility. When we use the term Incremental Compilation (IC), we refer to the possibility of modifying a network and avoiding a complete recompilation to obtain the new (and different) join tree (JT). The main point we intend to study in this work is JT-based inference in Bayesian networks. Apart from undertaking the triangulation problem itself, we have achieved a great improvement for compilation in BNs. We do not develop a new architecture for BNs infer...