An extended depth-first search algorithm for optimal triangulation of Bayesian networks
Related papers
A Depth-First Search Algorithm for Optimal Triangulation of Bayesian Network
2012
Finding the triangulation of a Bayesian network with minimum total table size reduces the computational cost of probabilistic reasoning in the network. This task can be done by searching the space of all possible elimination orders of the Bayesian network; however, such a search is known to be NP-hard. To relax this problem, Ottosen and Vomlel (2010b) proposed a depth-first branch and bound algorithm, which reduces the computational complexity from Θ(β·|V|!) to O(β·|V|!), where β is the overhead computation per node in the search space and |V|! is the search space size. Nevertheless, this algorithm entails a heavy computational cost. To mitigate this problem, this paper proposes an extended algorithm with the following features: (1) reduction of the search space to O((|V|−1)!) using Dirac's theorem, and (2) reduction of the computational cost β per node. Some simulation experiments show that the proposed method is consistently ...
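The cost being minimized here, the total table size induced by an elimination order, is easy to make concrete. Below is a minimal Python sketch (not the paper's branch-and-bound algorithm): eliminating a vertex turns its current neighbourhood into a clique whose table size is the product of the member variables' state counts. The graph, cardinalities, and order are purely illustrative.

```python
# Minimal sketch of the total-table-size cost for one elimination order.
from itertools import combinations

def total_table_size(adj, card, order):
    """adj: {v: set(neighbours)} of the moral graph,
    card: {v: number of states}, order: elimination order (list of v)."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    total = 0
    for v in order:
        clique = adj[v] | {v}
        size = 1
        for u in clique:
            size *= card[u]
        total += size
        # connect v's neighbours (fill-in edges), then remove v
        for a, b in combinations(adj[v], 2):
            adj[a].add(b)
            adj[b].add(a)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return total

# Toy 4-cycle a-b-c-d with binary variables:
g = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(total_table_size(g, {v: 2 for v in g}, ['a', 'b', 'c', 'd']))  # 22
```

A search over elimination orders evaluates this cost at each leaf; branch and bound prunes orders whose partial cost already exceeds the best total found so far.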
Optimizing the triangulation of Dynamic Bayesian Networks
In this paper, we address the problem of finding good-quality elimination orders for triangulating dynamic Bayesian networks. Earlier work proposed a model and an algorithm to compute such orders, but in exponential time. We show that this can be done in polynomial time by recasting the problem as that of finding a minimum s-t cut in a graph. In this approach, we give a formal definition of an interface (a set of nodes that makes the past independent of the future) and link the notion of an interface to that of a graph cut-set. We also propose an algorithm that computes the minimum interface of a dBN in polynomial time. Given this interface, we show how to obtain an elimination order that guarantees, both theoretically and experimentally, the quality of the triangulation.
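As a hedged illustration of the cut-based idea only (not the paper's exact dBN construction), the sketch below uses networkx's built-in minimum_node_cut to compute a smallest vertex separator between a "past" node s and a "future" node t in polynomial time; the toy graph is made up.

```python
# A minimum vertex separator between s and t, computed via max-flow.
import networkx as nx

G = nx.Graph()
G.add_edges_from([('s', 'a'), ('s', 'b'), ('a', 'c'),
                  ('b', 'c'), ('c', 't'), ('b', 't')])
separator = nx.minimum_node_cut(G, 's', 't')
print(separator)  # a smallest node set whose removal disconnects s from t
```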
Incremental Compilation of Bayesian Networks Based on Maximal Prime Subgraphs
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2011
When a Bayesian network (BN) is modified, for example by adding or deleting a node or changing the probability distributions, we usually need a total recompilation of the model, despite the feeling that a partial (re)compilation could have been enough. Especially when considering dynamic models, in which variables are added and removed very frequently, these recompilations are quite resource-consuming. Moreover, for the task of building a model, which is on many occasions an iterative process, there is a clear lack of flexibility. By the term Incremental Compilation (IC) we refer to the possibility of modifying a network while avoiding a complete recompilation to obtain the new (and different) join tree (JT). The main point we study in this work is JT-based inference in Bayesian networks. Apart from tackling the triangulation problem itself, we have achieved a great improvement for compilation in BNs. We do not develop a new architecture for BNs infer...
Pre-Processing Rules for Triangulation of Probabilistic Networks
2003
Currently, the most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a maximum clique size as small as possible. We provide a set of rules for stepwise reducing a graph, without losing optimality. This reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by preprocessing; for other networks, huge reductions in their graph's size are obtained.
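One classic reduction rule of this kind is the simplicial rule: a vertex whose neighbours already form a clique can be eliminated first without losing optimality. The sketch below is an illustrative reconstruction of that single rule, not the authors' full rule set; the example graph is made up.

```python
# Repeatedly strip simplicial vertices to shrink the triangulation problem.
from itertools import combinations

def is_simplicial(adj, v):
    """True if v's neighbourhood is already a clique."""
    return all(b in adj[a] for a, b in combinations(adj[v], 2))

def reduce_simplicial(adj):
    """Returns (order of removed vertices, remaining reduced graph)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    removed = []
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if is_simplicial(adj, v):
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                removed.append(v)
                changed = True
    return removed, adj

# A triangle with a pendant vertex reduces away completely:
g = {'a': {'b', 'c'}, 'b': {'a', 'c'}, 'c': {'a', 'b', 'd'}, 'd': {'c'}}
print(reduce_simplicial(g))  # (['a', 'b', 'c', 'd'], {})
```

When the reduced graph is empty, the removal order itself is an optimal elimination order; otherwise the exact search only has to handle the smaller remainder.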
Pre-processing for triangulation of probabilistic networks
2001
The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of the network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size. We provide a set of rules for stepwise reducing a graph. The reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by pre-processing; for other networks, huge reductions in size are obtained.
2006
This article presents and analyzes algorithms that systematically generate random Bayesian networks of varying difficulty levels, with respect to inference using tree clustering. The results are relevant to research on efficient Bayesian network inference, such as computing a most probable explanation or belief updating, since they allow controlled experimentation to determine the impact of improvements to inference algorithms.
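As a hedged, minimal illustration of the underlying idea (not the article's generator), one can produce random DAGs whose density, and hence inference difficulty under tree clustering, is controlled by a parameter; the `max_parents` knob below is an assumption for illustration.

```python
# Random DAG: edges only go from lower to higher index, guaranteeing acyclicity.
import random

def random_dag(n, max_parents, seed=0):
    """Returns {child: list of parent indices} over nodes 0..n-1."""
    rng = random.Random(seed)
    parents = {}
    for child in range(n):
        k = rng.randint(0, min(max_parents, child))
        parents[child] = rng.sample(range(child), k)
    return parents

print(random_dag(6, max_parents=2))
```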
An Empirical Evaluation of Costs and Benefits of Simplifying Bayesian Networks by Removing Weak Arcs
We report the results of an empirical evaluation of structural simplification of Bayesian networks by removing weak arcs. We conduct a series of experiments on six networks built from real data sets selected from the UC Irvine Machine Learning Repository. We systematically remove arcs from the weakest to the strongest, relying on four measures of arc strength, and measure the classification accuracy of the resulting simplified models. Our results show that removing up to roughly 20 percent of the weakest arcs in a network has minimal effect on its classification accuracy. At the same time, structural simplification of networks leads to significant reduction of both the amount of memory taken by the clique tree and the amount of computation needed to perform inference.
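The study relies on four specific measures of arc strength; as a stand-in, the sketch below ranks arcs by empirical mutual information between their endpoints (one simple strength proxy, not necessarily one of the paper's four measures) and drops the weakest fraction.

```python
# Prune the weakest fraction of arcs, scored by empirical mutual information.
import math
from collections import Counter

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def prune_weak_arcs(arcs, data, fraction=0.2):
    """arcs: list of (u, v); data: {variable: list of observed values}."""
    scored = sorted(arcs, key=lambda a: mutual_information(data[a[0]], data[a[1]]))
    k = int(len(scored) * fraction)
    return scored[k:]  # keep everything but the weakest `fraction`
```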
Maximal Prime Subgraph Decomposition of Bayesian Networks: A Relational Database Perspective
2005
A maximal prime subgraph decomposition junction tree (MPD-JT) is a useful computational structure that facilitates lazy propagation in Bayesian networks (BNs). A graphical method was proposed to construct an MPD-JT from a BN. In this paper, we present a new method from a relational database (RDB) perspective which sheds light on the semantic meaning of the previously proposed graphical algorithm.
Bayesian Network Inference Using Marginal Trees
Lecture Notes in Computer Science, 2014
Variable Elimination (VE) answers a query posed to a Bayesian network (BN) by manipulating the conditional probability tables of the BN. Each successive query is answered in the same manner. In this paper, we present an inference algorithm that aims to maximize the reuse of past computation but does not involve precomputation. Compared to VE and a variant of VE incorporating precomputation, our approach fares favourably in preliminary experimental results.
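For reference, here is a minimal sum-product Variable Elimination sketch (standard VE, not the paper's marginal-tree variant). A factor is a (scope, table) pair, where the scope is a tuple of variable names and the table maps assignment tuples to probabilities; all names are illustrative.

```python
# Standard sum-product variable elimination over discrete factors.
from itertools import product

def multiply(f1, f2, card):
    """Pointwise product of two factors; card maps variable -> #states."""
    scope = tuple(dict.fromkeys(f1[0] + f2[0]))
    table = {}
    for vals in product(*(range(card[v]) for v in scope)):
        a = dict(zip(scope, vals))
        table[vals] = (f1[1][tuple(a[v] for v in f1[0])] *
                       f2[1][tuple(a[v] for v in f2[0])])
    return scope, table

def sum_out(f, var):
    """Marginalize `var` out of factor f."""
    scope = tuple(v for v in f[0] if v != var)
    table = {}
    for vals, p in f[1].items():
        key = tuple(x for x, name in zip(vals, f[0]) if name != var)
        table[key] = table.get(key, 0.0) + p
    return scope, table

def variable_elimination(factors, order, card):
    """Sum out each variable in `order`, combining the factors that mention it."""
    for var in order:
        related = [f for f in factors if var in f[0]]
        if not related:
            continue
        factors = [f for f in factors if var not in f[0]]
        prod = related[0]
        for f in related[1:]:
            prod = multiply(prod, f, card)
        factors.append(sum_out(prod, var))
    return factors
```

Each fresh query reruns this whole pipeline; the reuse-oriented approach described above instead tries to recycle the intermediate factors that successive queries share.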