Optimizing the triangulation of Dynamic Bayesian Networks
Related papers
An extended depth-first search algorithm for optimal triangulation of Bayesian networks
International Journal of Approximate Reasoning, 2017
The junction tree algorithm is currently the most popular algorithm for exact inference on Bayesian networks. To improve the time complexity of the junction tree algorithm, we need to find a triangulation with the optimal total table size. For this purpose, Ottosen and Vomlel have proposed a depth-first search (DFS) algorithm. They also introduced several techniques to improve the DFS algorithm, including dynamic clique maintenance and coalescing map pruning. Nevertheless, the efficiency and scalability of that algorithm leave much room for improvement. First, the dynamic clique maintenance may redundantly recompute some cliques. Second, in the worst case, the DFS algorithm explores the search space of all elimination orders, which has size n!, where n is the number of variables in the Bayesian network. To mitigate these problems, we propose an extended depth-first search (EDFS) algorithm. The EDFS algorithm introduces the following two techniques as improvements to the DFS algorithm: (1) a new dynamic clique maintenance algorithm that computes only those cliques that contain a new edge, and (2) a new pruning rule, called pivot clique pruning. The new dynamic clique maintenance algorithm explores a smaller search space and runs faster than the Ottosen and Vomlel approach, which decreases the overhead cost of the DFS algorithm; the pivot clique pruning reduces the size of the search space by a factor of O(n^2). Our empirical results show that our proposed algorithm finds an optimal triangulation markedly faster than the state-of-the-art algorithm does.
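The objective these search algorithms optimize, the total table size of a triangulation, can be computed directly from an elimination order. Below is a minimal sketch (not the paper's DFS search): eliminating each variable forms a clique from the variable and its current neighbours, sums the product of their state counts, and adds fill-in edges. The graph, order, and cardinalities are illustrative; summing over elimination cliques may over-count non-maximal cliques, which search algorithms such as Ottosen and Vomlel's handle more carefully.

```python
from itertools import combinations

def total_table_size(adj, order, card):
    """Triangulate a moral graph by eliminating variables in `order` and
    return the total table size: for each eliminated variable v, the clique
    {v} + neighbours(v) contributes the product of its variables' state counts."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    total = 0
    for v in order:
        nbrs = adj[v]
        size = card[v]
        for u in nbrs:
            size *= card[u]
        total += size
        # Fill-in: make the neighbours of v a clique, then remove v.
        for a, b in combinations(nbrs, 2):
            adj[a].add(b)
            adj[b].add(a)
        for u in nbrs:
            adj[u].discard(v)
        del adj[v]
    return total

# Toy example: a 4-cycle a-b-c-d over binary variables.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
card = {v: 2 for v in adj}
print(total_table_size(adj, ["a", "b", "c", "d"], card))  # 22
```

Searching over all n! such orders for the minimum total is exactly the space the DFS and EDFS algorithms prune.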
A Depth-First Search Algorithm for Optimal Triangulation of Bayesian Network
2012
Finding the triangulation of a Bayesian network with minimum total table size reduces the computational cost of probabilistic reasoning in the Bayesian network. This task can be done by conducting a search in the space of all possible elimination orders of the Bayesian network. However, such a search is known to be NP-hard. To mitigate this problem, Ottosen and Vomlel (2010b) proposed a depth-first branch and bound algorithm, which reduces the computational complexity from Θ(β·|V|!) to O(β·|V|!), where β describes the overhead computation per node in the search space, and |V|! is the search space size. Nevertheless, this algorithm entails a heavy computational cost. To mitigate this problem, this paper proposes an extended algorithm with the following features: (1) reduction of the search space to O((|V|-1)!) using Dirac's theorem, and (2) reduction of the computational cost β per node. Some simulation experiments show that the proposed method is consistently ...
Pre-Processing Rules for Triangulation of Probabilistic Networks
2003
Currently, the most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a maximum clique size as small as possible. We provide a set of rules for stepwise reducing a graph, without losing optimality. This reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by preprocessing; for other networks, huge reductions in their graph's size are obtained.
Pre-processing for triangulation of probabilistic networks
2001
The currently most efficient algorithm for inference with a probabilistic network builds upon a triangulation of a network's graph. In this paper, we show that pre-processing can help in finding good triangulations for probabilistic networks, that is, triangulations with a minimal maximum clique size. We provide a set of rules for stepwise reducing a graph. The reduction allows us to solve the triangulation problem on a smaller graph. From the smaller graph's triangulation, a triangulation of the original graph is obtained by reversing the reduction steps. Our experimental results show that the graphs of some well-known real-life probabilistic networks can be triangulated optimally just by pre-processing; for other networks, huge reductions in size are obtained.
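One of the best-known reduction rules of this kind removes simplicial vertices: vertices whose neighbours already form a clique. Such a vertex needs no fill-in edges, so deleting it preserves optimality, and the size of its clique is a lower bound on the optimum for the rest of the graph. The sketch below illustrates only this single rule under those assumptions; the papers above combine several such rules.

```python
from itertools import combinations

def reduce_simplicial(adj):
    """Repeatedly delete simplicial vertices (neighbours form a clique).
    Returns the reduced graph, the removal order (reversed to rebuild the
    triangulation), and a lower bound on the maximum clique size."""
    adj = {v: set(ns) for v, ns in adj.items()}
    removed, low = [], 0
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            nbrs = adj[v]
            if all(b in adj[a] for a, b in combinations(nbrs, 2)):
                low = max(low, len(nbrs) + 1)  # clique {v} + nbrs
                for u in nbrs:
                    adj[u].discard(v)
                del adj[v]
                removed.append(v)
                changed = True
    return adj, removed, low

# A triangle a-b-c with a pendant vertex d reduces away completely.
adj = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"c"}}
rest, removed, low = reduce_simplicial(adj)
print(len(rest), low)  # 0 3
```

When the reduction empties the graph, as here, the lower bound is the optimal maximum clique size and the reversed removal order is a perfect elimination order.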
Incremental Compilation of Bayesian Networks Based on Maximal Prime Subgraphs
International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2011
When a Bayesian network (BN) is modified, for example by adding or deleting a node or changing the probability distributions, we will usually need a total recompilation of the model, despite feeling that a partial (re)compilation could have been enough. Especially when considering dynamic models, in which variables are added and removed very frequently, these recompilations are quite resource consuming. Even further, for the task of building a model, which is on many occasions an iterative process, there is a clear lack of flexibility. When we use the term Incremental Compilation or IC, we refer to the possibility of modifying a network and avoiding a complete recompilation to obtain the new (and different) join tree (JT). The main point we intend to study in this work is JT-based inference in Bayesian networks. Apart from undertaking the triangulation problem itself, we have achieved a great improvement for the compilation in BNs. We do not develop a new architecture for BNs infer...
Incremental compilation of Bayesian networks
Uncertainty in Artificial Intelligence, 2003
Most methods for exact probability propagation in Bayesian networks do not carry out the inference directly over the network, but over a secondary structure known as a junction tree or a join tree (JT). The process of obtaining a JT is usually termed compilation. As compilation is usually viewed as a whole process, each time the network is modified, a new compilation process has to be performed. The possibility of reusing an already existing JT in order to obtain the new one, regarding only the modifications in the network, has received only little attention in the literature. In this paper we present a method for incremental compilation of a Bayesian network, following the classical scheme in which triangulation plays the key role. In order to perform incremental compilation we propose to recompile only those parts of the JT which may have been affected by the network's modifications. To do so, we exploit the technique of maximal prime subgraph decomposition in determining the minimal subgraph(s) that have to be recompiled, and thereby the minimal subtree(s) of the JT that should be replaced by new subtree(s). We focus on structural modifications: addition and deletion of links and variables.
On a simple method for testing independencies in Bayesian networks
Computational Intelligence, 2017
Testing independencies is a fundamental task in reasoning with Bayesian networks (BNs). In practice, d-separation is often used for this task, since it has linear-time complexity. However, many have had difficulties understanding d-separation in BNs. An equivalent method that is easier to understand, called m-separation, transforms the problem from directed separation in BNs into classical separation in undirected graphs. Two main steps of this transformation are pruning the BN and adding undirected edges. In this paper, we propose u-separation as an even simpler method for testing independencies in a BN. Our approach also converts the problem into classical separation in an undirected graph. However, our method is based upon the novel concepts of inaugural variables and rationalization. The primary advantage of u-separation over m-separation is that m-separation can prune unnecessarily and add superfluous edges, which u-separation avoids. Our experimental results show that u-separation performs 73% fewer modifications on average than m-separation.
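The classical "directed separation to undirected separation" route that the abstract alludes to can be sketched in a few lines: restrict the DAG to the ancestors of the query variables, moralize (marry co-parents and drop directions), and test ordinary graph separation. This is a minimal sketch of that textbook construction, not the paper's u-separation; the graph and function name are illustrative.

```python
def moral_separation(parents, x, y, z):
    """Test X independent of Y given Z in a DAG given as {child: parents}:
    (1) keep only ancestors of X|Y|Z, (2) moralize, (3) check whether Z
    blocks every undirected path from X to Y."""
    # 1. Ancestral subgraph.
    keep, stack = set(), list(x | y | z)
    while stack:
        v = stack.pop()
        if v not in keep:
            keep.add(v)
            stack.extend(parents.get(v, ()))
    # 2. Moralize: connect co-parents, then drop edge directions.
    und = {v: set() for v in keep}
    for v in keep:
        ps = [p for p in parents.get(v, ()) if p in keep]
        for p in ps:
            und[v].add(p); und[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                und[ps[i]].add(ps[j]); und[ps[j]].add(ps[i])
    # 3. Reachability from X in the moral graph, avoiding Z.
    seen, stack = set(), [v for v in x if v not in z]
    while stack:
        v = stack.pop()
        if v in seen:
            continue
        seen.add(v)
        stack.extend(u for u in und[v] if u not in z)
    return seen.isdisjoint(y)

# Collider a -> c <- b: a and b are independent, but conditioning on c
# (the collider) connects them via the moral edge a-b.
parents = {"c": ["a", "b"]}
print(moral_separation(parents, {"a"}, {"b"}, set()))   # True
print(moral_separation(parents, {"a"}, {"b"}, {"c"}))   # False
```

The pruning and edge-adding steps in this construction are exactly the ones the paper argues can be unnecessary or superfluous, motivating u-separation.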
Maximal Prime Subgraph Decomposition of Bayesian Networks: A Relational Database Perspective
2005
A maximal prime subgraph decomposition junction tree (MPD-JT) is a useful computational structure that facilitates lazy propagation in Bayesian networks (BNs). A graphical method was proposed to construct an MPD-JT from a BN. In this paper, we present a new method from a relational database (RDB) perspective which sheds light on the semantic meaning of the previously proposed graphical algorithm.
Characteristic imset: a simple algebraic representative of a Bayesian network structure
First, we recall the basic idea of an algebraic and geometric approach to learning a Bayesian network (BN) structure proposed in : to represent every BN structure by a certain uniquely determined vector. The original proposal was to use a so-called standard imset, a vector having integers as components, as an algebraic representative of a BN structure. In this paper we propose an even simpler algebraic representative called the characteristic imset. It is a 0-1 vector obtained from the standard imset by an affine transformation. This implies that every reasonable quality criterion is an affine function of the characteristic imset. The characteristic imset is much closer to the graphical description: we establish a simple relation to any chain graph without flags that defines the BN structure. In particular, we are interested in the relation to the essential graph, which is a classic graphical representative of a BN structure. In the end, we discuss two special cases in which the use of characteristic imsets particularly simplifies things: learning decomposable models and (undirected) forests.
Incremental thin junction trees for dynamic Bayesian networks
2004
In this paper, we study the relationship between the thin junction tree filter (TJTF) [Pas03] and the Boyen-Koller (BK) algorithm for approximate inference in discrete dynamic Bayesian networks. First, we review the TJTF for discrete networks and cast the BK algorithm as a special case of TJTF. Then, we employ a TJTF to automatically compute conditionally independent clusters for the BK algorithm. Theoretical work by Boyen and Koller showed that using conditionally independent clusters strongly improves BK's error bounds, and we demonstrate that the theoretical results carry over to practice. We achieve a contract anytime algorithm which is superior to BK with marginally independent clusters and faster than TJTF in its general form.