Unified conditional probability density functions for hybrid Bayesian networks

Scalable Inference for Hybrid Bayesian Networks with Full Density Estimations

The simplest hybrid Bayesian network is the Conditional Linear Gaussian (CLG) model, a hybrid model for which exact inference can be performed by the Junction Tree (JT) algorithm. However, the traditional JT algorithm provides only the exact first two moments for hidden continuous variables. In general, the complexity of exact inference algorithms is exponential in the size of the largest clique of the strongly triangulated graph, which is usually the clique containing all discrete parent nodes of a connected continuous component in the model. Furthermore, for general nonlinear, non-Gaussian hybrid models, it is well known that no exact inference is possible. This paper introduces a new inference approach that unifies message passing between different types of variables. The algorithm provides an exact solution for polytree CLG models and an approximate solution, by loopy propagation, for general hybrid models. To overcome the exponential complexity, we use Gaussian mixture reduction methods to approximate the original density and make the algorithm scalable. This new algorithm provides not only the first two moments but full density estimates. Empirically, the approximation errors due to reduced Gaussian mixtures and loopy propagation are relatively small, especially for nodes that are far away from the discrete parent nodes. Numerical experiments show encouraging results.
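The Gaussian mixture reduction step mentioned in this abstract can be sketched as a greedy moment-preserving merge of components. This is an illustrative simplification, not the paper's algorithm: published reduction methods (e.g., Runnalls' KL-based rule) use more principled pairing costs than the nearest-means rule below, and the mixture parameters here are invented.

```python
# Greedy 1-D Gaussian mixture reduction by moment-preserving pairwise
# merging. Each component is a (weight, mean, variance) triple. The
# pairing rule (merge the two components with closest means) is a
# simplification of KL-cost rules used in the literature.

def merge_pair(w1, m1, v1, w2, m2, v2):
    """Merge two weighted Gaussians, preserving overall mean and variance."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(weights, means, variances, max_components):
    comps = list(zip(weights, means, variances))
    while len(comps) > max_components:
        # pick the pair of components whose means are closest
        i, j = min(
            ((a, b) for a in range(len(comps)) for b in range(a + 1, len(comps))),
            key=lambda ab: abs(comps[ab[0]][1] - comps[ab[1]][1]),
        )
        merged = merge_pair(*comps[i], *comps[j])
        comps = [c for idx, c in enumerate(comps) if idx not in (i, j)] + [merged]
    return comps

# hypothetical 3-component mixture reduced to 2 components
reduced = reduce_mixture([0.3, 0.3, 0.4], [0.0, 0.2, 5.0], [1.0, 1.0, 2.0],
                         max_components=2)
```

Because each merge matches the first two moments of the pair it replaces, the overall mean and variance of the mixture are preserved exactly; only higher moments are approximated.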

Hybrid Bayesian Networks with Linear Deterministic Variables

Uncertainty in Artificial Intelligence, 2005

When a hybrid Bayesian network has conditionally deterministic variables with continuous parents, the joint density function for the continuous variables does not exist. Conditional linear Gaussian distributions can handle such cases when the continuous variables have a multivariate normal distribution and the discrete variables do not have continuous parents. In this paper, operations required for …

Extended Shenoy-Shafer architecture for inference in hybrid Bayesian networks with deterministic conditionals

International Journal of Approximate Reasoning, 2011

The main goal of this paper is to describe an architecture for solving large general hybrid Bayesian networks (BNs) with deterministic conditionals for continuous variables using local computation. In the presence of deterministic conditionals for continuous variables, we have to deal with the non-existence of the joint density function for the continuous variables. We represent deterministic conditional distributions for continuous …

Operations for inference in continuous Bayesian networks with linear deterministic variables

International Journal of Approximate Reasoning, 2006

An important class of continuous Bayesian networks is the class of networks with linear conditionally deterministic variables (a variable that is a linear deterministic function of its parents). In this case, the joint density function for the variables in the network does not exist. Conditional linear Gaussian (CLG) distributions can handle such cases when all variables are normally distributed. In this paper, we develop the operations required for performing inference with linear conditionally deterministic variables in continuous Bayesian networks, using relationships derived from joint cumulative distribution functions (CDFs). These methods allow inference in networks with linear deterministic variables and non-Gaussian distributions.
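The non-existence of a joint density in the presence of a linear deterministic variable does not prevent its marginal distribution from being well defined. As an illustration (this is not the paper's CDF-based machinery), for a deterministic node Z = X + Y with independent Gaussian parents, the marginal of Z has a simple closed form that a Monte Carlo check recovers; the parameter values below are invented for the sketch.

```python
# For Z = X + Y with independent Gaussian parents, the joint density of
# (X, Y, Z) does not exist (it concentrates on a plane), but the marginal
# of Z is Gaussian with summed mean and variance. Monte Carlo check:
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(1.0, 2.0, size=n)    # X ~ N(1, 2^2)      (invented params)
y = rng.normal(-0.5, 1.5, size=n)   # Y ~ N(-0.5, 1.5^2) (invented params)
z = x + y                           # linear deterministic conditional

closed_form_mean = 1.0 + (-0.5)         # = 0.5
closed_form_var = 2.0 ** 2 + 1.5 ** 2   # = 6.25
print(z.mean(), z.var())                # close to 0.5 and 6.25
```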

Inference in hybrid Bayesian networks

Reliability Engineering & …, 2009

Since the 1980s, Bayesian Networks (BNs) have become increasingly popular for building statistical models of complex systems. This is particularly true for Boolean systems, where BNs often prove to be a more efficient modelling framework than traditional reliability techniques (like fault trees and reliability block diagrams). However, limitations in the BNs' calculation engine have prevented BNs from becoming equally popular for domains containing mixtures of both discrete and continuous variables (so-called hybrid domains). In this paper we focus on these difficulties and summarize some of the last decade's research on inference in hybrid Bayesian networks. The discussions are linked to an example model for estimating human reliability.

A Review of Inference Algorithms for Hybrid Bayesian Networks

Journal of Artificial Intelligence Research, 2018

Hybrid Bayesian networks have received increasing attention in recent years. The difference with respect to standard Bayesian networks is that they can host discrete and continuous variables simultaneously, which extends the applicability of the Bayesian network framework in general. However, this extra feature also comes at a cost: inference in these types of models is computationally more challenging, and the underlying models and updating procedures may not even support closed-form solutions. In this paper we provide an overview of the main trends and principled approaches for performing inference in hybrid Bayesian networks. The methods covered in the paper are organized and discussed according to their methodological basis. We consider how the methods have been extended and adapted to also include (hybrid) dynamic Bayesian networks, and we end with an overview of established software systems supporting inference in these types of models.

Some practical issues in inference in hybrid Bayesian networks with deterministic conditionals

2011

In this paper we discuss some practical issues that arise in solving hybrid Bayesian networks that include deterministic conditionals for continuous variables. We show how exact inference can become intractable even for small networks, due to the difficulty in handling deterministic conditionals (for continuous variables). We propose some strategies for carrying out the inference task using mixtures of polynomials and mixtures of truncated exponentials. Mixtures of polynomials can be defined on hypercubes or hyper-rhombuses. We compare these two methods. A key strategy is to re-approximate large potentials with potentials consisting of fewer pieces and lower degrees/number of terms. We discuss several methods for re-approximating potentials. We illustrate our methods in a practical application consisting of solving a stochastic PERT network.
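The re-approximation strategy described in this abstract (replacing a large potential with one of fewer pieces and lower degree) can be illustrated with a least-squares polynomial fit on a single piece. The degree-8 "potential" below is invented for the sketch and is not from the paper; real mixture-of-polynomials potentials are piecewise and the fit would be applied per piece.

```python
# Re-approximate a high-degree polynomial potential on [0, 1] with a
# lower-degree least-squares fit, trading a small approximation error
# for cheaper downstream integration and multiplication.
import numpy as np

xs = np.linspace(0.0, 1.0, 400)
high_degree = np.polynomial.Polynomial(
    [1.0, -2.0, 3.0, -1.5, 0.5, 0.2, -0.1, 0.05, 0.01]  # invented coefficients
)
values = high_degree(xs)

# re-approximate with a degree-3 polynomial on the same interval
low_degree = np.polynomial.Polynomial.fit(xs, values, deg=3)
max_err = np.max(np.abs(low_degree(xs) - values))
```

Because the high-degree terms have small coefficients on [0, 1], the cubic fit stays close to the original potential while being much cheaper to manipulate repeatedly during propagation.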

Modeling Conditional Distributions of Continuous Variables in Bayesian Networks

Lecture Notes in Computer Science, 2005

The MTE (mixture of truncated exponentials) model was introduced as a general solution to the problem of specifying conditional distributions for continuous variables in Bayesian networks, especially as an alternative to discretization. In this paper we compare the behavior of two different approaches for constructing conditional MTE models in an example taken from finance, a domain where uncertain variables commonly have continuous conditional distributions.
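On a single interval, an MTE potential has the form f(x) = k0 + sum_i k_i * exp(b_i * x), and its integral is available in closed form, which is what makes MTEs convenient for inference. The coefficients below are invented for illustration (they are not a published MTE approximation) and are normalized so the potential integrates to one on its interval.

```python
# Toy one-piece MTE density on [a, b]:
#   f(x) = (k0 + sum_i k_i * exp(b_i * x)) / Z
# where Z is computed exactly from the closed-form integral of each term.
import math

a, b = 0.0, 2.0
# (k_i, b_i) pairs; b_i = 0 gives the constant term. Invented values.
terms = [(0.5, 0.0), (1.0, -1.0), (0.3, 0.8)]

def raw(x):
    return sum(k * math.exp(r * x) for k, r in terms)

# exact integral of each exponential term over [a, b]
integral = sum(
    k * (b - a) if r == 0 else k * (math.exp(r * b) - math.exp(r * a)) / r
    for k, r in terms
)

def mte_density(x):
    return raw(x) / integral
```

The same closed-form integration carries over to products of MTE potentials, since a product of exponentials is again an exponential; this closure property is what the MTE framework exploits.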

Hybrid Copula Bayesian Networks

2016

This paper introduces the hybrid copula Bayesian network (HCBN) model, a generalization of the copula Bayesian network (CBN) model developed by Elidan (2010) for continuous random variables to multivariate mixed probability distributions of discrete and continuous random variables. To this end, we extend the theorems proved by Nešlehová (2007) from bivariate to multivariate copulas with discrete and continuous marginal distributions. Using the multivariate copula with discrete and continuous marginal distributions as a theoretical basis, we construct an HCBN that can model all possible permutations of discrete and continuous random variables for parent and child nodes, unlike the popular conditional linear Gaussian network model. Finally, we demonstrate on numerous synthetic datasets and a real-life dataset that our HCBN compares favorably, from a modeling and flexibility viewpoint, to other hybrid models, including the conditional linear Gaussian and mixture of truncated exponentials models.
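The core idea of coupling discrete and continuous marginals through a copula can be sketched under simplifying assumptions (a bivariate Gaussian copula rather than the paper's multivariate construction; the correlation value and the marginals below are invented for illustration).

```python
# Sample a (continuous, discrete) pair whose dependence is induced by a
# bivariate Gaussian copula: draw correlated standard normals, push them
# through the normal CDF to get dependent uniforms, then apply the
# inverse of each marginal (Exponential(1) and Bernoulli(0.5) here).
import math
import numpy as np

rng = np.random.default_rng(1)
rho = 0.7                                   # invented copula correlation
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # N(0,1) CDF
u = np.vectorize(phi)(z)                    # dependent uniforms

x = -np.log(1.0 - u[:, 0])                  # continuous marginal: Exp(1)
k = (u[:, 1] > 0.5).astype(int)             # discrete marginal: Bernoulli(0.5)

# dependence survives the mixed marginals:
print(x[k == 1].mean(), x[k == 0].mean())   # conditional means differ
```

With a positive copula correlation, large values of the continuous variable co-occur with the discrete outcome 1, even though the two marginals are of entirely different types; this is the flexibility the HCBN generalizes to networks of mixed variables.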