Generating explanations and tutorial problems from Bayesian networks

Supporting the construction of explanation models and diagnostic reasoning in probabilistic domains

MEDICUS (modeling, explanation, and diagnostic support for complex, uncertain subject matters) is an intelligent modeling and diagnosis environment designed to support the construction of explanation models and diagnostic reasoning in domains where knowledge is complex, fragile, and uncertain. MEDICUS is developed in collaboration with several medical institutions in the epidemiological fields of environmentally caused diseases and human genetics. Uncertainty is handled by the Bayesian network approach. In modeling, the user creates a Bayesian network for the problem at hand, receiving help and explanations from the system. This differs from existing reasoning systems based on Bayesian networks, e.g. in medical domains, which contain a built-in knowledge base that may be used but not created or modified by the user. MEDICUS supports diagnostic reasoning by proposing diagnostic hypotheses and recommending examinations. In this paper we focus on the modeling component of MEDICUS.

An intelligent problem solving environment for designing explanation models and for diagnostic reasoning in probabilistic domains

Intelligent Tutoring Systems, 1996

MEDICUS is an Intelligent Problem Solving Environment (IPSE) currently under development. It is designed to support (i) the construction of explanation models and (ii) the training of diagnostic reasoning and hypothesis testing in domains of complex, fragile, and uncertain knowledge. MEDICUS is currently developed and applied in the epidemiological fields of environmentally caused diseases and human genetics. Uncertainty is handled by the Bayesian network approach. Thus the modelling task for the learner consists of creating a Bayesian network for the problem at hand. Learners may test hypotheses about the model, and the system provides help. This differs from existing reasoning systems based on Bayesian networks, e.g. in medical domains, which contain a built-in knowledge base that may be used but not created or modified by the learner. For supporting diagnostic reasoning, MEDICUS proposes diagnostic hypotheses and examinations. This will be extended to support learners' acquisition and training of diagnostic strategies.
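To make the learner's modelling task concrete, the sketch below shows, in plain Python, what a small hand-built Bayesian network amounts to: variables with states, directed parent links, and a conditional probability table (CPT) per node. The epidemiological variables and all probabilities are invented for illustration and are not taken from MEDICUS.

```python
# A minimal, hand-built Bayesian network: nodes, parents, and CPTs.
# Variable names and probabilities are illustrative only.

# Each CPT maps a tuple of parent states to a distribution over the node's states.
network = {
    "Exposure":    {"parents": [],
                    "cpt": {(): {"high": 0.2, "low": 0.8}}},
    "GeneticRisk": {"parents": [],
                    "cpt": {(): {"yes": 0.1, "no": 0.9}}},
    "Disease":     {"parents": ["Exposure", "GeneticRisk"],
                    "cpt": {("high", "yes"): {"present": 0.6,  "absent": 0.4},
                            ("high", "no"):  {"present": 0.3,  "absent": 0.7},
                            ("low",  "yes"): {"present": 0.2,  "absent": 0.8},
                            ("low",  "no"):  {"present": 0.05, "absent": 0.95}}},
}

def joint_probability(assignment):
    """P(assignment) = product over nodes of P(node | its parents)."""
    p = 1.0
    for node, spec in network.items():
        parent_states = tuple(assignment[par] for par in spec["parents"])
        p *= spec["cpt"][parent_states][assignment[node]]
    return p

print(joint_probability({"Exposure": "high", "GeneticRisk": "no", "Disease": "present"}))
# 0.2 * 0.9 * 0.3 = 0.054
```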

Qualitative verbal explanations in Bayesian belief networks

1996

Application of Bayesian belief networks in systems that interact directly with human users, such as decision support systems, requires effective user interfaces. The principal task of such interfaces is bridging the gap between probabilistic models and human intuitive approaches to ...

A review of explanation methods for Bayesian networks

The Knowledge Engineering Review, 2002

One of the key factors for the acceptance of expert systems in real-world domains is the ability to explain their reasoning (Buchanan & Shortliffe, 1984; Henrion & Druzdzel, 1990). This paper describes the basic properties that characterise explanation methods and reviews the methods developed to date for explanation in Bayesian networks.

cbCPT: Knowledge Engineering Support for CPTs in Bayesian Networks

Lecture Notes in Computer Science, 2002

Interacting with huge conditional probability tables (i.e. variables with multiple states and multiple parents) in Bayesian belief networks (BBNs) makes it difficult for experts to create and employ probabilistic models. Although it is possible to learn the structure and conditional probabilities of Bayesian networks from existing data using a variety of algorithms, the role of human experts is still crucial to validate and to maintain such systems. Researchers have investigated the use of graphical interfaces and knowledge engineering techniques to support experts' interaction with complex BBNs. We propose a case-based approach to interact with conditional probability tables. This approach allows experts to define particular cases and focus their attention on them. By focussing on cases, rather than the whole conditional probability table (CPT), the intellectual burden on the expert is diminished, or at least divided into manageable pieces. Important cases defined by experts can be saved for further inspection and maintenance of CPTs. The advantages of this approach are evident when the network contains variables with multiple parents and special configurations of the network (i.e. variables with common parents). We developed a case-based tool (cbCPT) especially designed to apply knowledge engineering principles to CPT navigation, elicitation, maintenance and evaluation. In addition, we report on a preliminary usability study that shows how users reacted to cbCPT and other available CPT tools.
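To make the case-based idea concrete: a "case" is essentially one row of the CPT, i.e. one configuration of the parent variables. The sketch below is not the cbCPT interface itself; the variable names and numbers are invented. It shows how eliciting individual cases divides an eight-row table into manageable pieces:

```python
from itertools import product

# Illustrative CPT skeleton for a node with three binary parents (2^3 = 8 rows).
parents = ["Smoking", "Asbestos", "Radon"]          # invented parent variables
parent_states = {"Smoking": ["yes", "no"],
                 "Asbestos": ["yes", "no"],
                 "Radon": ["yes", "no"]}

# The full table: every parent configuration needs P(Cancer=present | config).
cpt = {config: None for config in product(*(parent_states[p] for p in parents))}

def define_case(config, p_present):
    """Elicit a single case (one parent configuration) from the expert."""
    cpt[config] = {"present": p_present, "absent": 1.0 - p_present}

def open_cases():
    """Rows the expert has not yet filled in: the remaining elicitation work."""
    return [config for config, dist in cpt.items() if dist is None]

# The expert focuses on one salient case at a time instead of the whole table.
define_case(("yes", "yes", "yes"), 0.40)
define_case(("no", "no", "no"), 0.01)
print(len(open_cases()))   # 6 rows still to elicit or fill by default
```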

Visualizing and inspecting Bayesian belief models

International …, 2001

Bayesian Belief Networks (BBNs) have become widely accepted and used to model uncertain reasoning and causal relationships. We have developed an interactive visualization tool (VisNet) that allows students and/or teachers to inspect BBNs. Using VisNet it is possible to experiment with concepts such as marginal probability, changes in probability, probability propagation and cause-effect relationships in BBNs using visualization techniques. ViSMod (Visualization of Bayesian Student Models), an extended version of VisNet, opens the system's internal representation of the student's knowledge to teachers and/or students interested in knowing more about what the system represents about them. Both VisNet and ViSMod aim to support reflection processes in learning environments that rely on the use of Bayesian models.
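The concepts VisNet lets students experiment with, such as marginal probability and how it changes as evidence propagates, can be illustrated by brute-force enumeration over a toy two-node network. This is a minimal sketch of the underlying mathematics, not VisNet's own implementation; all numbers are invented:

```python
from itertools import product

# Toy network Disease -> Test; probabilities are invented for illustration.
p_disease = {"present": 0.1, "absent": 0.9}
p_test = {"present": {"pos": 0.9, "neg": 0.1},     # P(Test | Disease)
          "absent":  {"pos": 0.2, "neg": 0.8}}

def marginal(query_var, evidence=None):
    """P(query_var | evidence) by enumerating the full joint distribution."""
    evidence = evidence or {}
    scores = {}
    for d, t in product(p_disease, ["pos", "neg"]):
        world = {"Disease": d, "Test": t}
        if any(world[k] != v for k, v in evidence.items()):
            continue
        scores[world[query_var]] = scores.get(world[query_var], 0.0) \
            + p_disease[d] * p_test[d][t]
    z = sum(scores.values())
    return {state: p / z for state, p in scores.items()}

print(marginal("Disease"))                      # prior: {'present': 0.1, 'absent': 0.9}
print(marginal("Disease", {"Test": "pos"}))     # posterior: {'present': 1/3, 'absent': 2/3}
```

Watching the marginal of Disease shift from 0.1 to 1/3 when the positive test result is entered is exactly the kind of change-in-probability effect such tools visualize.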

Matilda: A visual tool for modeling with Bayesian networks

International Journal of Intelligent Systems, 2006

A Bayesian network (BN) consists of a qualitative part representing the structural assumptions of the domain and a quantitative part, the parameters. To date, knowledge engineering support has focused on parameter elicitation, with little support for designing the graphical structure. Poor design choices in BN construction can impact the network's performance, network maintenance, and the explanatory power of the output. We present a tool to help domain experts examine BN structure independently of the parameters. Our qualitative evaluation of the tool shows that it can help in identifying possible structural modeling errors and, hence, improve the quality of BN models.
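One example of a structural modeling error that can be caught without any parameters is an accidental directed cycle, which a BN structure must not contain. The check below is a generic sketch of such a structure test (Kahn's topological sort), not Matilda's own evaluation method; the edges are invented:

```python
def find_cycle_nodes(edges):
    """Return the nodes involved in a directed cycle, or an empty set if the
    proposed structure is a valid DAG (Kahn's topological-sort algorithm)."""
    nodes = {n for e in edges for n in e}
    indegree = {n: 0 for n in nodes}
    for _, child in edges:
        indegree[child] += 1
    frontier = [n for n in nodes if indegree[n] == 0]
    while frontier:
        n = frontier.pop()
        nodes.discard(n)
        for parent, child in edges:
            if parent == n:
                indegree[child] -= 1
                if indegree[child] == 0:
                    frontier.append(child)
    return nodes  # whatever could not be topologically ordered lies on a cycle

# Illustrative structure with an accidental feedback loop:
edges = [("Smoking", "Cancer"), ("Cancer", "Fatigue"), ("Fatigue", "Smoking")]
print(find_cycle_nodes(edges))   # {'Smoking', 'Cancer', 'Fatigue'}
```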

GeNIeRate: An interactive generator of diagnostic Bayesian network models

2005

We propose a methodology to simplify and speed up the design of very large Bayesian network models. The models produced using our methodology are based on two simplifying assumptions: (1) the structure of the model has three layers of variables and (2) the interaction among the variables can be modeled by canonical models such as the Noisy-MAX gate. The methodology is implemented in an application named GeNIeRate, which aims at supporting construction of diagnostic Bayesian network models consisting of hundreds or even thousands of variables. Preliminary qualitative evaluation of GeNIeRate shows great promise. We conducted an experiment comparing our approach to traditional techniques for building Bayesian network models by rebuilding a Bayesian network model for diagnosis of liver disorders, HEPAR-II. We found that the performance of the model created with GeNIeRate is comparable to the performance of the original HEPAR-II.
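For binary variables, the Noisy-MAX gate mentioned above reduces to the Noisy-OR gate: each present cause independently fails to produce the effect with an "inhibitor" probability, so one parameter per parent replaces a CPT that would otherwise grow exponentially in the number of parents. A sketch with invented car-diagnosis parameters:

```python
def noisy_or(active_causes, inhibitors, leak=0.0):
    """P(effect present | causes): Noisy-OR, the binary case of Noisy-MAX.
    Each active cause independently fails to produce the effect with its
    inhibitor probability; `leak` covers causes not modeled explicitly."""
    p_no_effect = 1.0 - leak
    for cause in active_causes:
        p_no_effect *= inhibitors[cause]
    return 1.0 - p_no_effect

# One inhibitor parameter per parent instead of a full 2^n-row CPT.
inhibitors = {"DeadBattery": 0.1, "EmptyTank": 0.05, "BrokenStarter": 0.2}
print(noisy_or({"DeadBattery"}, inhibitors, leak=0.01))              # 0.901
print(noisy_or({"DeadBattery", "EmptyTank"}, inhibitors, leak=0.01)) # 0.99505
```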

Persuasive Contrastive Explanations for Bayesian Networks

Lecture Notes in Computer Science, 2021

Explanation in Artificial Intelligence is often focused on providing reasons for why a model under consideration and its outcome are correct. Recently, research in explainable machine learning has initiated a shift in focus toward including so-called counterfactual explanations. In this paper we propose to combine both types of explanation in the context of explaining Bayesian networks. To this end we introduce persuasive contrastive explanations that aim to provide an answer to the question "Why outcome t instead of t′?" posed by a user. In addition, we propose an algorithm for computing persuasive contrastive explanations. Both our definition of persuasive contrastive explanation and the proposed algorithm can be employed beyond the current scope of Bayesian networks.
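To illustrate the contrastive part of the question "Why outcome t instead of t′?": one natural reading is to search for a minimal change to the observed evidence under which the foil t′ would have been the more probable outcome. The brute-force sketch below illustrates that idea only; it is not the algorithm proposed in the paper, and the toy posterior and variable names are invented:

```python
from itertools import combinations, product

def posterior(evidence):
    """Toy stand-in for a Bayesian network posterior over two outcomes.
    The weights are invented purely for illustration."""
    w = {"flu": 1.0, "allergy": 1.0}
    if evidence.get("fever") == "yes":
        w["flu"] *= 4.0
    if evidence.get("sneezing") == "yes":
        w["allergy"] *= 3.0
    z = sum(w.values())
    return {k: v / z for k, v in w.items()}

def _alternatives(evidence, subset, domains):
    """All ways of changing exactly the variables in `subset`."""
    options = [[s for s in domains[v] if s != evidence[v]] for v in subset]
    for combo in product(*options):
        changed = dict(evidence)
        changed.update(dict(zip(subset, combo)))
        yield changed

def contrastive_explanation(evidence, fact, foil, domains):
    """Smallest set of evidence changes that makes `foil` beat `fact`."""
    variables = list(evidence)
    for size in range(1, len(variables) + 1):
        for subset in combinations(variables, size):
            for alternative in _alternatives(evidence, subset, domains):
                if posterior(alternative)[foil] > posterior(alternative)[fact]:
                    return {v: alternative[v] for v in subset}
    return None

evidence = {"fever": "yes", "sneezing": "yes"}
domains = {"fever": ["yes", "no"], "sneezing": ["yes", "no"]}
print(contrastive_explanation(evidence, fact="flu", foil="allergy", domains=domains))
# {'fever': 'no'}: had there been no fever, allergy would have been more probable.
```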