Scaling Up Explanation Generation: Large-Scale Knowledge Bases and Empirical Studies

Abstract To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited.
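The three-stage process the abstract describes (select content, organize it into a discourse plan, realize the plan as text) can be sketched as a minimal pipeline. Everything below is illustrative: the fact representation, relation names, and templates are assumptions, far simpler than the frame-based formalism of the actual Biology Knowledge Base.

```python
from dataclasses import dataclass

# Hypothetical flat fact representation; the real knowledge base
# uses a much richer frame-based formalism.
@dataclass
class Fact:
    topic: str
    relation: str
    value: str

def select(kb, topic):
    """Content selection: pull the facts relevant to the topic."""
    return [f for f in kb if f.topic == topic]

def organize(facts):
    """Discourse planning: order facts into a multisentential plan
    (definition first, then structure, then function)."""
    order = {"isa": 0, "has-part": 1, "function": 2}
    return sorted(facts, key=lambda f: order.get(f.relation, 99))

def realize(plan):
    """Surface realization: render each plan node as a sentence."""
    templates = {
        "isa": "A {t} is a kind of {v}.",
        "has-part": "A {t} has {v}.",
        "function": "The {t} serves to {v}.",
    }
    return " ".join(
        templates.get(f.relation, "{t} {v}.").format(t=f.topic, v=f.value)
        for f in plan
    )

kb = [
    Fact("chloroplast", "function", "carry out photosynthesis"),
    Fact("chloroplast", "isa", "organelle"),
    Fact("chloroplast", "has-part", "thylakoid membranes"),
]
print(realize(organize(select(kb, "chloroplast"))))
```

The point of the sketch is the separation of concerns: each stage can be evaluated and improved independently, which is what makes the large-scale empirical studies in these papers possible.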

Developing and empirically evaluating robust explanation generators: The KNIGHT experiments

1997

Abstract To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited.

The KNIGHT Experiments: Empirically Evaluating an Explanation Generation System

1995

Abstract Empirically evaluating explanation generators poses a notoriously difficult problem. To address this problem, we constructed KNIGHT, a robust explanation generator that dynamically constructs natural language explanations about scientific phenomena. We then undertook the most extensive and rigorous empirical evaluation ever conducted on an explanation generator. First, KNIGHT constructed explanations on randomly chosen topics from the Biology Knowledge Base.

Construing and testing explanations in a complex domain

Computers in Human Behavior, 1996

Explanations were construed for an expert system in the domain of protein purification and based upon the multiple-explanation construction model (MEC model). Various explanations were construed covering different relevant aspects of the explanation space: expert-level explanations (quantitative representation), low-level explanations (qualitative representation), grounds explanations (background knowledge) and backing explanations (general abstract principles). These were tested on laboratory staff working at Pharmacia LKB Biotechnology AB in Uppsala, Sweden. The variables being tested were learning, understanding, usability, and novelty of the explanation types. The results indicate that the model is valuable in construing explanations with different "knowledge levels" with the purpose of fulfilling the needs of experts as well as "less-experts" covering different important aspects of the explanation space. In the context of learning, the results show that experts prefer expert-level explanations and low-level explanations whereas less-experts prefer a combination of all explanation types. A multiexplanation perspective has to be taken, where explanations covering different aspects of the explanation space on different levels have to be available to less-experts to facilitate learning from explanations in a specific complex domain. These results can have strong implications for learning, for example in the context of computer-supported education.

Explanations Within Knowledge-Based Systems

Explanations within knowledge-based systems have lately gained a lot of attention. One important aspect in this context is knowledge representation, since the way knowledge is represented in a reasoning system, for example, strongly
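The reported preference pattern (experts favor expert-level and low-level explanations; less-experts benefit from all four types) amounts to a small selection rule. The sketch below encodes one illustrative reading of that finding; the four type names come from the abstract, but the selection function itself is an assumption, not part of the MEC model.

```python
# The four explanation types named in the study, with the kind of
# knowledge each one draws on.
EXPLANATION_TYPES = {
    "expert-level": "quantitative representation",
    "low-level": "qualitative representation",
    "grounds": "background knowledge",
    "backing": "general abstract principles",
}

def explanation_types_for(user_level):
    """Illustrative selection rule derived from the reported results:
    experts get the two technical types; everyone else gets all four."""
    if user_level == "expert":
        return ["expert-level", "low-level"]
    return list(EXPLANATION_TYPES)

print(explanation_types_for("novice"))
```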

Generating explanations in context

… of the 1st international conference on …, 1993

If user interfaces are to reap the benefits of natural language interaction, they must be endowed with the properties that make human natural language interaction so effective. Human-human explanation is an inherently incremental and interactive process. New information must be highlighted and related to what has already been presented. In this paper, we describe the explanation component of a medical information-giving system. We describe the architectural features that enable this component to generate subsequent explanations that take into account the context created by its prior utterances.

Dynamically improving explanations: A revision-based approach to explanation generation

1997

Abstract Recent years have witnessed rapid progress in explanation generation. Despite these advances, the quality of prose produced by explanation generators warrants significant improvement. Revision-based explanation generation offers a promising means for improving explanations at runtime. In contrast to single-draft explanation generation architectures, a revision-based generator could dynamically create, evaluate, and refine multiple drafts of explanations.
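The create-evaluate-refine loop the abstract contrasts with single-draft architectures can be sketched in a few lines. The scoring function and the single revision operator below are toy stand-ins, not the paper's actual components; they exist only to show the control structure of hill-climbing over drafts.

```python
def evaluate(draft):
    """Toy quality metric: reward explicit connectives, penalize
    long average sentence length."""
    sentences = [s for s in draft.split(".") if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / len(sentences)
    bonus = sum(draft.count(c) for c in ("because", "therefore"))
    return bonus - avg_len / 10

def revise(draft):
    """One toy revision operator: merge two clauses with a causal
    connective."""
    return draft.replace(". It", ", because it", 1)

def generate_with_revision(first_draft, max_rounds=3):
    """Keep a draft only if a revision strictly improves its score."""
    best, best_score = first_draft, evaluate(first_draft)
    for _ in range(max_rounds):
        candidate = revise(best)
        score = evaluate(candidate)
        if score <= best_score:
            break  # no improving revision found; stop
        best, best_score = candidate, score
    return best

draft = "The heart pumps blood. It contracts rhythmically."
print(generate_with_revision(draft))
```

A real revision-based generator would maintain several competing drafts and a library of revision operators; the single-operator loop above is only the smallest instance of the idea.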

Generating explanations in context: The system perspective

Expert Systems with Applications, 1995

Explanations for expert systems are best provided in context, and, recently, many systems have used some notion of context in different ways in their explanation module. For example, some explanation systems take into account a user model. Others generate an explanation depending on the preceding and current discourse. In this article, we bring together these different notions of context as elements of a global picture that might be taken into account by an explanation module, depending on the needs of the application and the user. We characterize each of these elements, describe the constraints they place on communication, and present examples to illustrate the points being made. We discuss the implications of these different aspects of context on the design of explanation facilities. Finally, we describe, and illustrate with examples, an implemented intention-based planning framework for explanation that can take into account the different aspects of context discussed above.
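Two of the context elements the article names, a user model and the preceding discourse, can be sketched as inputs an explanation module consults before choosing content. The fields, rules, and example definitions below are illustrative assumptions, not the article's framework.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """Two illustrative context elements: who the user is, and what
    has already been said in this dialogue."""
    user_expertise: str = "novice"
    already_explained: set = field(default_factory=set)

def explain(concept, definitions, ctx):
    """Pick content from the user model; phrase follow-ups against
    the discourse history instead of repeating a full definition."""
    if concept in ctx.already_explained:
        return f"As explained earlier, {concept} applies here."
    ctx.already_explained.add(concept)
    text = definitions[ctx.user_expertise][concept]
    return f"{concept}: {text}"

definitions = {
    "novice": {"osmosis": "water moving across a membrane toward more salt"},
    "expert": {"osmosis": "solvent flux down a water-potential gradient"},
}
ctx = Context(user_expertise="novice")
print(explain("osmosis", definitions, ctx))
print(explain("osmosis", definitions, ctx))  # second call uses discourse context
```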

Software Components for Generating Explanations

The paper discusses the issue of generating explanations in intelligent tutoring systems. Specifically, it shows how explanations are generated according to the GET-BITS model of intelligent tutoring systems. The major concern is what software components are needed in order to generate meaningful explanations for different classes of end-users of such systems. The process of explanation is considered in the context of the types of knowledge present in the knowledge base of an intelligent tutoring system. Throughout the paper, the process of explanation generation is treated from the software engineering point of view. Some design examples, describing several classes developed in support of explanation generation based on the GET-BITS model, are also presented.
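The software-engineering view the paper takes, explanation as a set of cooperating components, can be sketched as a small class design. The class names and interfaces below are assumptions in the spirit of component-based designs like GET-BITS, not the model's actual API.

```python
class KnowledgeBase:
    """Holds the domain knowledge the tutor explains from."""
    def __init__(self, facts):
        self.facts = facts

    def lookup(self, topic):
        return self.facts.get(topic, [])

class ExplanationPlanner:
    """Chooses which facts to explain for a given class of end-user
    (here, a toy rule: students get a short explanation)."""
    def plan(self, facts, user_class):
        limit = {"student": 2, "teacher": len(facts)}.get(user_class, 1)
        return facts[:limit]

class ExplanationPresenter:
    """Renders the planned facts as text, independent of planning."""
    def present(self, topic, facts):
        return f"{topic}: " + "; ".join(facts)

kb = KnowledgeBase({"recursion": [
    "a function that calls itself",
    "needs a base case to terminate",
    "used in divide-and-conquer algorithms",
]})
planner, presenter = ExplanationPlanner(), ExplanationPresenter()
facts = planner.plan(kb.lookup("recursion"), "student")
print(presenter.present("recursion", facts))
```

Keeping knowledge storage, planning, and presentation in separate classes is the design point: each component can be replaced, for a different user class or output medium, without touching the others.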

Contextualized explanations

Proceedings of International Conference on Expert Systems for Development

The aim of the paper is to fill the gap between theory and practice in the production of explanations by a system. One reason for this gap is that a problem is often solved through cooperation between the user and the system, and both participants in that cooperation need explanations. Explanations essentially depend on the context in which the user and the system interact. Such contextualized explanations are the result of a process and constitute a medium of communication between the user and the system during problem solving. We focus on the need to make the notion of context explicit in the explanation process. We analyze explanation and context in terms of chunks of knowledge. Then we point out what the contribution of context to explanation is. An example, drawn from a real application, illustrates the problem.

Distributed Knowledge by Explanation Networks

2004

A knowledge based system may be considered as knowledge, distributed between one or several experts and the system users. Explanations in such a system provide for a more intensive interaction between the system and the user. We construct an explanation network by defining relationships between various knowledge fragments. Knowledge fragments on varying levels are formulated using the Qualitative Process Theory. The relationships are defined by links, compatible with Rhetorical Structure Theory. Different knowledge elements are combined into an explanation path by using Toulmin's argumentation theory. The feasibility of this approach is investigated. We show the following: By representing relations in a concept hierarchy as well as representing the relationships between elements in a rule of a knowledge base, both problem solving inferences and explanations can be generated. At the moment, the derivation of explanations cannot be performed automatically, but ready-made explanations may be stored and presented in a useful way. The explanation network idea has great knowledge acquisition power. An empirical study with users showed that different paths within the explanation net are useful for users with different prior knowledge. To conclude, the idea of distributing knowledge by support from an explanation network is fruitful and feasible.
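The structure the abstract describes, knowledge fragments as nodes linked by RST-style relations and combined into an explanation path, can be sketched as a labeled graph with a breadth-first search over it. The node names, relation labels, and search strategy below are illustrative assumptions, not the paper's network.

```python
from collections import deque

# Explanation network: (source fragment, RST-style relation) -> target
# fragment. Fragment and relation names are invented for illustration.
links = {
    ("pump_rate", "elaboration"): "flow_equation",
    ("flow_equation", "background"): "fluid_basics",
    ("flow_equation", "evidence"): "lab_measurements",
}

def neighbors(node):
    return [(rel, dst) for (src, rel), dst in links.items() if src == node]

def explanation_path(start, goal):
    """BFS over the network; returns the fragments visited together
    with the relations that join them."""
    queue = deque([[(None, start)]])
    seen = {start}
    while queue:
        path = queue.popleft()
        _, node = path[-1]
        if node == goal:
            return path
        for rel, nxt in neighbors(node):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [(rel, nxt)])
    return None

print(explanation_path("pump_rate", "lab_measurements"))
```

This also fits the empirical finding quoted above: a novice's path might detour through "background" links, while an expert's path runs directly along "elaboration" and "evidence" links, so different users follow different paths through the same net.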