Scaling up explanation generation: Large-scale knowledge bases and empirical studies
Dynamically improving explanations: A revision-based approach to explanation generation
1997
Recent years have witnessed rapid progress in explanation generation. Despite these advances, the quality of prose produced by explanation generators warrants significant improvement. Revision-based explanation generation offers a promising means for improving explanations at runtime. In contrast to single-draft explanation generation architectures, a revision-based generator can dynamically create, evaluate, and refine multiple drafts of explanations.
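To make the draft-revise idea concrete, here is a minimal, hypothetical sketch of a revision-based loop: produce an initial draft, evaluate it, and keep revising until it meets a quality threshold or the revision budget is spent. The generator, evaluator, and reviser below are invented placeholders, not the paper's actual architecture.

```python
# A hypothetical sketch of revision-based generation, not the paper's system.

def generate_draft(topic: str) -> str:
    # Stand-in for a single-draft explanation generator.
    return f"{topic} is the process by which plants convert light to energy."

def evaluate(draft: str) -> float:
    # Stand-in quality metric: here it simply rewards multi-sentence prose.
    return min(1.0, draft.count(".") / 3)

def revise(draft: str) -> str:
    # Stand-in revision step that elaborates the current draft.
    details = [" It requires light, water, and carbon dioxide.",
               " It takes place in the chloroplasts."]
    index = draft.count(".") - 1  # how many revisions have happened so far
    return draft + details[index % len(details)]

def revision_based_generation(topic: str, target: float = 0.9,
                              max_revisions: int = 5) -> str:
    draft = generate_draft(topic)
    for _ in range(max_revisions):
        if evaluate(draft) >= target:
            break
        draft = revise(draft)
    return draft

if __name__ == "__main__":
    print(revision_based_generation("Photosynthesis"))
```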
Generating explanations in context: The system perspective
Expert Systems with Applications, 1995
Explanations for expert systems are best provided in context, and many recent systems have used some notion of context, in different ways, in their explanation modules. For example, some explanation systems take a user model into account; others generate an explanation depending on the preceding and current discourse. In this article, we bring together these different notions of context as elements of a global picture that an explanation module might take into account, depending on the needs of the application and the user. We characterize each of these elements, describe the constraints they place on communication, and present examples to illustrate the points being made. We discuss the implications of these different aspects of context for the design of explanation facilities. Finally, we describe, and illustrate with examples, an implemented intention-based planning framework for explanation that can take into account the different aspects of context discussed above.
Software Components for Generating Explanations
The paper discusses the issue of generating explanations in intelligent tutoring systems. Specifically, it shows how explanations are generated according to the GET-BITS model of intelligent tutoring systems. The major concern is what software components are needed in order to generate meaningful explanations to different classes of the end-users of such systems. The process of explanation is considered in the context of the types of knowledge present in the knowledge base of an intelligent tutoring system. Throughout the paper, the process of explanation generation is treated from the software engineering point of view. Some design examples, describing several classes developed in support of explanation generation based on the GET-BITS model, are also presented.
Proceedings of International Conference on Expert Systems for Development
The aim of this paper is to fill the gap between theory and practice in the production of explanations by a system. One reason for this gap is that a problem is often solved through cooperation between the user and the system, and both participants in this cooperation need explanations. Explanations essentially depend on the context in which the user and the system interact. Such contextualized explanations are the result of a process and constitute a medium of communication between the user and the system during problem solving. We focus on the need to make the notion of context explicit in the explanation process. We analyze explanation and context in terms of chunks of knowledge, and then point out what context contributes to explanation. An example drawn from a real application illustrates the problem.
Distributed Knowledge by Explanation Networks
2004
A knowledge-based system may be considered as knowledge distributed between one or more experts and the system users. Explanations in such a system provide for a more intensive interaction between the system and the user. We construct an explanation network by defining relationships between various knowledge fragments. Knowledge fragments on varying levels are formulated using Qualitative Process Theory. The relationships are defined by links compatible with Rhetorical Structure Theory. Different knowledge elements are combined into an explanation path using Toulmin's argumentation theory. The feasibility of this approach is investigated. We show the following: by representing relations in a concept hierarchy, as well as the relationships between elements in a rule of a knowledge base, both problem-solving inferences and explanations can be generated. At the moment, the derivation of explanations cannot be performed automatically, but ready-made explanations may be stored and presented in a useful way. The explanation network idea has great knowledge acquisition power. An empirical study with users showed that different paths within the explanation net are useful for users with different prior knowledge. To conclude, the idea of distributing knowledge with support from an explanation network is fruitful and feasible.
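A minimal sketch of what such an explanation network might look like in code: knowledge fragments as nodes, RST-style relation labels on links, and a search that strings linked fragments into an explanation path. The fragment contents, relation names, and functions here are invented for illustration and are not the authors' system.

```python
# Illustrative explanation network: fragments as nodes, labelled links,
# and a breadth-first search that returns an explanation path.
from collections import deque

fragments = {
    "boiling": "Water boils when its vapour pressure equals ambient pressure.",
    "heating": "Heating raises the water's vapour pressure.",
    "altitude": "At high altitude, ambient pressure is lower.",
}

# Directed links labelled with an RST-like relation (invented labels).
links = {
    ("heating", "boiling"): "cause",
    ("altitude", "boiling"): "circumstance",
}

def explanation_path(start: str, goal: str):
    """Breadth-first search for a chain of linked fragments."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for (src, dst), _rel in links.items():
            if src == path[-1] and dst not in path:
                queue.append(path + [dst])
    return None

def render(path):
    """Join the fragments along a path, annotating each link's relation."""
    parts = []
    for i, node in enumerate(path):
        parts.append(fragments[node])
        if i + 1 < len(path):
            parts.append(f"({links[(node, path[i + 1])]})")
    return " ".join(parts)

if __name__ == "__main__":
    print(render(explanation_path("heating", "boiling")))
```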
Continuous Explanation Generation in a Multi-Agent Domain
2015
An agent operating in a dynamic, multi-agent environment with partial observability should continuously generate and maintain an explanation of its observations that describes what is occurring around it. We update our existing formal model of occurrence-based explanations to describe ambiguous explanations and the actions of other agents. We also introduce a new version of DiscoverHistory, an algorithm that continuously maintains such explanations as new observations are received. In our empirical study, this version of DiscoverHistory outperformed a competitor in terms of efficiency while maintaining correctness (i.e., precision and recall).
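The continuous-maintenance idea can be illustrated with a toy candidate-filtering loop: a set of hypothesized explanations is pruned whenever a new observation contradicts one of them, and the remaining ambiguity is reported. This is only a sketch of that maintenance loop; it is not the DiscoverHistory algorithm itself, and all names in it are hypothetical.

```python
# Toy illustration of maintaining candidate explanations as observations
# arrive. Not the DiscoverHistory algorithm from the paper.

def consistent(candidate: dict, observation: tuple) -> bool:
    """A candidate explanation maps fluents to values; an observation is a
    (fluent, value) pair. Fluents a candidate does not mention are unconstrained."""
    fluent, value = observation
    return candidate.get(fluent, value) == value

def update(candidates: list, observation: tuple) -> list:
    # Discard every candidate explanation the new observation contradicts.
    return [c for c in candidates if consistent(c, observation)]

if __name__ == "__main__":
    # Two hypotheses about who moved the box (invented example).
    candidates = [
        {"box_moved_by": "agent_A", "door": "open"},
        {"box_moved_by": "agent_B", "door": "closed"},
    ]
    for obs in [("door", "open"), ("box_moved_by", "agent_A")]:
        candidates = update(candidates, obs)
        print(f"after {obs}: {len(candidates)} candidate explanation(s)")
```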
Explaining Explanations in AI
2019
Recent work on interpretability in machine learning and AI has focused on building simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it is important to remember Box's maxim that "All models are wrong but some are useful." We focus on the distinction between these models and explanations in philosophy and sociology. These models can be understood as a "do it yourself kit" for explanations, allowing a practitioner to directly answer "what if" questions or generate contrastive explanations without external assistance. Although this is a valuable ability, giving these models as explanations appears more difficult than necessary, and other forms of explanation may not have the same trade-offs. We contrast the different schools of thought on what makes an explanation, and suggest that machine learning might benefit from viewing the problem more broadly.
Towards an Explanation Generation System for Robots: Analysis and Recommendations
Robotics
A fundamental challenge in robotics is to reason with incomplete domain knowledge to explain unexpected observations and partial descriptions extracted from sensor observations. Existing explanation generation systems draw on ideas that can be mapped to a multidimensional space of system characteristics, defined by distinctions, such as how they represent knowledge and if and how they reason with heuristic guidance. Instances in this multidimensional space corresponding to existing systems do not support all of the desired explanation generation capabilities for robots. We seek to address this limitation by thoroughly understanding the range of explanation generation capabilities and the interplay between the distinctions that characterize them. Towards this objective, this paper first specifies three fundamental distinctions that can be used to characterize many existing explanation generation systems. We explore and understand the effects of these distinctions by comparing the capabilities of two systems that differ substantially along these axes, using execution scenarios involving a robot waiter assisting in seating people and delivering orders in a restaurant. The second part of the paper uses this study to argue that the desired explanation generation capabilities corresponding to these three distinctions can mostly be achieved by exploiting the complementary strengths of the two systems that were explored. This is followed by a discussion of the capabilities related to other major distinctions to provide detailed recommendations for developing an explanation generation system for robots.
Improving explanations in knowledge-based systems: RATIONALE
Knowledge Acquisition, 1990
The paper describes a framework, RATIONALE, for building knowledge-based diagnostic systems that explain by reasoning explicitly. Unlike most existing explanation facilities, which are grafted onto an independently designed inference engine, RATIONALE behaves as though it has to deliberate over, and explain to itself, each refinement step. By treating explanation as primary, RATIONALE forces the system designer to represent explicitly knowledge that might otherwise be left implicit. This includes knowledge about why a particular hypothesis is preferred, an exception is ignored, or a global inference strategy is chosen. RATIONALE integrates explanations with reasoning by allowing a causal and/or functional description of the domain to be represented explicitly. Reasoning proceeds by constructing a hypothesis-based classification tree whose root hypothesis contains the most general diagnosis of the system. Guided by a focusing algorithm, the classification tree branches into more specific hypotheses that explain the more detailed symptoms provided by the user. As the system is used, the classification tree also forms the basis for a dynamically generated explanation tree which holds both the successful and failed branches of the reasoning. RATIONALE is implemented in Quintus Prolog with a hypertext- and graphics-oriented interface under NeWS. It provides an environment for tying together the processes of knowledge acquisition, system implementation, and explanation of system reasoning.
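A rough sketch of a hypothesis-based classification tree that records both confirmed and rejected refinements, from which an explanation trace can be read back out, might look like the following. The hypothesis names and the accept/reject test are invented for illustration; this is not the RATIONALE implementation.

```python
# Illustrative hypothesis classification tree with an explanation read-out.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    name: str
    required_symptoms: set
    children: List["Hypothesis"] = field(default_factory=list)
    status: str = "untested"   # becomes "confirmed" or "rejected"

def diagnose(node: Hypothesis, symptoms: set) -> None:
    """Depth-first refinement: confirm a hypothesis if its required symptoms
    are all observed, then try to refine it via its children."""
    if node.required_symptoms <= symptoms:
        node.status = "confirmed"
        for child in node.children:
            diagnose(child, symptoms)
    else:
        node.status = "rejected"

def explanation(node: Hypothesis, depth: int = 0) -> str:
    """Read the reasoning back out, keeping both confirmed and rejected branches."""
    lines = [f"{'  ' * depth}{node.name}: {node.status}"]
    if node.status == "confirmed":
        for child in node.children:
            lines.append(explanation(child, depth + 1))
    return "\n".join(lines)

if __name__ == "__main__":
    root = Hypothesis("engine fault", {"no_start"}, [
        Hypothesis("dead battery", {"no_start", "dim_lights"}),
        Hypothesis("empty tank", {"no_start", "fuel_gauge_empty"}),
    ])
    diagnose(root, {"no_start", "dim_lights"})
    print(explanation(root))
```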