The problem: focusing explanation

Adaptive similarity assessment for case-based explanation

1995

Guiding the generation of abductive explanations is a difficult problem. Applying case-based reasoning to abductive explanation generation (generating new explanations by retrieving and adapting explanations for prior episodes) offers the benefit of re-using successful explanatory reasoning, but raises new issues concerning how to perform similarity assessment to judge the relevance of prior explanations to new situations. Similarity assessment affects two points in the case-based explanation process: deciding which explanations to retrieve and evaluating the retrieved candidates. We address the problem of identifying similar explanations to retrieve by basing that similarity assessment on a categorization of anomaly types. We show that the problem of evaluating retrieved candidate explanations is often impeded by incomplete information about the situation to be explained, and address that problem with a novel similarity assessment method which we call constructive similarity assessment. Constructive similarity assessment contrasts with traditional "feature-mapping" similarity assessment methods by using the contents of memory to hypothesize important features in the new situation, and by using a pragmatic criterion (the system's ability to adapt features of the old case into features that apply in the new circumstances) as the basis for comparing features. Thus constructive similarity assessment does not merely compare new cases to old; instead, based on adaptation of prior cases in memory, it addresses the problem of incomplete input cases by building up and reasoning about augmented descriptions of those cases.
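To make the contrast concrete, here is a minimal Python sketch of the idea, assuming toy dictionary-based cases and a hypothetical adaptation-rule table (none of these names or structures come from the paper itself): rather than penalizing the incomplete new case for its missing features, the old case's features are carried forward as hypotheses, and known features are compared by whether the old value can be adapted to the new one.

```python
# Constructive similarity assessment as a toy sketch. The data structures
# and rule table are illustrative assumptions; the paper describes the
# method abstractly, not this API.

# An explanation case: the anomaly it explains plus the features it relies on.
old_case = {
    "anomaly": "unexpected-death",
    "features": {"actor-age": "young", "activity": "running", "health": "good"},
}

# The new situation is incompletely described: some features are unknown.
new_case = {
    "anomaly": "unexpected-death",
    "features": {"activity": "racing"},  # nothing known about age or health
}

# Hypothetical adaptation rules: can a feature of the old case be turned
# into one that applies in the new circumstances?
ADAPTATIONS = {
    ("activity", "running", "racing"): True,  # racing is a kind of running
}

def adaptable(feature, old_value, new_value):
    """Pragmatic comparison: a feature pair counts as similar if the old
    value can be adapted into the new one (identity counts trivially)."""
    if old_value == new_value:
        return True
    return ADAPTATIONS.get((feature, old_value, new_value), False)

def constructive_similarity(old, new):
    """Score an old explanation against an incomplete new case.

    Features missing from the new case are not penalized; instead the
    old case's value is carried over as a hypothesis to be checked later."""
    score, hypotheses = 0, {}
    for feature, old_value in old["features"].items():
        if feature in new["features"]:
            if adaptable(feature, old_value, new["features"][feature]):
                score += 1
        else:
            hypotheses[feature] = old_value  # hypothesize, don't penalize
    return score, hypotheses

score, hypotheses = constructive_similarity(old_case, new_case)
print(score, hypotheses)  # 1 {'actor-age': 'young', 'health': 'good'}
```

The hypotheses returned alongside the score are what make the assessment "constructive": they augment the input case description and can themselves be reasoned about downstream.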

A review of explanation and explanation in case-based reasoning

2003

With the arrival of more powerful computers and improved algorithms in machine learning, the availability of computer-based Knowledge Based Systems (KBSs) has increased rapidly. KBSs are now available in a wide range of domains, including medicine, finance, industry and technical diagnosis (Armengol et al., 2000; Ong et al., 1997; Rowe & Wright, 1993; Mark et al., 1996; Ye, 1995). Sales of KBS development tools have been growing at a rate of about 16% per annum since 1988 (Durkin, 1996).

Explanation-driven retrieval, reuse, and learning of cases

1993

A method for integrated case-based and generalization-based reasoning and learning is described. The primary role of general domain knowledge is to provide explanatory support for the case-based processes. A general explanation engine, the ACTIVATE-EXPLAIN-FOCUS cycle, utilizes a presumably rich, multirelational knowledge model in producing context-dependent explanations.
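The abstract names the cycle's three phases but not their interface; the skeleton below is one hedged guess at its shape, with placeholder bodies standing in for the knowledge-model operations the paper leaves abstract.

```python
# A guessed skeleton of the ACTIVATE-EXPLAIN-FOCUS cycle. The function
# bodies are stand-ins; the paper specifies the phases, not this code.

def activate(problem, knowledge_model):
    """Mark the part of the multirelational knowledge model plausibly
    relevant to the problem (here: trivial term matching)."""
    return {concept for concept in knowledge_model if concept in problem["terms"]}

def explain(activated, knowledge_model):
    """Build explanatory paths within the activated part of the model,
    justifying why a hypothesis or retrieved case fits the problem."""
    return [("path-to", concept) for concept in sorted(activated)]

def focus(explanations, task):
    """Select the explanation most useful for the task at hand
    (here: just the first candidate; `task` is unused in this stub)."""
    return explanations[0] if explanations else None

knowledge_model = {"engine", "fuel", "ignition", "weather"}
problem = {"terms": {"engine", "fuel"}}
best = focus(explain(activate(problem, knowledge_model), knowledge_model),
             task="diagnose")
print(best)  # ('path-to', 'engine')
```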

Gaining insight through case-based explanation

2009

Traditional explanation strategies in machine learning have been dominated by rule- and decision-tree-based approaches. Case-based explanations represent an alternative approach which has inherent advantages in terms of transparency and user acceptability. Case-based explanations are based on a strategy of presenting similar past examples in support of, and as justification for, the recommendations made.
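As a rough illustration of the strategy (toy data and a plain nearest-neighbour measure, nothing taken from the article itself):

```python
# A minimal nearest-neighbour sketch of case-based explanation: justify a
# recommendation by presenting the most similar past example.

import math

# Labelled past cases: feature vectors plus the decision made for them.
case_base = [
    {"features": [0.9, 0.2], "outcome": "approve"},
    {"features": [0.1, 0.8], "outcome": "reject"},
]

def explain_by_precedent(query):
    """Recommend the outcome of the nearest past case and present that
    case itself as the justification."""
    nearest = min(case_base, key=lambda c: math.dist(c["features"], query))
    return (f"Recommend '{nearest['outcome']}' because the most similar "
            f"past case {nearest['features']} had that outcome.")

print(explain_by_precedent([0.85, 0.3]))
```

The transparency argument is visible even in this toy: the justification is a concrete precedent a user can inspect, rather than an opaque model internal.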

Creativity and learning in a case-based explainer

Artificial Intelligence, 1989

Explanation-based learning (EBL) is a very powerful method for category formation. Since EBL algorithms depend on having good explanations, it is crucial to have effective ways to build explanations, especially in complex real-world situations where complete causal information is not available. When people encounter new situations, they often explain them by remembering old explanations, and adapting them to fit. We believe that this case-based approach to explanation holds promise for use in AI systems, both for routine explanation and to creatively explain situations quite unlike what the system has encountered before. Building new explanations from old ones relies on having explanations available in memory. We describe explanation patterns (XPs), knowledge structures that package the reasoning underlying explanations. Using the SWALE system as a base, we discuss the retrieval and modification process, and the criteria used when deciding which explanation to accept. We also discuss issues in learning XPs: what generalization strategies are appropriate for real-world explanations, and which indexing strategies are appropriate for XPs. SWALE's explanations allow it to understand nonstandard stories, and the XPs it learns increase its efficiency in dealing with similar anomalies in the future.
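A toy rendering of what an XP might look like as a data structure, with illustrative field names and a substitution-style adaptation step (SWALE's actual representation is richer; this only sketches the retrieve-and-modify idea):

```python
# Explanation patterns (XPs) as a toy data structure: a stored causal
# skeleton indexed by the anomaly it explains. Field names are
# illustrative assumptions, not SWALE's actual implementation.

from dataclasses import dataclass

@dataclass
class XP:
    anomaly: str       # index: the anomaly type this pattern explains
    roles: dict        # default role bindings from the original episode
    causal_chain: list # the packaged reasoning behind the explanation

xp_memory = [
    XP(anomaly="unexpected-death-of-healthy-individual",
       roles={"victim": "jogger", "stressor": "running"},
       causal_chain=["hidden heart defect",
                     "exertion from {stressor}",
                     "heart failure of {victim}"]),
]

def retrieve(anomaly):
    """Retrieve candidate XPs indexed under the given anomaly type."""
    return [xp for xp in xp_memory if xp.anomaly == anomaly]

def adapt(xp, bindings):
    """Adapt an XP by rebinding its roles to the new episode."""
    return [step.format(**{**xp.roles, **bindings})
            for step in xp.causal_chain]

for xp in retrieve("unexpected-death-of-healthy-individual"):
    print(adapt(xp, {"victim": "racehorse", "stressor": "racing"}))
    # ['hidden heart defect', 'exertion from racing',
    #  'heart failure of racehorse']
```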

Let Me Explain: Adaptation of Explanations Extracted from Incident Reports

AI Communications, 2017

In order for us to trust artificial intelligence systems, they need to be able to explain their decisions. This article investigates a case-based reasoning (CBR) approach for generating explanations where a new event is explained by combining and modifying explanations of multiple previous events. This approach is implemented in a CBR system for incident analysis where the goal is to identify causes of transportation incidents. The system generates explanations that connect the observed events to the root causes of the incident through intermediate states and events. Explanations for past incidents are automatically extracted from textual incident reports using natural language processing, thus avoiding the manual effort of constructing explanations. The system is evaluated on incident reports from the Transportation Safety Board of Canada, supporting the hypothesis that adapting explanations of multiple previous events rather than a single event results in a more comprehensive explanation of a new event.
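A simplified sketch of the multi-case combination idea, assuming explanations have already been reduced to cause-effect links (in the actual system these are extracted from textual reports with NLP; everything below is illustrative):

```python
# Toy sketch of combining explanations from multiple past incidents:
# each past explanation contributes cause->effect links, and the merged
# graph is searched for a chain from a root cause to the observed event.

from collections import defaultdict, deque

# Causal links contributed by two previous incidents (illustrative data).
explanation_a = [("fatigue", "late reaction"), ("late reaction", "collision")]
explanation_b = [("poor scheduling", "fatigue")]

def merge(*explanations):
    """Union the cause->effect links of several past explanations."""
    graph = defaultdict(set)
    for links in explanations:
        for cause, effect in links:
            graph[cause].add(effect)
    return graph

def causal_path(graph, root, observed):
    """Breadth-first search for a root-cause-to-event chain in the
    merged graph, passing through intermediate states and events."""
    queue = deque([[root]])
    while queue:
        path = queue.popleft()
        if path[-1] == observed:
            return path
        for nxt in graph[path[-1]]:
            queue.append(path + [nxt])
    return None

graph = merge(explanation_a, explanation_b)
print(causal_path(graph, "poor scheduling", "collision"))
# ['poor scheduling', 'fatigue', 'late reaction', 'collision']
```

Note how neither past incident alone covers the full chain: the root cause comes from one explanation and the downstream mechanism from the other, which is the paper's argument for adapting multiple previous events rather than a single one.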