Contextual utility affects the perceived quality of explanations
Related papers
Goals Affect the Perceived Quality of Explanations
Cognitive Science, 2015
Do people evaluate the quality of explanations differently depending on their goals? In particular, are explanations of different kinds (formal, mechanistic, teleological) judged differently depending on the future judgments the evaluator anticipates making? We report two studies demonstrating that the perceived “goodness” of explanations depends on the evaluator’s current goals, with explanations receiving a relative boost when they are based on relationships that support anticipated judgments. These findings shed light on the functions of explanation and support pragmatic and pluralist approaches to explanation.
On The Explanatory Power of Explanations: Why Context Matters.
In contemporary popular-scientific explanations, what counts as a powerful or 'better' explanation is frequently determined by whether it provides 'lower-level' information. This appeal to lower-level explanations seems indicative of a certain scientific tendency: it suggests that higher-level explanations are ultimately translatable into lower-level explanations. That is, the translation of higher-level explanations into lower-level explanations is considered a fruitful enterprise, since it is believed that all the currently separate scientific domains, including their respective higher-level explanations, can in principle be unified under fewer lower-level explanations. This implicitly assumes that explanatory power remains intact under such a reduction. The aim of this paper is to object to this assumption by stressing that, because of its dependence on contextual factors, explanatory power does not necessarily remain intact during the translation in question. Rather, it is the existence of contextual factors that seems determinative of explanatory power proper. In what follows, I first sketch the hierarchical structure that the 'level' classification of scientific explanations represents. Second, I assess an everyday scenario and consider which type of explanation seems better suited to capture it in terms of explanatory power.
Explanation and qualitative reasoning
1999
Qualitative Reasoning is often seen as a powerful basis for generating explanations, because the behaviour of interest is explicitly modelled in terms of relevant components, processes, causality relations, quantity spaces, assumptions, states and transitions, while neglecting unnecessary details like quantitative values. However, the link between qualitative reasoning and explanation is often seen as a direct one-to-one mapping, whereas studies of human explanation indicate that this is a simplification. Explanation is an interactive process in which the context plays an important role. This position paper takes a closer look at the relation between qualitative reasoning, explanation generation and contextual factors such as the tasks and goals of the user, and the dialogue history.
Productive Explanation: A Framework for Evaluating Explanations in Psychological Science
The explanation of psychological phenomena is a central aim of psychological science. However, the nature of explanation and the processes by which we evaluate whether a theory explains a phenomenon are often unclear. Consequently, it is often unknown whether a given psychological theory indeed explains a phenomenon. We address this shortcoming by characterizing the nature of explanation in psychology and proposing a framework in which to evaluate explanation. We present a productive account of explanation: a theory putatively explains a phenomenon if and only if a formal model of the theory produces the statistical pattern representing the phenomenon. Using this account, we outline a workable methodology of explanation: (a) explicating a verbal theory into a formal model, (b) representing phenomena as statistical patterns in data, and (c) assessing whether the formal model produces these statistical patterns. In addition, we explicate three major criteria for evaluating the goodness...
The pragmatic theory of explanation (Van Fraassen, 1988) proposes that background knowledge constrains the explanatory process. Although this is a reasonable hypothesis and research has shown the importance of background knowledge when evaluating explanations, there has been no empirical study of how the background constrains the generation of explanations. In our study, participants viewed one of two sets of preliminary movie clips of some novel items engaged in a series of actions, and all were then asked to explain the same final clip. Between conditions, we varied whether the events in the preliminary clips completed a system. In the systematic condition, a greater proportion of functional explanations were generated for the final clip compared to the non-systematic condition. Interestingly, despite the difference in the types of explanations generated, the participants showed high agreement in the evaluation of explanations provided by the experimenters.
A contrastive account of explanation generation
Psychonomic Bulletin & Review, 2017
In this article, we propose a contrastive account of explanation generation. Though researchers have long wrestled with the concepts of explanation and understanding, as well as with the procedures by which we might evaluate explanations, less attention has been paid to the initial generation stages of explanation. Before an explainer can answer a question, he or she must come to some understanding of the explanandum (what the question is asking) and of the explanatory form and content called for by the context. Here candidate explanations are constructed to respond to the particular interpretation of the question, which, according to the pragmatic approach to explanation, is constrained by a contrast class: a set of related but nonoccurring alternatives to the topic that emerge from the surrounding context and the explainer's prior knowledge. In this article, we suggest that generating an explanation involves two operations: one that homes in on an interpretation of the question, and a second one that locates an answer. We review empirical work that supports this account, consider the implications of these contrastive processes, and identify areas for future study.
2021
Humans reason about the world around them by seeking to understand why and how something occurs. The same principle extends to the technology that so many human activities increasingly rely on. Issues of trust, transparency, and understandability are critical in promoting adoption and proper use of systems. However, with the increasing complexity of the systems and technologies we use, it is hard or even impossible to comprehend their function and behavior, or to justify surprising observations, through manual investigation alone. Explanation support can ease humans’ interactions with technology: explanations can help users understand a system’s function, justify system results, and increase their trust in automated decisions. Our goal in this article is to provide an overview of existing work in explanation support for data-driven processes, through a lens that identifies commonalities across varied problem settings and solutions. We suggest a classification of explainability requirements...
General Theories of Explanation: Buyer Beware
We argue that there is no general theory of explanation that spans the sciences, mathematics, ethics, and so on. More specifically, there is no good reason to believe that substantive and domain-invariant constraints on explanatory information exist. Using Nickel (Noûs 44(2):305–328, 2010) as an exemplar of the contrary, generalist position, we first show that Nickel’s arguments rest on several ambiguities, and then show that even when these ambiguities are charitably corrected, Nickel’s defense of general theories of explanation is inadequate along several different dimensions. Specifically, we argue that Nickel’s argument has three fatal flaws. First, he has not provided any compelling illustrations of domain-invariant constraints on explanation. Second, in order to fend off the most vehement skeptics of domain-invariant theories of explanation, Nickel must beg all of the important questions. Third, Nickel’s examples of explanations from different domains with common explanatory structure rely on incorrect formulations of the explanations under consideration, circular justifications, and/or a mischaracterization of the position Nickel intends to critique. Given that the best and most elaborate defense of the generalist position fails in so many ways, we conclude that the standard practice in philosophy (and in philosophy of science in particular), which is to develop theories of explanation tailored to specific domains, is still justified. For those who want to buy into a more ambitious project: beware of the costs!
Too much, too little, or just right? Ways explanations impact mental models
Proceedings of VL/HCC, 2013
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and completeness of the explanations impact the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as is common in many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.