The Pragmatics of Explanation
Related papers
A contrastive account of explanation generation
Psychonomic Bulletin & Review, 2017
In this article, we propose a contrastive account of explanation generation. Though researchers have long wrestled with the concepts of explanation and understanding, as well as with the procedures by which we might evaluate explanations, less attention has been paid to the initial generation stages of explanation. Before an explainer can answer a question, he or she must come to some understanding of the explanandum (what the question is asking) and of the explanatory form and content called for by the context. Here candidate explanations are constructed to respond to the particular interpretation of the question, which, according to the pragmatic approach to explanation, is constrained by a contrast class: a set of related but nonoccurring alternatives to the topic that emerge from the surrounding context and the explainer's prior knowledge. In this article, we suggest that generating an explanation involves two operations: one that homes in on an interpretation of the question, and a second one that locates an answer. We review empirical work that supports this account, consider the implications of these contrastive processes, and identify areas for future study.
The Role of Explanation in Very Simple Tasks
Much research on explanation has focused on the ability of explanations to draw upon relevant knowledge to aid in understanding some event or observation. However, explanations may also structure our understanding of events and related tasks more generally, even when they add no relevant information. In three experiments, we show that explanations affect performance in simple, binary decision tasks where they could not possibly add relevant information.
Toward a Theory of the Process of Explanation
Synthese, 2005
What is an explanation? An extensive but rather inconclusive discussion has been devoted to this question in the last several decades. This discussion has been surveyed by Salmon (1990) and by Stegmüller (1983). Much of the early stages of this discussion dealt with ...
Contextual utility affects the perceived quality of explanations
Psychonomic Bulletin & Review
Are explanations of different kinds (formal, mechanistic, teleological) judged differently depending on their contextual utility, defined as the extent to which they support the kinds of inferences required for a given task? We report three studies demonstrating that the perceived "goodness" of an explanation depends on the evaluator's current task: Explanations receive a relative boost when they support task-relevant inferences, even when all three explanation types are warranted. For example, mechanistic explanations receive higher ratings when participants anticipate making further inferences on the basis of proximate causes than when they anticipate making further inferences on the basis of category membership or functions. These findings shed light on the functions of explanation and support pragmatic and pluralist approaches to explanation.
Productive Explanation: A Framework for Evaluating Explanations in Psychological Science
The explanation of psychological phenomena is a central aim of psychological science. However, the nature of explanation and the processes by which we evaluate whether a theory explains a phenomenon are often unclear. Consequently, it is often unknown whether a given psychological theory indeed explains a phenomenon. We address this shortcoming by characterizing the nature of explanation in psychology, and proposing a framework in which to evaluate explanation. We present a productive account of explanation: a theory putatively explains a phenomenon if and only if a formal model of the theory produces the statistical pattern representing the phenomenon. Using this account, we outline a workable methodology of explanation: (a) explicating a verbal theory into a formal model, (b) representing phenomena as statistical patterns in data, and (c) assessing whether the formal model produces these statistical patterns. In addition, we explicate three major criteria for evaluating the goodness...
Too much, too little, or just right? Ways explanations impact mental models
Proceedings of VL/HCC, 2013
Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly "debug" an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and completeness of the explanations impacts the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as per many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.
The Pragmatic-Rhetorical Theory of Explanation
Springer eBooks, 2007
Explanation is one of the most discussed notions in philosophy of science. This may be because there is little consensus among specialists on how explanation in a scientific context should be characterised. Three main approaches appear to be alive today: the formal-logical view, the ontological view, and the pragmatic view. Between these three classes of theories little agreement seems possible. Beyond the expectation that explanation is meant to provide a particular kind of information about matters of fact, there seems to be little agreement at all. Given this, the pragmatic view has at least one advantage, namely, its ability to accommodate the others. Alternative conceptions of explanation may be construed as promoting wholly possible goals of a given scientific explanation in so far as the pragmatic situation determines that it is appropriate to pursue these goals. What pragmatists deny is that any of these other views tells us what scientific explanation is or that they cover all forms of scientific explanation, i.e., that there is any one goal of scientific explanation.

1.1 Various approaches

The formal-logical approach considers scientific explanation as something quite distinct and very different from ordinary explanation. It holds that every scientific explanation should have certain objective features by which it can be completely characterised and understood. Following Carl Hempel, a scientific explanation is to be construed as an argument with a propositional structure, i.e., an explanandum is a proposition that follows deductively from an explanans. This kind of approach gives us a prescriptive account of explanation in the sense that a proposition counts as a scientific explanation if, and only if, it fulfils certain formal requirements. As Hempel remarked, summarising his own position, "Explicating the concept of scientific explanation is not the same thing as writing an entry on the word 'explain' for the Oxford English Dictionary."¹ His approach offers certain norms with respect to which we can demarcate scientific explanations from other forms of explanation. Apart from Hempel's original covering-law model, this view includes approaches
Explanation and qualitative reasoning
1999
Qualitative Reasoning is often seen as a powerful basis for generating explanations, because the behaviour of interest is explicitly modelled in terms of relevant components, processes, causality relations, quantity spaces, assumptions, states and transitions, while neglecting unnecessary details like quantitative values. However, the link between qualitative reasoning and explanation is often seen as a direct one-to-one mapping, whereas studies of human explanation indicate that this is a simplification. Explanation is an interactive process in which the context plays an important role. This position paper takes a closer look at the relation between qualitative reasoning, explanation generation and contextual factors such as the tasks and goals of the user, and the dialogue history.