Explanation as a social practice: Toward a conceptual framework for the social design of AI systems
Related papers
Can we do better explanations? A proposal of user-centered explainable AI
2019
Artificial Intelligence systems are spreading to multiple applications, and they are used by an increasingly diverse audience. With this change in use scenarios, AI users will increasingly require explanations. The first part of this paper reviews the state of the art of Explainable AI and highlights how current research does not pay enough attention to whom explanations are targeted. In the second part of the paper, a new explainability pipeline is suggested, in which users are classified into three main groups (developers or AI researchers, domain experts, and lay users). Inspired by the cooperative principles of conversation, it is discussed how creating different explanations for each of the targeted groups can overcome some of the difficulties of creating good explanations and of evaluating them.
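As a loose illustration of the audience-tailored pipeline sketched in this abstract, the following Python snippet (not taken from the paper; all class, field, and function names are hypothetical) shows how one prediction might be explained differently to the three user groups the authors name:

```python
# Minimal sketch, assuming a precomputed prediction with feature attributions.
# The three audience groups follow the abstract; everything else is illustrative.
from dataclasses import dataclass
from enum import Enum, auto


class Audience(Enum):
    DEVELOPER = auto()      # developers or AI researchers
    DOMAIN_EXPERT = auto()  # e.g., clinicians, loan officers
    LAY_USER = auto()       # end users without an ML background


@dataclass
class Prediction:
    label: str
    score: float
    feature_attributions: dict  # feature name -> contribution (assumed precomputed)


def explain(prediction: Prediction, audience: Audience) -> str:
    """Return an explanation whose content and vocabulary match the audience."""
    if audience is Audience.DEVELOPER:
        # Full technical detail: raw score and attributions.
        return (f"label={prediction.label} score={prediction.score:.3f} "
                f"attributions={prediction.feature_attributions}")
    if audience is Audience.DOMAIN_EXPERT:
        # Domain-level reasons: top contributing features only.
        top = sorted(prediction.feature_attributions.items(),
                     key=lambda kv: abs(kv[1]), reverse=True)[:3]
        reasons = ", ".join(name for name, _ in top)
        return f"Predicted '{prediction.label}' mainly because of: {reasons}."
    # Lay users: plain-language summary with no internals.
    return (f"The system suggests '{prediction.label}'. "
            f"It is {prediction.score:.0%} confident in this suggestion.")


if __name__ == "__main__":
    p = Prediction("high risk", 0.87,
                   {"age": 0.4, "income": -0.2, "prior_defaults": 0.6})
    for a in Audience:
        print(a.name, "->", explain(p, a))
```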
Innovations in Explainable AI: Bridging the Gap Between Complexity and Understanding
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023
The integration of Artificial Intelligence (AI) into various domains has witnessed remarkable advancements, yet the opacity of complex AI models poses challenges for widespread acceptance and application. This research paper delves into the field of Explainable AI (XAI) and explores innovative strategies aimed at bridging the gap between the intricacies of advanced AI algorithms and the imperative for human comprehension. We investigate key developments, including interpretable model architectures, local and visual explanation techniques, natural language explanations, and model-agnostic approaches. Emphasis is placed on ethical considerations to ensure transparency and fairness in algorithmic decision-making. By surveying and analyzing these innovations, this research contributes to the ongoing discourse on making AI systems more accessible, accountable, and trustworthy, ultimately fostering a harmonious collaboration between humans and intelligent machines in an increasingly AI-driven world.
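To make one of the surveyed technique families concrete, here is a minimal sketch of a model-agnostic, perturbation-based local explanation in Python; the toy model, the zero baseline, and the function names are illustrative assumptions rather than anything specified by the paper:

```python
# Minimal sketch: perturb each input feature and measure how the prediction
# changes. Works with any black-box scoring function (model-agnostic, local).
import numpy as np


def toy_model(x: np.ndarray) -> float:
    """Stand-in black box: any function mapping a feature vector to a score."""
    weights = np.array([2.0, -1.0, 0.5])
    return float(1.0 / (1.0 + np.exp(-x @ weights)))


def perturbation_attribution(model, x, baseline=None):
    """Local attribution: score drop when each feature is replaced by a baseline."""
    x = np.asarray(x, dtype=float)
    baseline = np.zeros_like(x) if baseline is None else baseline
    original = model(x)
    attributions = np.empty_like(x)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline[i]          # "remove" feature i
        attributions[i] = original - model(perturbed)
    return attributions


if __name__ == "__main__":
    x = np.array([1.2, 0.4, -0.3])
    print("prediction:", toy_model(x))
    print("feature attributions:", perturbation_attribution(toy_model, x))
```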
Artificial Intelligence, 2021
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these 'stakeholders' desiderata') in a variety of contexts. However, the literature on XAI is vast, spread across multiple, largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata.
Flexible and Context-Specific AI Explainability: A Multidisciplinary Approach
SSRN Electronic Journal, 2020
The recent enthusiasm for artificial intelligence (AI) is due principally to advances in deep learning. Deep learning methods are remarkably accurate, but also opaque, which limits their potential use in safety-critical applications. To achieve trust and accountability, designers and operators of machine learning algorithms must be able to explain the inner workings, the results, and the causes of failures of algorithms to users, regulators, and citizens. The originality of this paper is to combine technical, legal, and economic aspects of explainability to develop a framework for defining the "right" level of explainability in a given context. We propose three logical steps: First, define the main contextual factors, such as who the audience of the explanation is, the operational context, the level of harm that the system could cause, and the legal/regulatory framework. This step will help characterize the operational and legal needs for explanation, and the corresponding social benefits. Second, examine the technical tools available, including post hoc approaches (input perturbation, saliency maps...) and hybrid AI approaches. Third, as a function of the first two steps, choose the right levels of global and local explanation outputs, taking into account the costs involved. We identify seven kinds of costs and emphasize that explanations are socially useful only when total social benefits exceed costs.
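The three-step logic and the benefit-versus-cost test can be pictured with a small Python sketch; the field names, scores, and fallback rule below are hypothetical stand-ins for the contextual factors and the seven cost categories the paper discusses:

```python
# Minimal sketch of the framework's flow: (1) contextual factors,
# (2) candidate explanation tools with estimated benefits and costs,
# (3) pick a level of explanation only where benefit exceeds total cost.
from dataclasses import dataclass


@dataclass
class Context:                 # step 1: contextual factors (illustrative fields)
    audience: str              # e.g., "regulator", "end user"
    harm_level: float          # expected harm if the system errs
    legally_required: bool     # does regulation mandate an explanation?


@dataclass
class ExplanationOption:       # step 2: a technical tool and its costs
    name: str                  # e.g., "saliency maps", "hybrid AI model"
    benefit: float             # estimated social benefit of this explanation
    total_cost: float          # sum over the cost categories


def choose_explanation(context: Context, options: list[ExplanationOption]):
    """Step 3: keep options whose benefit exceeds cost; prefer the largest surplus."""
    viable = [o for o in options if o.benefit > o.total_cost]
    if context.legally_required and not viable:
        # A mandated explanation must be produced even at a net cost:
        # fall back to the cheapest option (illustrative rule, not the paper's).
        return min(options, key=lambda o: o.total_cost)
    if not viable:
        return None  # no explanation is socially useful in this context
    return max(viable, key=lambda o: o.benefit - o.total_cost)


if __name__ == "__main__":
    ctx = Context(audience="regulator", harm_level=1e6, legally_required=True)
    opts = [ExplanationOption("saliency maps", benefit=5.0, total_cost=2.0),
            ExplanationOption("hybrid AI model", benefit=9.0, total_cost=10.0)]
    chosen = choose_explanation(ctx, opts)
    print("chosen:", chosen.name if chosen else "none")
```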
The Alan Turing Institute, 2024
The purpose of this workbook is to introduce participants to the principle of AI Explainability. Understanding how, why, and when explanations of AI-supported or -generated outcomes need to be provided, and what impacted people’s expectations are about what these explanations should include, is crucial to fostering responsible and ethical practices within your AI projects. To guide you through this process, we will address essential questions: What do we need to explain? And who do we need to explain this to? This workbook offers practical insights and tools to facilitate your exploration of AI Explainability. By providing actionable approaches, we aim to equip you and your team with the means to identify when and how to employ various types of explanations effectively. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.
Principles of Explanation in Human-AI Systems
arXiv (Cornell University), 2021
Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems. These systems are complex and sometimes biased, but they nevertheless make decisions that impact our lives. XAI systems are frequently algorithm-focused, starting and ending with an algorithm that implements a basic untested idea about explainability. These systems are often not tested to determine whether the algorithm helps users accomplish any goals, and so their explainability remains unproven. We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems, and implement algorithms to serve that purpose. In this paper, we review some of the basic concepts that have been used for user-centered XAI systems over the past 40 years of research. Based on these, we describe the "Self-Explanation Scorecard", which can help developers understand how they can empower users by enabling self-explanation. Finally, we present a set of empirically grounded, user-centered design principles that may guide developers to create successful explainable systems.
User-Centered Explanation in AI
Although usability testing is a cornerstone of user-centered design, evaluation often comes too late to provide guidance about implementing a usable system. In response, researchers and designers have proposed guidelines that codify research on human users and advocate for the involvement of users in system development from the beginning (e.g., Greenbaum and Kyng 1991; Hoffman et al. 2010). The most famous and detailed set of guidelines may be Apple's Human Interface Guidelines (cf. Mountford 1998), but others have proposed simpler principles such as Neilson's (1994) interface design heuristics or Karat's (1998) "User's Bill of Rights". With the advent of new, powerful AI systems that are complex and difficult to understand, the field of Explainable AI (XAI) has re-emerged as an important area of human-machine interaction. Much of the interest in XAI has focused on deep learning systems. Consequently, most explanations have concentrated on technologies to visualize or otherwise expose deep network structures, features, or...
ArXiv, 2019
This is an integrative review that addresses the question, "What makes for a good explanation?" with reference to AI systems. Pertinent literatures are vast; thus, this review is necessarily selective. That said, most of the key concepts and issues are expressed in this Report. The Report encapsulates the history of computer science efforts to create systems that explain and instruct (intelligent tutoring systems and expert systems). The Report expresses the explainability issues and challenges in modern AI, and presents capsule views of the leading psychological theories of explanation. Certain articles stand out by virtue of their particular relevance to XAI, and their methods, results, and key points are highlighted. It is recommended that AI/XAI researchers be encouraged to include in their research reports fuller details on their empirical or experimental methods, in the fashion of experimental psychology research reports: details on Participants, Instructions, Procedur...