Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users
Related papers
Evaluating Explainability Methods Intended for Multiple Stakeholders
KI - Künstliche Intelligenz, 2021
Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using an individual system. In this paper we present an explainability framework formed of a catalogue of explanation methods, designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations, offering increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two met...
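To make the low-/high-level split concrete, the sketch below shows one way such a catalogue could be organised in code, with explanation methods registered per target audience. The class and method names (ExplanationCatalogue, for_audience, the example audiences and explanation methods) are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Dict, List, Optional


class ExplanationLevel(Enum):
    """Mirrors the paper's split between low- and high-level explanations."""
    LOW = "low"    # e.g. raw feature attributions, model internals
    HIGH = "high"  # e.g. contextualised, domain-language summaries


@dataclass
class ExplanationMethod:
    name: str
    level: ExplanationLevel
    audiences: List[str]             # e.g. ["engineer", "desk agent"]
    explain: Callable[[dict], str]   # takes a prediction record, returns text


class ExplanationCatalogue:
    """Hypothetical registry that projects could query by audience and level."""

    def __init__(self) -> None:
        self._methods: Dict[str, ExplanationMethod] = {}

    def register(self, method: ExplanationMethod) -> None:
        self._methods[method.name] = method

    def for_audience(self, audience: str,
                     level: Optional[ExplanationLevel] = None) -> List[ExplanationMethod]:
        return [m for m in self._methods.values()
                if audience in m.audiences and (level is None or m.level == level)]


# Usage: a non-expert desk agent only sees high-level, contextual explanations.
catalogue = ExplanationCatalogue()
catalogue.register(ExplanationMethod(
    name="feature_importance",
    level=ExplanationLevel.LOW,
    audiences=["engineer"],
    explain=lambda record: f"Top features: {record.get('top_features')}",
))
catalogue.register(ExplanationMethod(
    name="plain_language_summary",
    level=ExplanationLevel.HIGH,
    audiences=["desk agent", "engineer"],
    explain=lambda record: f"The fault was most likely caused by {record.get('root_cause')}.",
))
for method in catalogue.for_audience("desk agent"):
    print(method.explain({"root_cause": "a damaged cable joint"}))
```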
Explainability in Context: Lessons from an Intelligent System in the IT Services Domain
2019
We report on an ongoing study of the design, development, and deployment of an intelligent workplace system in the IT services domain. We describe the system, which is designed to augment the complex design work of highly skilled IT architects with the use of natural language processing (NLP) and optimization modelling. We outline results from our study, which analyzes feedback from architects as they interacted with various prototypes of the system. This feedback focuses on their sensemaking and uncertainty around: system actions; interactivity and system outputs; and integration with existing processes. These findings point to “explanation” as a multi-dimensional requirement. Such multi-dimensionality requires more careful articulation of the different types of explanations needed to support workers as they make sense of and successfully integrate smart systems in their everyday work practice. CCS CONCEPTS • Human-centered computing → Computing methodologies; Artificial intellig...
iSee: Intelligent Sharing of Explanation Experience by Users for Users
28th International Conference on Intelligent User Interfaces
The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors) may have different background knowledge, competencies and goals, thus requiring different kinds of interpretations and explanations. Fortunately, there is a growing armoury of tools to interpret ML models and explain their predictions, recommendations and diagnoses, which we will refer to collectively as explanation strategies. As these explanation strategies mature, practitioners will gain experience that helps them know which strategies to deploy in different circumstances. What is lacking, and is addressed by iSee, is capturing, sharing and re-using explanation strategies based on past positive experiences. The goal of the iSee platform is to improve every user's experience of AI by harnessing experiences and best practices in Explainable AI. CCS CONCEPTS • Human-centered computing → Natural language interfaces; • Computing methodologies → Knowledge representation and reasoning.
EUCA: the End-User-Centered Explainable AI Framework
arXiv (Cornell University), 2021
The ability to explain decisions to end-users is a necessity to deploy AI as critical decision support. Yet making AI explainable to non-technical end-users is a relatively ignored and challenging problem. To bridge the gap, we first identify twelve end-user-friendly explanatory forms that do not require technical knowledge to comprehend, including feature-, example-, and rule-based explanations. We then instantiate the explanatory forms as prototyping cards in four AI-assisted critical decision-making tasks, and conduct a user study to co-design low-fidelity prototypes with 32 layperson participants. The results confirm the relevance of using explanatory forms as building blocks of explanations, and identify their properties: pros, cons, applicable explanation goals, and design implications. The explanatory forms, their properties, and prototyping supports (including a suggested prototyping process, design templates and exemplars, and associated algorithms to actualize explanatory forms) constitute the End-User-Centered explainable AI framework EUCA, which is available at http://weinajin.github.io/end-user-xai. It serves as a practical prototyping toolkit for HCI/AI practitioners and researchers to understand user requirements and build end-user-centered explainable AI. CCS Concepts: • Computing methodologies → Artificial intelligence; • Human-centered computing → User studies.
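As a rough illustration of the "explanatory forms as building blocks" idea, the sketch below composes feature-, example-, and rule-based fragments into a single end-user explanation. The class names, render methods, and sample values are hypothetical and are not part of the EUCA toolkit itself.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class FeatureExplanation:
    """Feature-based form: which inputs pushed the decision, and how strongly."""
    features: List[Tuple[str, float]]

    def render(self) -> str:
        top = ", ".join(f"{name} ({weight:+.2f})" for name, weight in self.features)
        return f"The most influential factors were: {top}."


@dataclass
class ExampleExplanation:
    """Example-based form: similar past cases the decision can be compared to."""
    similar_cases: List[str]

    def render(self) -> str:
        return "Similar past cases: " + "; ".join(self.similar_cases) + "."


@dataclass
class RuleExplanation:
    """Rule-based form: a human-readable rule that covers the decision."""
    rule: str

    def render(self) -> str:
        return f"The decision follows the rule: {self.rule}"


def compose_explanation(blocks) -> str:
    """Combine several explanatory forms into one end-user explanation."""
    return "\n".join(block.render() for block in blocks)


print(compose_explanation([
    FeatureExplanation([("blood pressure", 0.42), ("age", 0.18)]),
    ExampleExplanation(["patient A (similar symptoms, same outcome)"]),
    RuleExplanation("IF blood pressure > 140 AND age > 60 THEN elevated risk"),
]))
```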
Can we do better explanations? A proposal of user-centered explainable AI
2019
Artificial Intelligence systems are spreading to multiple applications and they are used by an increasingly diverse audience. With this change in usage scenarios, AI users will increasingly require explanations. The first part of this paper reviews the state of the art of Explainable AI and highlights how current research does not pay enough attention to whom the explanations are targeted. In the second part of the paper, a new explainability pipeline is suggested, where users are classified into three main groups (developers or AI researchers, domain experts and lay users). Inspired by the cooperative principles of conversation, it is discussed how creating different explanations for each of the targeted groups can overcome some of the difficulties related to creating good explanations and evaluating them.
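A minimal sketch of how such a pipeline might branch on the three user groups is given below; the group names follow the paper, but the dispatch function, field names, and wording of each explanation are assumptions made for illustration.

```python
from enum import Enum, auto


class UserGroup(Enum):
    DEVELOPER = auto()       # developers / AI researchers
    DOMAIN_EXPERT = auto()
    LAY_USER = auto()


def explain_prediction(prediction: dict, group: UserGroup) -> str:
    """Return an explanation tailored to the target group (illustrative only)."""
    if group is UserGroup.DEVELOPER:
        # Full technical detail: label, confidence, attribution scores.
        return (f"Class '{prediction['label']}' (p={prediction['confidence']:.2f}); "
                f"top attributions: {prediction['attributions']}")
    if group is UserGroup.DOMAIN_EXPERT:
        # Domain vocabulary plus the evidence the expert can verify themselves.
        evidence = ", ".join(name for name, _ in prediction["attributions"][:2])
        return f"Diagnosed as '{prediction['label']}' mainly because of {evidence}."
    # Lay users: plain language, no model jargon.
    return (f"The system suggests '{prediction['label']}', "
            f"based on the most relevant details of your case.")


prediction = {
    "label": "line fault",
    "confidence": 0.87,
    "attributions": [("signal_loss", 0.41), ("line_age", 0.22), ("weather", 0.10)],
}
for group in UserGroup:
    print(f"{group.name}: {explain_prediction(prediction, group)}")
```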
Explainable (and maintainable) expert systems
1985
Principled development techniques could greatly enhance the understandability of expert systems for both users and system developers. Current systems have limited explanatory capabilities and present maintenance problems because of a failure to explicitly represent the knowledge and reasoning that went into their design.
Issues Affecting User Confidence in Explanation Systems
2018
Recent successes of artificial intelligence, machine learning, and deep learning have generated exciting challenges in the area of explainability. For societal, regulatory, and utility reasons, systems that exploit these technologies are increasingly being required to explain their outputs to users. In addition, appropriate and timely explanation can improve user experience, performance, and confidence. We have found that users are reluctant to use such systems if they lack the understanding and confidence to explain the underlying processes and reasoning behind the results. In this paper, we present a preliminary study by nine experts that identified research issues concerning explanation and user confidence. We used a three-session collaborative process to collect, aggregate, and generate joint reflections from the group. Using this process, we identified six areas of interest that we hope will serve as a catalyst for stimulating discussion.
Directions for Explainable Knowledge-Enabled Systems
2020
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations that consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system ...
Design Decision Framework for AI Explanations
2021
Explanations can help users of Artificial Intelligence (AI) systems gain a better understanding of the reasoning behind a model's decision, facilitate their trust in AI, and assist them in making informed decisions. These benefits to how users interact and collaborate with AI have pushed the AI/ML community towards developing more understandable or interpretable models, while design researchers continue to study ways to present explanations of these models' decisions in a coherent form. However, there is still a lack of intentional design effort from the HCI community around these explanation system designs. In this paper, we contribute a framework to support the design and validation of explainable AI systems; one that requires carefully thinking through design decisions at several important decision points. This framework captures key aspects of explanations ranging from target users, to the data, to the AI models...