Expl(AI)n It to Me – Explainable AI and Information Systems Research
Related papers
AI & SOCIETY
Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to understand what goes on when an AI system produces a result and on what grounds that result is reached. There are sustained technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative, non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, or data-related issues, we focus on the conceptual underpinnings of the design decisions made by developers and other stakeholders during the lifecycle of a machine learning project. For instance, the design and development of an app that tracks snoring to detect possible health risks presupposes some picture or other of “health”, a key notion that conceptually underpins the project. We take...
Explainable AI: Making Machine Learning Decisions Transparent
Explainable AI (XAI) has emerged as a critical field in artificial intelligence, addressing the "black box" nature of complex machine learning models. This article explores the importance of transparency in AI decision-making, the techniques used to achieve explainability, and the implications for various sectors. We examine the current state of XAI, its applications in healthcare, finance, and other critical areas, and discuss the ethical and regulatory considerations surrounding transparent AI. The paper concludes with an analysis of XAI's contribution to the broader field of AI and its potential future developments.
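To make the phrase “techniques used to achieve explainability” concrete, the sketch below shows one commonly cited post-hoc approach: a global surrogate model, in which an interpretable decision tree is trained to mimic a black-box model and its structure is then read as an approximate explanation. This is not taken from the article; scikit-learn, the breast-cancer dataset, and all parameter choices are illustrative assumptions.

```python
# Illustrative sketch (not from the article): a global surrogate model,
# one common route to post-hoc explainability. An interpretable decision
# tree is fit to mimic a black-box model's predictions; inspecting the
# tree then gives an approximate, human-readable account of the model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose decisions we want to make transparent.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate, feature_names=list(X.columns)))
```

The fidelity score matters: a surrogate explanation is only as trustworthy as its agreement with the model it claims to explain.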
Proposed Guidelines for the Responsible Use of Explainable Machine Learning
2019
Explainable machine learning (ML) enables human learning from ML, human appeal of automated model decisions, regulatory compliance, and security audits of ML models. Explainable ML (i.e., explainable artificial intelligence, or XAI) has been implemented in numerous open-source and commercial packages, and it is also an important, mandatory, or embedded aspect of commercial predictive modeling in industries like financial services. However, like many technologies, explainable ML can be misused, particularly as a faulty safeguard for harmful black boxes, e.g. fairwashing or scaffolding, and for other malevolent purposes like stealing models and sensitive training data. To promote best-practice discussions for this already in-flight technology, this short text presents internal definitions and a few examples before covering the proposed guidelines. This text concludes with a seemingly natural argument for the use of interpretable models and explanatory, debugging, and disparat...
Pitfalls of Explainable ML: An Industry Perspective
ArXiv, 2021
As machine learning (ML) systems take a more prominent and central role in contributing to life-impacting decisions, ensuring their trustworthiness and accountability is of utmost importance. Explanations sit at the core of these desirable attributes of an ML system. The emerging field is frequently called “Explainable AI (XAI)” or “Explainable ML.” The goal of explainable ML is to intuitively explain the predictions of an ML system while adhering to the needs of various stakeholders. Many explanation techniques have been developed with contributions from both academia and industry. However, several existing challenges have not garnered enough interest and serve as roadblocks to widespread adoption of explainable ML. In this short paper, we enumerate challenges in explainable ML from an industry perspective. We hope these challenges will serve as promising future research directions and will contribute to democratizing explainable ML.
Explaining Machine Learning Decisions
Philosophy of Science, 2022
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
Individual Explanations in Machine Learning Models: A Survey for Practitioners
ArXiv, 2021
In recent years, the use of sophisticated statistical models that influence decisions in domains of high societal relevance has been on the rise. Although these models can often bring substantial improvements in the accuracy and efficiency of organizations, many governments, institutions, and companies are reluctant to adopt them because their output is often difficult to explain in human-interpretable ways. Hence, these models are often regarded as black boxes, in the sense that their internal mechanisms can be opaque to human audit. In real-world applications, particularly in domains where decisions can have a sensitive impact (e.g., criminal justice, credit scoring, insurance risk, health risks), model interpretability is desired. Recently, the academic literature has proposed a substantial number of methods for providing interpretable explanations of machine learning models. This survey reviews the most relevant and novel methods that form the state-of-the-art for addressi...
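As a rough illustration of what an “individual” (per-prediction) explanation looks like, the sketch below perturbs one instance feature by feature and reads the change in predicted probability as a local attribution. This is not a method from the survey; dedicated techniques such as LIME or SHAP are more principled, and the model, dataset, and helper function here are illustrative assumptions.

```python
# Illustrative sketch only, not a method from the survey: a naive
# perturbation-based "individual explanation" for one prediction.
# Each feature of a single instance is replaced by its training mean,
# and the drop in the predicted probability is read as that feature's
# local contribution.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y = data.data, data.target

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))
model.fit(X, y)

def explain_instance(model, X_train, x, feature_names):
    """Per-feature score: change in P(class 1) when the feature is
    replaced by its training-set mean (a crude local attribution)."""
    baseline = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for j, name in enumerate(feature_names):
        x_pert = x.copy()
        x_pert[j] = X_train[:, j].mean()
        scores[name] = baseline - model.predict_proba(x_pert.reshape(1, -1))[0, 1]
    return scores

# Explain the model's prediction for the first instance.
scores = explain_instance(model, X, X[0], data.feature_names)
for name, s in sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)[:5]:
    print(f"{name:25s} {s:+.3f}")
```

The output is a ranked list of the features that most moved this one prediction, which is exactly the per-instance, human-auditable artefact that “individual explanation” methods aim to provide.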
Explaining Any ML Model? – On Goals and Capabilities of XAI
ArXiv, 2022
An increasing ubiquity of machine learning (ML) motivates research on algorithms to "explain" ML models and their predictions, so-called eXplainable Artificial Intelligence (XAI). Despite many survey papers and discussions, the goals and capabilities of XAI algorithms are far from being well understood. We argue that this is because of a problematic reasoning scheme in the XAI literature: XAI algorithms are said to complement ML models with desired properties, such as interpretability or explainability. These properties are in turn assumed to contribute to a goal, like trust in an ML system. But most properties lack precise definitions, and their relationship to such goals is far from obvious. The result is a reasoning scheme that obfuscates research results and leaves an important question unanswered: What can one expect from XAI algorithms? In this article, we clarify the goals and capabilities of XAI algorithms from a concrete perspective: that of their users. "Explaining" ML models is only necessary if users have questions about them. We show that users can ask diverse questions, but that only one of them can be answered by current XAI algorithms. Answering this core question can be trivial, difficult or even impossible, depending on the ML application. Based on these insights, we outline which capabilities policymakers, researchers and society can reasonably expect from XAI algorithms.
Explainable Artificial Intelligence in Machine Learning
Capstone Project Report, 2024
Explainable Artificial Intelligence aims to help open the black box of opaque algorithms, especially those present in machine learning systems, so that they can earn due trustworthiness. This work employs a systematic literature review in order to understand how the concepts of explainability and interpretability, among others, are defined in this context. It offers a structured vocabulary to better comprehend and classify artificial intelligence systems and models, as well as the techniques used to make them more intelligible. Drawing on notions from Human-Centered Computing, it stresses that one must consider the audience to which explanations are owed, emphasizing that this audience is composed of a community of heterogeneous users.
Explaining Explanations: An Overview of Interpretability of Machine Learning
2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA)
There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias or problems in the training data, and ensure that the algorithms perform as expected. However, the explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we describe foundational concepts of explainability and show how they can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.
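For a sense of how an explanation can be produced and then assessed against held-out data rather than taken on faith, the sketch below uses permutation feature importance, a standard model-agnostic technique. It is not drawn from the DSAA paper; the scikit-learn calls, dataset, and settings are illustrative assumptions.

```python
# Hedged illustration (not from the DSAA paper): permutation feature
# importance, a widely used, model-agnostic global explanation that can
# be checked against held-out data. A feature's importance is the drop
# in score when its values are randomly shuffled.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Importances are computed on held-out data, so they reflect what the
# model actually relies on rather than what it memorised during training.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda kv: kv[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name:25s} {importance:.3f}")
```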