Designing Explainability of an Artificial Intelligence System
Related papers
Science and Engineering Ethics, 2019
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
From Responsibility to Reason-Giving Explainable Artificial Intelligence
Philosophy & Technology
We (Kevin Baum, Susanne Mantel, Eva Schmidt, and Timo Speith) argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation available of the system's recommendation. Reason explanations are especially well-suited to this end, and we examine whether, and how, it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.
DC JURIX, 2019
This research aims to explore explainable artificial intelligence, a new subfield of artificial intelligence that is gaining importance in academic and business literature due to the increased use of intelligent systems in our daily lives. As part of the research project, the necessity of explainability in AI systems will first be explained in terms of accountability, transparency, liability, and fundamental rights and freedoms. The latest explainable AI algorithms introduced by AI researchers will then be examined, first from a technical and then from a legal perspective, and their statistical and legal competencies will be analyzed. After identifying the deficiencies of current solutions, a comprehensive technical AI system design will be proposed that satisfies not only the statistical requisites but also the legal, ethical, and logical ones.
The Ethics of Understanding: Exploring Moral Implications of Explainable AI
International Journal of Science and Research (IJSR), 2024
Explainable AI (XAI) refers to artificial intelligence systems that are intentionally built so that their operations and results can be comprehended by humans. The main objective is to enhance the transparency of AI systems' decision-making processes, allowing users to understand the rationale behind particular judgements. Important elements of XAI include transparency, interpretability, reasoning, traceability, and user-friendliness. The advantages of Explainable Artificial Intelligence (XAI) include trust and confidence in the system's outputs, accountability and compliance with regulations, easier debugging and refinement of the model, greater cooperation between humans and AI systems, and informed decision-making based on transparent explanations. Examples of XAI applications include healthcare, banking, legal systems, and autonomous systems. In healthcare, XAI ensures that AI-powered diagnosis and treatment suggestions are presented in a straightforward and comprehensible manner, while in finance it provides explicit explanations for credit scoring, loan approvals, and fraud detection. In legal settings, it promotes transparency in the deployment of AI applications, thereby assuring equity and mitigating the risk of bias. As artificial intelligence becomes more embedded in society, the significance of explainability will continue to increase, supporting the responsible and effective use of these systems. The study of explainable AI is essential because it tackles the ethical, sociological, and technical difficulties presented by the growing use of AI systems. The level of transparency in AI decision-making processes has a direct influence on accountability, since systems that are not transparent can hide the reasoning behind their judgements. Explainability is also crucial for detecting and reducing biases in AI systems, thus preventing them from perpetuating or worsening social injustices. The objective of the study is to identify significant ethical concerns, understand the viewpoints of stakeholders, establish an ethical framework, and provide policy suggestions. The incorporation of Explainable AI into different industries has a significant and far-reaching effect on both technology and society, including potential benefits such as increased trust and acceptance, adherence to regulations, improved AI development and troubleshooting, ethical AI design, empowerment and equal access, advancements in education and collaboration, changes in skill requirements, and the establishment of new ethical guidelines.
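None of the abstracts above ties reason-giving explanation to a concrete mechanism, so the following minimal Python sketch is offered purely as an illustration of the idea. It assumes a hypothetical linear credit-scoring model with invented feature names, weights, and threshold; it is not the method of any paper listed here. It shows one simple way to turn per-feature contributions into human-readable reasons for a decision.

```python
# Illustrative sketch only: a hypothetical "reason-giving" explanation for a
# credit decision, assuming a simple linear scoring model. Feature names,
# weights, and the threshold are invented for demonstration purposes.

# Hypothetical model: score = bias + sum(weight_i * value_i)
WEIGHTS = {"income": 0.6, "debt_ratio": -1.2, "late_payments": -0.8, "years_employed": 0.3}
BIAS = -0.5
THRESHOLD = 0.0  # approve if score >= threshold


def decide_and_explain(applicant: dict) -> dict:
    """Return a decision plus per-feature contributions, ordered by impact."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    approved = score >= THRESHOLD
    # Sort reasons by absolute contribution so the most influential factors come first.
    reasons = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {
        "approved": approved,
        "score": round(score, 3),
        "reasons": [
            f"{name} {'raised' if c > 0 else 'lowered'} the score by {abs(c):.2f}"
            for name, c in reasons
        ],
    }


if __name__ == "__main__":
    applicant = {"income": 1.1, "debt_ratio": 0.7, "late_payments": 1.0, "years_employed": 2.0}
    result = decide_and_explain(applicant)
    print("Approved:", result["approved"])
    for reason in result["reasons"]:
        print(" -", reason)
```

In this linear setting each contribution is exactly the weight times the input value, so the listed reasons fully account for the score; for nonlinear models one would need attribution techniques beyond this sketch.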
Accountability of AI Under the Law: The Role of Explanation
SSRN Electronic Journal
The ubiquity of systems using artificial intelligence or "AI" has brought increasing attention to how those systems should be regulated. The choice of how to regulate AI systems will require care. AI systems have the potential to synthesize large amounts of data, allowing for greater levels of personalization and precision than ever before; applications range from clinical decision support to autonomous driving and predictive policing. That said, our AIs continue to lag in common sense reasoning [McCarthy, 1960], and thus there exist legitimate concerns about the intentional and unintentional negative consequences of AI systems [Bostrom, 2003, Amodei et al., 2016, Sculley et al., 2014]. How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, we focus on one tool: explanation. Questions about a legal right to explanation from AI systems were recently debated in the EU General Data Protection Regulation [Goodman and Flaxman, 2016, Wachter et al., 2017a], and thus thinking carefully about when and how explanation from AI systems might improve accountability is timely. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems. Below, we briefly review current societal, moral, and legal norms around explanation, and then focus on the different contexts under which explanation is currently required under the law. We find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome. These consistencies allow us to list the technical considerations that must be addressed if we desire AI systems that can provide the kinds of explanations currently required of humans under the law. Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should generally be technically feasible but may sometimes be practically onerous; there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard.
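The abstract above notes that, when explanation is demanded from humans under the law, what we typically want to know is whether and how certain input factors affected the outcome. As a hedged illustration of one way an AI system could answer that question, the following Python sketch searches for a single-feature counterfactual, loosely in the spirit of the counterfactual explanations debated around the GDPR; the loan model, feature names, and candidate value grid are all hypothetical and are not drawn from the cited work.

```python
# Illustrative sketch only: a brute-force search for a single-feature
# counterfactual ("how would the decision change if this input factor had been
# different?"). The model and candidate value grid below are hypothetical.

from typing import Callable, Dict, List, Optional, Tuple


def single_feature_counterfactual(
    predict: Callable[[Dict[str, float]], bool],
    instance: Dict[str, float],
    candidate_values: Dict[str, List[float]],
) -> Optional[Tuple[str, float]]:
    """Return the (feature, new_value) whose smallest change flips the prediction, if any."""
    original = predict(instance)
    best = None  # (feature, value, distance)
    for feature, values in candidate_values.items():
        for value in values:
            changed = dict(instance, **{feature: value})
            if predict(changed) != original:
                distance = abs(value - instance[feature])
                if best is None or distance < best[2]:
                    best = (feature, value, distance)
    return (best[0], best[1]) if best else None


if __name__ == "__main__":
    # Hypothetical loan model: approve when income minus debt exceeds a threshold.
    predict = lambda x: (x["income"] - x["debt"]) > 30.0
    instance = {"income": 50.0, "debt": 25.0}
    grid = {"income": [55.0, 60.0, 70.0], "debt": [20.0, 15.0, 10.0]}
    flip = single_feature_counterfactual(predict, instance, grid)
    print("Decision:", predict(instance), "| counterfactual change:", flip)
```

A brute-force grid like this only scales to a handful of features and candidate values; it is meant to show the shape of the question being asked of the system, not a production technique.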
The Role of Explainable AI in the Research Field of AI Ethics
ACM Transactions on Interactive Intelligent Systems
Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice. One solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields of AI ethics and XAI lack a common framework and conceptualization, and there is little clarity about the field's depth and versatility, so a systematic approach to understanding the corpus is needed. A systematic review offers an opportunity to detect research gaps and focus points. This paper presents the results of a systematic mapping study (SMS) of the research field of the Ethics of AI, focusing on the role of XAI and on how the topic has been studied empirically. An SMS is a tool for performing a repeatable and continuable literature search. This paper contributes to the research ...
Toward Accountable and Explainable Artificial Intelligence Part one: Theory and Examples
2022
After reviewing the current state of explainable artificial intelligence (XAI) capabilities in artificial intelligence (AI) systems developed for critical domains like criminology, engineering, governance, health, law, and psychology, this paper proposes a domain-independent accountable explainable artificial intelligence (AXAI) capability framework. The proposed AXAI framework extends the XAI capability to let AI systems share their decisions and adequately explain the underlying reasoning processes. The idea is to help AI system developers overcome algorithmic biases and system limitations through the incorporation of domain-independent AXAI capabilities. Moreover, existing XAI methods neither separate nor quantify measures of comprehensibility, accuracy, and accountability, so incorporating and assessing XAI capabilities remains difficult. Assessment of the AXAI capabilities of two AI systems in this paper demonstrates that the proposed AXAI framework facilitates separation and me...
A Philosophical Approach for a Human-centered Explainable AI
2020
Requests for technical specifications on the notion of explainability of AI are urgent, although the definitions proposed are sometimes confusing. It is clear from the available literature that it is not easy to provide explicit, discrete and general criteria according to which an algorithm can be considered explainable, especially regarding the issue of trust in the human-machine relationship. The question of black boxes has turned out to be less obvious than we