The Role of Explainable AI in the Research Field of AI Ethics

Mapping the landscape of ethical considerations in explainable AI research

Ethics and Information Technology, 2024

With its potential to contribute to the ethical governance of AI, eXplainable AI (XAI) research frequently asserts its relevance to ethical considerations. Yet, the substantiation of these claims with rigorous ethical analysis and reflection remains largely unexamined. This contribution endeavors to scrutinize the relationship between XAI and ethical considerations. By systematically reviewing research papers mentioning ethical terms in XAI frameworks and tools, we investigate the extent and depth of ethical discussions in scholarly research. We observe a limited and often superficial engagement with ethical theories, with a tendency to acknowledge the importance of ethics while treating it as a monolithic, decontextualized concept. Our findings suggest a pressing need for a more nuanced and comprehensive integration of ethics in XAI research and practice. To support this, we propose critically reconsidering transparency and explainability with regard to ethical considerations during XAI system design, while accounting for ethical complexity in practice. As future research directions, we point to the promotion of interdisciplinary collaboration and education, including for underrepresented ethical perspectives. Such ethical grounding can guide the design of ethically robust XAI systems, aligning technical advancements with ethical considerations.

The Ethics of Understanding: Exploring Moral Implications of Explainable AI

International Journal of Science and Research (IJSR), 2024

Explainable AI (XAI) refers to a specific kind of artificial intelligence system that is intentionally built to ensure that its operations and results can be comprehended by humans. The main objective is to enhance the transparency of AI systems' decision-making processes, allowing users to understand the rationale behind particular judgements. Important elements of XAI include transparency, interpretability, reasoning, traceability, and user-friendliness. The advantages of XAI include trust and confidence in the system's outputs, accountability and compliance with regulations, easier debugging and refinement of models, greater cooperation between humans and AI systems, and informed decision-making based on transparent explanations. XAI applications span healthcare, banking, legal systems, and autonomous systems. In healthcare, XAI helps ensure that AI-powered diagnosis and treatment suggestions are presented in a straightforward and comprehensible manner, while in finance it offers clear explanations for credit scoring, loan approvals, and fraud detection. In legal settings, it promotes transparency in the deployment of AI applications, thereby supporting equity and mitigating the risk of bias. As artificial intelligence becomes more embedded in society, the significance of explainability will continue to increase, ensuring responsible and effective use of these systems. The study of explainable AI is essential as it tackles the ethical, sociological, and technical difficulties presented by the growing use of AI systems. The level of transparency in AI decision-making processes directly influences accountability, since systems that are not transparent can hide the reasoning behind their judgements. Explainability is crucial for detecting and reducing biases in AI systems, thus preventing them from perpetuating or worsening social injustices. The objective of the study is to identify significant ethical concerns, understand the viewpoints of stakeholders, establish an ethical framework, and provide policy recommendations. The incorporation of Explainable AI into different industries has a significant and far-reaching effect on both technology and society, including potential benefits such as increased trust and acceptance, adherence to regulations, improved AI development and troubleshooting, ethical AI design, empowerment and equal access, advancements in education and collaboration, changes in skill requirements, and the establishment of new ethical guidelines.
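As a concrete illustration of the kind of transparent, rule-based explanation the finance examples above gesture at, the following minimal sketch shows a toy loan decision that returns its reasons alongside its verdict. The feature names and thresholds are invented for illustration; they come from neither this paper nor any real lending policy.

```python
# Hypothetical rule-based credit decision with a human-readable explanation.
# All thresholds and feature names below are invented for illustration.
def decide_loan(income: float, debt_ratio: float, credit_score: int):
    reasons = []
    if credit_score < 620:
        reasons.append(f"credit score {credit_score} is below the 620 threshold")
    if debt_ratio > 0.4:
        reasons.append(f"debt-to-income ratio {debt_ratio:.0%} exceeds 40%")
    if income < 25_000:
        reasons.append(f"annual income {income:,.0f} is below the 25,000 minimum")
    approved = not reasons
    explanation = ("approved: all policy rules satisfied" if approved
                   else "declined: " + "; ".join(reasons))
    return approved, explanation

print(decide_loan(income=30_000, debt_ratio=0.5, credit_score=700))
# -> (False, 'declined: debt-to-income ratio 50% exceeds 40%')
```

Because every rule is explicit, the system can cite exactly which policy a decision rests on, which is what distinguishes this style of explanation from post-hoc accounts of a black-box model.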

Should explainability be a fifth ethical principle in AI ethics?

AI and Ethics, 2022

It has recently been claimed that explainability should be added as a fifth principle to AI ethics, supplementing the four principles usually accepted in bioethics: Autonomy, Beneficence, Non-maleficence, and Justice. We propose here that, with regard to AI, explainability is on the one hand indeed a new dimension of ethical concern that should be paid attention to, while on the other hand explainability in itself should not necessarily be considered an ethical “principle”. We think of explainability rather (i) as an epistemic requirement for taking ethical principles into account, not as an ethical principle in itself; and (ii) as an ethical demand that can be derived from ethical principles. We agree that explainability is a key demand in AI ethics, with practical importance for stakeholders to take into account; but we argue that it should not be considered a fifth ethical principle, in order to maintain philosophical consistency in the organization of AI ethical principles.

Ethical content in artificial intelligence systems: A demand explained in three critical points

Frontiers in Psychology, 2023

Artificial intelligence (AI) advancements are changing people's lives in ways never imagined before. We argue that ethics used to be put in perspective by seeing technology as an instrument during the first machine age. However, the second machine age is already a reality, and the changes brought by AI are reshaping how people interact and flourish. That said, ethics must also be analyzed as a requirement within the content of AI systems themselves. To develop this argument, we bring three critical points (autonomy, the right to explanation, and value alignment) to guide the debate on why ethics must be part of the systems, not just part of the principles that guide their users. In the end, our discussion leads to a reflection on the redefinition of AI's moral agency. Our distinguishing argument is that ethical questioning can be resolved only after granting AI moral agency, even if not at the human level. For future research, we suggest exploring new ways of seeing ethics and finding a place for machines, using the inputs of the models we have used for centuries while adapting to the new reality of coexistence between artificial intelligence and humans.

Examination of Current AI Systems within the Scope of Right to Explanation and Designing Explainable AI Systems

DC JURIX, 2019

This research aims to explore explainable artificial intelligence, a new subfield of artificial intelligence that is gaining importance in academic and business literature due to the increased use of intelligent systems in our daily lives. As part of the research project, the necessity of explainability in AI systems will first be explained in terms of accountability, transparency, liability, and fundamental rights and freedoms. The latest explainable AI algorithms introduced by AI researchers will then be examined, first from a technical and then from a legal perspective, and their statistical and legal competencies will be analyzed. After identifying the deficiencies of current solutions, a comprehensive technical AI system design will be proposed that satisfies not only the statistical requisites but also the legal, ethical, and logical ones.

On Knowing the 'why' and the impossibility of ethical AI

Recently, the idea of 'ethical AI' has been gaining traction, and while the project is much needed, a worry about a decontextualized, principle-driven ethics takes on added dimensions for the project of ethical AI. Indeed, the absence of context remains a general worry for many aspects of AI. This paper reflects on two arguments for deeply context-sensitive ethics, namely particularism and feminist ethics, and argues that they demonstrate a tension in the pursuit of ethical AI. Particularism argues that principles are suspect because, in the right context, any putative principle could switch valence. Feminist ethics and feminist epistemology are also deeply contextual, arguing that contextual values are relevant to all types of theory justification. Ultimately, I argue that a truly ethical AI must find a way to resolve this deep tension. Artificial intelligence, machine learning, and data mining have been developed to perform or streamline complex processes and tasks, yet many of these tasks are fraught with ethical complexity.

AI Explainability in Practice

Leslie, D., Rincón, C., Briggs, M., Perini, A., Jayadeva, S., Borda, A., et al. The Alan Turing Institute, 2024

The purpose of this workbook is to introduce participants to the principle of AI Explainability. Understanding how, why, and when explanations of AI-supported or -generated outcomes need to be provided, and what impacted people expect these explanations to include, is crucial to fostering responsible and ethical practices within your AI projects. To guide you through this process, we address two essential questions: What do we need to explain? And who do we need to explain this to? This workbook offers practical insights and tools to facilitate your exploration of AI Explainability. By providing actionable approaches, we aim to equip you and your team with the means to identify when and how to employ various types of explanations effectively. This workbook is part of the AI Ethics and Governance in Practice series (https://aiethics.turing.ac.uk) co-developed by researchers at The Alan Turing Institute in partnership with key public sector stakeholders.

Unlocking the Black Box: Explainable Artificial Intelligence (XAI) for Trust and Transparency in AI Systems

Journal of Digital Art & Humanities, 2023

Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed as means of enhancing interpretability. The paper highlights the challenges posed by black-box AI models, explores the role of XAI in enhancing trust and transparency, and examines the ethical considerations and responsible deployment of XAI. By promoting transparency and interpretability, this review aims to build trust, encourage accountable AI systems, and contribute to the ongoing discourse on XAI.
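To make the "model-agnostic approaches" mentioned above concrete, the sketch below implements permutation importance, one simple model-agnostic technique: it permutes one feature at a time and reads the resulting accuracy drop as that feature's importance. This is an illustrative example over assumed toy data, not a procedure taken from the paper; any model exposing a predict method could stand in for the random forest.

```python
# A minimal, model-agnostic permutation-importance sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy tabular data: 200 samples, 4 features; only features 0 and 2 matter.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 2 * X[:, 2] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = accuracy_score(y, model.predict(X))

# Permute one feature at a time; the accuracy drop is that feature's importance.
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    drop = baseline - accuracy_score(y, model.predict(X_perm))
    print(f"feature {j}: importance = {drop:.3f}")
```

Because the procedure only queries the model through its predictions, it applies unchanged to any black-box classifier, which is precisely what "model-agnostic" means in the review's taxonomy.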

The role of explainable AI in the context of the AI Act

2023 ACM Conference on Fairness, Accountability, and Transparency

The proposed EU regulation for Artificial Intelligence (AI), the AI Act, has sparked debate about the role of explainable AI (XAI) in high-risk AI systems. Some argue that black-box AI models will have to be replaced with transparent ones, while others argue that using XAI techniques might help in achieving compliance. This work aims to bring some clarity regarding XAI in the context of the AI Act, focusing in particular on the AI Act requirements for transparency and human oversight. After outlining key points of the debate and describing the current limitations of XAI techniques, the paper carries out an interdisciplinary analysis of how the AI Act addresses the issue of opaque AI systems. In particular, we argue that the AI Act neither mandates a requirement for XAI, which is the subject of intense scientific research and is not without technical limitations, nor bans the use of black-box AI systems. Instead, the AI Act aims to achieve its stated policy objectives through a focus on transparency (including documentation) and human oversight. Finally, to concretely illustrate our findings and conclusions, a use case on AI-based proctoring is presented.