The Role of Normware in Trustworthy and Explainable AI

The Ethics of Understanding: Exploring Moral Implications of Explainable AI

International Journal of Science and Research (IJSR), 2024

Explainable AI (XAI) refers to artificial intelligence systems that are intentionally built so that their operations and results can be comprehended by humans. The main objective is to enhance the transparency of AI systems' decision-making processes, allowing users to understand the rationale behind their judgements. Important elements of XAI include transparency, interpretability, reasoning, traceability, and user-friendliness. The advantages of XAI include trust and confidence in the system's outputs, accountability and compliance with regulations, easier debugging and refinement of the model, greater cooperation between humans and AI systems, and informed decision-making based on transparent explanations. Examples of XAI applications include healthcare, banking, legal systems, and autonomous systems. In healthcare, XAI ensures that AI-powered diagnosis and treatment suggestions are presented in a straightforward and comprehensible manner, while in finance it offers explicit explanations for credit scoring, loan approvals, and fraud detection. Legal frameworks promote transparency in the implementation of AI applications, thereby assuring equity and mitigating the risk of bias. As artificial intelligence becomes more embedded in society, the significance of explainability will continue to increase, guaranteeing the responsible and efficient use of these systems. The study of explainable AI is essential because it tackles the ethical, sociological, and technical difficulties presented by the growing use of AI systems. The level of transparency in AI decision-making processes has a direct influence on accountability, since systems that are not transparent can hide the reasoning behind their judgements. Explainability is crucial for detecting and reducing biases in AI systems, thereby preventing them from perpetuating or worsening social injustices. The objectives of the study are to identify significant ethical concerns, understand the viewpoints of stakeholders, establish an ethical framework, and offer policy recommendations. The incorporation of Explainable AI into different industries has a significant and far-reaching effect on both technology and society. Potential benefits include increased trust and acceptance, adherence to regulations, improved AI development and troubleshooting, ethical AI design, empowerment and equal access, advancements in education and collaboration, changes in skill requirements, and the establishment of new ethical guidelines.
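To make the idea of a "transparent explanation" concrete, the sketch below is a minimal, illustrative example, not drawn from the paper: it trains a simple credit-approval model on synthetic data and produces both a global explanation (which features matter overall, via scikit-learn's permutation importance) and a local one (how each feature contributed to a single applicant's decision). The feature names, the synthetic data, and the choice of a linear model are all assumptions made purely for illustration.

```python
# Minimal sketch of global and local explanations for a credit-approval model.
# All feature names and data are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_len", "num_late_payments"]
X = rng.normal(size=(500, 4))
# Hypothetical ground truth: approval driven mainly by income and late payments.
y = (X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Global explanation: permutation importance scores how much each feature matters.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:22s} importance = {score:.3f}")

# Local explanation for one applicant: signed contribution of each feature
# (coefficient * feature value), a transparent rationale for a single decision.
applicant = X[0]
contributions = model.coef_[0] * applicant
for name, c in zip(feature_names, contributions):
    print(f"{name:22s} contribution = {c:+.3f}")
```

A linear model is used here only because its per-feature contributions are directly readable; for opaque models, post-hoc explainers would play the same role of surfacing the rationale behind an individual judgement.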

How Explainability Contributes to Trust in AI

2022 ACM Conference on Fairness, Accountability, and Transparency

We provide a philosophical explanation of the relation between artificial intelligence (AI) explainability and trust in AI, making a case for expressions such as "explainability fosters trust in AI" that commonly appear in the literature. This explanation relates the justification of an AI's trustworthiness to the need to monitor it during its use. We discuss the latter by referencing an account of trust, called "trust as anti-monitoring," that several authors have contributed to developing. We focus our analysis on the case of medical AI systems, noting that our proposal is compatible with internalist and externalist justifications of the trustworthiness of medical AI and with recent accounts of warranted contractual trust. We propose that "explainability fosters trust in AI" if and only if it fosters justified and warranted paradigmatic trust in AI, i.e., trust in the presence of the justified belief that the AI is trustworthy, which, in turn, causally contributes to reliance on the AI in the absence of monitoring. We argue that our proposed approach can capture the complexity of the interactions between physicians and medical AI systems in clinical practice, as it can distinguish between cases where humans hold different beliefs about the trustworthiness of the medical AI and exercise varying degrees of monitoring over it. Finally, we apply our account to users' trust in AI, where, we argue, explainability does not contribute to trust. By contrast, when considering public trust in AI as used by a human, we argue, it is possible for explainability to contribute to trust. Our account can explain the apparent paradox that in order to trust AI, we must trust AI users not to trust AI completely. Summing up, we can explain how explainability contributes to justified trust in AI without leaving a reliabilist framework, but only by redefining the trusted entity as an AI-user dyad.

The Role of Explainable AI in the Research Field of AI Ethics

ACM Transactions on Interactive Intelligent Systems

Ethics of Artificial Intelligence (AI) is a growing research field that has emerged in response to the challenges related to AI. Transparency poses a key challenge for implementing AI ethics in practice. One solution to transparency issues is AI systems that can explain their decisions. Explainable AI (XAI) refers to AI systems that are interpretable or understandable to humans. The research fields of AI ethics and XAI lack a common framework and conceptualization, and there is no clarity about the field's depth and versatility. A systematic approach to understanding the corpus is needed, and a systematic review offers an opportunity to detect research gaps and focus points. This paper presents the results of a systematic mapping study (SMS) of the research field of the Ethics of AI. The focus is on understanding the role of XAI and how the topic has been studied empirically. An SMS is a tool for performing a repeatable and continuable literature search. This paper contributes to the research ...

Making Sense of the Conceptual Nonsense ‘Trustworthy AI’

AI and Ethics, 2022

Following the publication of numerous ethical principles and guidelines, the concept of 'Trustworthy AI' has become widely used. However, several AI ethicists argue against using this concept, often backing their arguments with decades of conceptual analyses made by scholars who studied the concept of trust. In this paper, I describe the historical philosophical roots of their objection and the premise that trust entails a human quality that technologies lack. Then, I review existing criticisms of 'Trustworthy AI' and the consequence of ignoring these criticisms: if the concept of 'Trustworthy AI' continues to be used, we risk attributing responsibilities to agents who cannot be held responsible and, consequently, eroding social structures concerning accountability and liability. Nevertheless, despite suggestions to shift the paradigm from 'Trustworthy AI' to 'Reliable AI', I argue that, realistically, this concept will continue to be used. I end by arguing that, ultimately, AI ethics is also about power, social justice, and scholarly activism. Therefore, I propose that community-driven and social justice-oriented ethicists of AI and trust scholars further focus on (a) the democratic aspects of trust formation and (b) the critical social aspects highlighted by phenomena of distrust. This way, it will be possible to further reveal shifts in power relations, challenge unfair status quos, and suggest meaningful ways to safeguard the interests of citizens.

Keep trusting! A plea for the notion of Trustworthy AI

AI & Society, 2023

A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee's motivations and goodwill makes the notion of TAI a category error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping trust in AI systems apart from interpersonal trust. These notions share a conceptual core but should be treated as distinct.