Designing Trust in Artificial Intelligence: A Comparative Study Among Specifications, Principles and Levels of Control
Related papers
Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis
Philosophy & Technology, 2021
In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory (or collection of theories) of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a position that is intermediate between the modified pure rational-choice account and an account that gives rise to trustworthy AI, might allow us to address the practical problem of trust, before identifying and critically evaluating two candidate trust-engineering approaches.
Electronic Markets
With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework to provide a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, applying systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.
Keep trusting! A plea for the notion of Trustworthy AI
AI & SOCIETY, 2023
A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee's motivations and goodwill makes the notion of TAI a categorical error. After providing an overview of the debate, we contend that the prevailing views on trust and AI fail to account for the ethically relevant and value-laden aspects of the design and use of AI systems, and we propose an understanding of the notion of TAI that explicitly aims at capturing these aspects. The problems involved in applying trust and trustworthiness to AI systems are overcome by keeping apart trust in AI systems and interpersonal trust. These notions share a conceptual core but should be treated as distinct ones.
A conceptual framework for establishing trust in real world intelligent systems
2021
Intelligent information systems that contain emergent elements often encounter trust problems because their results are not sufficiently explained and the underlying procedure cannot be fully retraced. This is caused by a control flow that depends either on stochastic elements or on the structure and relevance of the input data. Trust in such algorithms can be established by letting users interact with the system so that they can explore results and find patterns that can be compared with their expected solution. Reflecting features and patterns of human understanding of a domain against algorithmic results can create awareness of such patterns and may increase the trust that a user has in the solution. If expectations are not met, close inspection can be used to decide whether a solution conforms to the expectations or whether it goes beyond the expected. By either accepting or rejecting a solution, the user’s set of expectations evolves and a learning process for the users is established. ...
The Value of Measuring Trust in AI - A Socio-Technical System Perspective
2022
Building trust in AI-based systems is deemed critical for their adoption and appropriate use. Recent research has thus attempted to evaluate how various attributes of these systems affect user trust. However, limitations regarding the definition and measurement of trust in AI have hampered progress in the field, leading to results that are inconsistent or difficult to compare. In this work, we provide an overview of the main limitations in defining and measuring trust in AI. We focus on the attempt to give trust in AI a numerical value and on its utility in informing the design of real-world human-AI interactions. Taking a socio-technical system perspective on AI, we explore two distinct approaches to tackle these challenges. We provide actionable recommendations on how these approaches can be implemented in practice and inform the design of human-AI interactions. We thereby aim to provide a starting point for researchers and designers to re-evaluate the current focus on trust in AI, improving the alignment between what empirical research paradigms may offer and the expectations of real-world human-AI interactions.
Philosophy & Technology, 2019
Machine learning (ML) models and algorithms, the real engines of the artificial intelligence (AI) revolution, are nowadays embedded in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we will focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo’s concept of “e-trust,” we will discuss all the components of the proposed model and the reasons to trust in human-AI interactions in an example of releva...
On Trust in AI - A Systemic Approach
scip Labs, 2018
Trust is one critical success factor for AI adoption, yet hype, complexity, and monopolism prevent trust in AI. A systemic approach investigates the concept of trust from various perspectives: person, process, environment, and time. Competence and reliability emerge as the most important trust factors. The A-IQ, an objective measurement of AI capabilities, is developed to foster trust in AI.
Technological Forecasting and Social Change, 2016
Automation with inherent artificial intelligence (AI) is increasingly emerging in diverse applications, for instance, autonomous vehicles and medical assistance devices. However, despite their growing use, there is still noticeable skepticism in society regarding these applications. Drawing an analogy from human social interaction, the concept of trust provides a valid foundation for describing the relationship between humans and automation. Accordingly, this paper explores how firms systematically foster trust in applied AI. Based on an empirical analysis of nine case studies in the transportation and medical technology industries, our study illustrates the dichotomous constitution of trust in applied AI. Concretely, we emphasize the symbiosis of trust in the technology and trust in the innovating firm and its communication about the technology. In doing so, we provide tangible approaches to increase trust in the technology and illustrate the necessity of a democratic development process for applied AI.