White Lies on Silver Tongues: Why robots need to deceive (and how)

Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, 2017

It is easy to see that social robots will need the ability to detect and evaluate deceptive speech; otherwise they will be vulnerable to manipulation by malevolent humans. More surprisingly, we argue that effective social robots must also be able to produce deceptive speech. Many forms of technically deceptive speech perform a positive pro-social function, and the social integration of artificial agents will be possible only if they participate in this market of constructive deceit. We demonstrate that a crucial condition for detecting and producing deceptive speech is possession of a theory of mind. Furthermore, strategic reasoning about deception requires identifying a type of goal distinguished by its priority over the norms of conversation, which we call an ulterior motive. We argue that this goal is the appropriate target for ethical evaluation, not the veridicality of speech per se. Consequently, deception-capable robots are compatible with the most prominent programs to ensure that robots behave ethically.
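
The abstract's central move, judging an utterance by its ulterior motive rather than by its literal truth, can be illustrated with a small sketch. This is not code from the chapter; the classes, motive labels, and the pro-social whitelist below are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Utterance:
    content: str
    literally_true: bool            # veridicality of what is said
    ulterior_motive: Optional[str]  # goal prioritized over conversational norms, if any

# Hypothetical whitelist of motives treated as pro-social (assumption, not from the chapter).
PROSOCIAL_MOTIVES = {"spare_feelings", "preserve_privacy", "maintain_politeness"}

def ethically_problematic(u: Utterance) -> bool:
    """Flag an utterance by its ulterior motive, not by whether it is literally false."""
    if u.ulterior_motive is None:
        return False  # ordinary cooperative speech, whether true or mistaken
    return u.ulterior_motive not in PROSOCIAL_MOTIVES

# A white lie is technically deceptive, but its motive is pro-social, so it is not flagged.
print(ethically_problematic(Utterance("I love the gift!", False, "spare_feelings")))      # False
print(ethically_problematic(Utterance("The part shipped on time.", False, "conceal_fraud")))  # True
```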


Acting deceptively: Providing robots with the capacity for deception

2011

Deception is utilized by a variety of intelligent systems ranging from insects to human beings. It has been argued that the use of deception is an indicator of theory of mind (Cheney & Seyfarth, 2008) and of social intelligence (Hauser, 1992). We use interdependence theory and game theory to explore the phenomenon of deception from the perspective of robotics, and to develop an algorithm which allows an artificially intelligent system to determine whether deception is warranted in a social situation. Using techniques introduced in (Wagner, 2009a), we present an algorithm that bases a robot's deceptive action selection on its model of the individual it is attempting to deceive. Simulation and robot experiments that use these algorithms to investigate the nature of deception itself are discussed.
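
The selection step described above, choosing a deceptive action on the basis of a model of the individual to be deceived, can be sketched as follows. This is a hedged illustration of the general idea, not Wagner and Arkin's implementation; the signals, the partner's assumed response rule, and the payoff values are all invented for the example.

```python
# The robot's theory-of-mind model: which action each signal is predicted
# to induce in the partner (illustrative assumption).
partner_model = {
    "signal_hiding_left":  "search_right",
    "signal_hiding_right": "search_left",
    "signal_truthful":     "search_actual_location",
}

# The robot's outcome for each predicted partner response (illustrative values).
robot_outcome = {
    "search_right": 5,             # pursuer misled, robot stays hidden
    "search_left": 5,
    "search_actual_location": -5,  # robot is found
}

def select_signal(model, outcomes):
    """Pick the signal whose predicted partner response maximizes the robot's outcome."""
    return max(model, key=lambda signal: outcomes[model[signal]])

print(select_signal(partner_model, robot_outcome))  # "signal_hiding_left"
```

On this reading, the quality of the deception depends entirely on the accuracy of the partner model: without a model of the deceived, the robot has no basis for preferring one false signal over another.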

Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents

Cognitive Science, 2021

The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary people willing to ascribe deceptive intentions to artificial agents? (b) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (c) Do people blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

Robot deception: Recognizing when a robot should deceive

2009

This article explores the possibility of developing robot control software capable of discerning when and if a robot should deceive. Exploration of this problem is critical for developing robots with deception capabilities and may lend valuable insight into the phenomenon of deception itself. In this paper we explore deception from an interdependence/game-theoretic perspective. Further, we develop and experimentally investigate an algorithm capable of indicating whether or not a particular social situation warrants deception on the part of the robot. Our qualitative and quantitative results provide evidence that our algorithm does recognize situations that justify deception, and that a robot capable of discerning these situations is better suited to act than one that cannot.
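
The recognition step, deciding whether a situation warrants deception at all, can be sketched in the same spirit. The scoring rules and thresholds below are my assumptions rather than the paper's formulation; the underlying criterion, that deception is warranted only when the agents' outcomes are both interdependent and in conflict, follows the interdependence/game-theoretic framing described above.

```python
def warrants_deception(matrix, dep_threshold=1.0, conflict_threshold=0):
    """matrix maps (robot_action, partner_action) -> (robot_payoff, partner_payoff)."""
    robot_payoffs = [pay[0] for pay in matrix.values()]
    # Interdependence: the robot's payoff actually varies across joint actions.
    dependence = max(robot_payoffs) - min(robot_payoffs)
    # Conflict: the joint outcome best for one agent is bad for the other.
    best_for_robot = max(matrix.values(), key=lambda pay: pay[0])
    best_for_partner = max(matrix.values(), key=lambda pay: pay[1])
    conflict = (best_for_robot[1] <= conflict_threshold
                or best_for_partner[0] <= conflict_threshold)
    return dependence >= dep_threshold and conflict

# Hide-and-seek style situation: being found is bad for the robot, good for the seeker.
hide_and_seek = {
    ("hide_left",  "search_left"):  (-5,  5),
    ("hide_left",  "search_right"): ( 5, -1),
    ("hide_right", "search_left"):  ( 5, -1),
    ("hide_right", "search_right"): (-5,  5),
}
print(warrants_deception(hide_and_seek))  # True: interdependent and conflicting
```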

Can a robot lie?

The potential capacity for robots to deceive has received considerable attention recently. Many papers focus on the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment with 399 participants that explores the following three questions: (i) Are ordinary people willing to ascribe intentions to deceive to artificial agents? (ii) Are they as willing to judge a robot lie as a lie as they would be when human agents engage in verbal deception? (iii) Do they blame a lying artificial agent to the same extent as a lying human agent? The response to all three questions is a resounding yes. This, I argue, implies that robot deception and its normative consequences deserve considerably more attention than they presently receive.

We need to talk about deception in social robotics!

Ethics and Information Technology

Although some authors claim that deception requires intention, we argue that there can be deception in social robotics, whether or not it is intended. By focusing on the deceived rather than the deceiver, we propose that false beliefs can be created in the absence of intention. Supporting evidence is found in both human and animal examples. Instead of assuming that deception is wrong only when carried out to benefit the deceiver, we propose that deception in social robotics is wrong when it leads to harmful impacts on individuals and society. The appearance and behaviour of a robot can lead to an overestimation of its functionality or to an illusion of sentience or cognition that can promote misplaced trust and inappropriate uses such as care and companionship of the vulnerable. We consider the allocation of responsibility for harmful deception. Finally, we suggest that harmful impacts could be prevented by legislation and by the development of an assessment framework f...

Other-Oriented Robot Deception: How Can a Robot’s Deceptive Feedback Help Humans in HRI?

Social Robotics, 2016

Deception is a common and essential behavior of social agents. With the increasing use of social robots, the need for robot deception is also growing as a means of achieving more socially intelligent robots. The goal is that robot deception should be used to benefit humankind. We define this type of benevolent deceptive behavior as other-oriented robot deception. In this paper, we explore an appropriate context in which a robot can potentially use other-oriented deceptive behaviors in a beneficial way. Finally, we conduct a formal human-robot interaction study with elderly persons and demonstrate that using other-oriented robot deception in a motor-cognition dual task can benefit the deceived human partners. We also discuss the ethical implications of robot deception, which is essential for advancing research on this topic.

Ethical Issues in Robotic Deception 1st Draft

This chapter focuses on robotic deception and offers a general overview of the ethical debate on this issue. Deception is not only a complex human phenomenon that has interested philosophers, psychologists, political theorists, artists, scientists, and other intellectuals throughout history, but also an opportunity widely exploited in many areas of technological design, such as computer interface design and social robotics. After a brief theoretical introduction that accounts for the connection between technology and deception, deceptive design is explored first in the field of Human-Computer Interaction (HCI), more specifically in captology (the science of persuasive technologies), and then in Human-Robot Interaction (HRI) and social robotics. Afterwards, the ethical admissibility of deception by design is put to the test, and the most relevant positions on the matter are reviewed with an eye to the philosophical background against which they are to be understood. It is shown that deontological and consequentialist approaches are the most common methods of inquiry adopted in the debate, and their respective strengths and weaknesses are underlined. Finally, the ethical admissibility of consciously engaging in fictional relationships with robot companions and carers, a relevant but controversial case in this respect, is discussed.

Discussions About Lying With An Ethical Reasoning Robot

Proceedings of the 2017 IEEE International Symposium on Robot and Human Interactive Communication, 2017

The conversational ethical reasoning robot Immanuel is presented. Immanuel can reason about moral dilemmas from multiple ethical views. The reported study evaluates the perceived morality of the robot. The participants had a conversation with the robot on whether lying is permissible in a given situation. Immanuel first signaled uncertainty about whether lying is right or wrong in the situation, then disagreed with the participant's view, and finally asked for justification. The results indicate that participants with a higher tendency toward utilitarian judgments are initially more certain about their view than participants with a higher tendency toward deontological judgments. These differences vanish towards the end of the dialogue. Lying is defended and argued against by both utilitarian and deontologically oriented participants. The diversity of the reported arguments points to the variety of human moral judgment and calls for more fine-grained representations of moral reasons for social robots.
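
The three-phase dialogue reported above (signal uncertainty, disagree, ask for justification) amounts to a simple fixed policy. The sketch below is a paraphrase of that structure, not the Immanuel system's code, and the prompt wordings are invented.

```python
# Illustrative three-phase dialogue policy (assumed wording, not the study's scripts).
PHASES = [
    ("signal_uncertainty", "I am not sure whether lying is right or wrong here."),
    ("disagree",           "I see this situation differently from you."),
    ("ask_justification",  "What reasons do you have for your view?"),
]

def run_dialogue(get_participant_turn):
    """Walk through uncertainty -> disagreement -> request for justification."""
    for phase, robot_line in PHASES:
        print(f"[{phase}] Robot: {robot_line}")
        get_participant_turn()  # collect (and, in a real study, record) the reply

replies = iter(["Lying is wrong here.", "Maybe in an emergency.", "It protects someone."])
run_dialogue(lambda: print("Participant:", next(replies)))
```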

Artificial Liars: Why Computers Will (Necessarily) Deceive Us and Each Other

In human-computer interaction, computer-supported cooperation and organisation, computer-mediated commerce, intelligent databases, teams of robots, etc., there will be purposively deceiving computers. In particular, within the Agent-based paradigm we will have “deceiving agents”. Several kinds of deception will be present in interaction with the user, among people via computer, or among artificial agents, not only for malicious reasons (war, commerce, fraud, etc.) but also out of goodwill and in our interest. Social control, trust, and moral aspects of artificial societies will be the focus of theoretical work as well as of implementation. Is there any special affinity between AI and deception?

When is it permissible for artificial intelligence to lie? A trust-based approach

ArXiv, 2021

Conversational Artificial Intelligence (AI) used in industry settings can be trained to closely mimic human behaviors, including lying and deception. However, lying is often a necessary part of negotiation. To address this, we develop a normative framework for when it is ethical or unethical for a conversational AI to lie to humans, based on whether there is what we call “invitation of trust” in a particular scenario. Importantly, cultural norms play an important role in determining whether there is invitation of trust across negotiation settings, and thus an AI trained in one culture may not be generalizable to others. Moreover, individuals may have different expectations regarding the invitation of trust and propensity to lie for human vs. AI negotiators, and these expectations may vary across cultures as well. Finally, we outline how a conversational chatbot can be trained to negotiate ethically by applying autoregressive models to large dialog and negotiation datasets. As busin...
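
The “invitation of trust” criterion can be read as a gate that a negotiating agent consults before producing a strategically false statement. A minimal sketch follows; the statement types, culture labels, and norm table are invented for illustration rather than taken from the paper.

```python
# Whether statements of a given type invite trust in a given negotiation culture
# (illustrative entries; the real framework would derive these from cultural norms).
INVITES_TRUST = {
    ("price_aspiration", "us_commercial"): False,  # "final offer" talk is expected puffery
    ("product_defect",   "us_commercial"): True,   # factual claims about the good invite trust
}

def may_bluff(statement_type: str, culture: str) -> bool:
    """Permit a strategically false statement only where no trust is invited.
    Unknown cases default to the conservative answer: no bluffing."""
    return not INVITES_TRUST.get((statement_type, culture), True)

print(may_bluff("price_aspiration", "us_commercial"))  # True: conventional bluffing
print(may_bluff("product_defect", "us_commercial"))    # False: would breach invited trust
```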
