When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

Robot Mindreading and the Problem of Trust

AISB 2021 Proceedings, 2021

Robot mindreading is the attribution of beliefs, desires, and intentions to robots. Assuming that humans engage in robot mindreading, and assuming that attributing intentional states to robots fosters trust towards them, the question is whether the development of mind-readable robots is compatible with the goal of enhancing transparency and understanding in automatic decision making. There is a risk that features that enhance mind-readability will make the mechanisms that determine automatic decisions even more opaque than they already are. And current strategies to eliminate opacity do not enhance mind-readability. This paper discusses different ways of analyzing this apparent trade-off and suggests that a possible solution is to adopt tolerable degrees of opacity that depend on pragmatic factors connected to the level of trust required for the intended uses of the robot.

Trust and Cooperation in Human-Robot Decision Making

2016

Trust plays a key role in social interactions, particularly when the decisions we make depend on the people we face. In this paper, we use game theory to explore whether a person's decisions are influenced by the type of agent they interact with: human or robot. By adopting a coin entrustment game, we quantitatively measure trust and cooperation to see whether such phenomena emerge differently when a person believes they are playing against a robot rather than another human. We found that while people cooperate with other humans and robots at a similar rate, they grow to trust robots more completely than humans. As a possible explanation for these differences, our survey results suggest that participants perceive humans as having a faculty for feelings and sympathy, whereas they perceive robots as being more precise and reliable.
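The abstract does not spell out the rules of the coin entrustment game, so the following is a minimal sketch under stated assumptions: each round starts from a fixed endowment (here 10 coins, a made-up figure), trust is read off the number of coins entrusted, and cooperation off the fraction returned.

```python
# Minimal sketch of scoring one round of a coin entrustment game.
# Assumptions (not from the paper): a 10-coin endowment, trust measured
# as the share of the endowment entrusted, cooperation as the share of
# entrusted coins returned.

def play_round(coins_entrusted: int, coins_returned: int, endowment: int = 10):
    """Score one round: returns (trust, cooperation), each in [0, 1]."""
    if not 0 <= coins_entrusted <= endowment:
        raise ValueError("entrusted coins must be within the endowment")
    if not 0 <= coins_returned <= coins_entrusted:
        raise ValueError("cannot return more coins than were entrusted")
    trust = coins_entrusted / endowment
    cooperation = coins_returned / coins_entrusted if coins_entrusted else 0.0
    return trust, cooperation

# Example: a participant entrusts 7 of 10 coins; the partner returns all 7.
print(play_round(7, 7))  # (0.7, 1.0) -> high trust, full cooperation
```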

Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online

Frontiers in Psychology, 2020

Robotization and artificial intelligence (AI) are expected to change societies profoundly. Trust is an important factor in human–technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first encounters. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated how the hypothetical opponents were described: as either AI or robots. These were compared with control-group opponents identified only by a human name or a nickname. Participants (N = 1077) lived in the United States. Describing opponents as robots or AI did not impact participants' trust toward them. The robot called jdrx894 was the most trusted opponent: opponents named "jdrx894" were trusted more than opponents called "Michael." Further analysis showed that having a degree in technology or en...
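The abstract leaves the payoff rules implicit; the sketch below uses classic investment-style trust game parameters (a 10-unit endowment and a tripling multiplier, both assumptions rather than figures from this study), with the opponent label as the manipulated factor and the amount sent as the trust measure.

```python
# Sketch of one investment-style trust game trial with a labeled opponent.
# The endowment (10) and multiplier (3) are conventional trust-game values,
# not parameters reported in this study.

ENDOWMENT = 10
MULTIPLIER = 3

def trust_game_trial(amount_sent: int, fraction_returned: float, label: str):
    """One trial: trust is operationalized as the amount sent to `label`."""
    assert 0 <= amount_sent <= ENDOWMENT
    pot = amount_sent * MULTIPLIER             # the transfer is multiplied
    returned = round(pot * fraction_returned)  # opponent's back-transfer
    return {"opponent": label,
            "trust (amount sent)": amount_sent,
            "trustor payoff": ENDOWMENT - amount_sent + returned}

# Hypothetical trials using the two opponent labels mentioned in the study.
print(trust_game_trial(6, 0.5, "jdrx894"))
print(trust_game_trial(4, 0.5, "Michael"))
```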

If you trust me, I will trust you: the role of reciprocity in human-robot trust

arXiv (Cornell University), 2021

Humans are constantly influenced by others' behavior and opinions. Importantly, social influence among humans is shaped by reciprocity: we are more likely to follow the advice of someone who has taken our opinions into consideration. In the current work, we investigate whether reciprocal social influence can emerge while interacting with a social humanoid robot. In a joint task, a human participant and a humanoid robot made perceptual estimates and could then overtly modify them after observing the partner's judgment. Results show that endowing the robot with the ability to express and modulate its own level of susceptibility to the human's judgments was a double-edged sword. On the one hand, participants lost confidence in the robot's competence when the robot followed their advice; on the other hand, participants were unwilling to disclose their lack of confidence to the susceptible robot, suggesting the emergence of reciprocal mechanisms of social influence supporting human-robot collaboration.
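The abstract does not name the exact influence metric; a common choice in such joint estimation paradigms is the shift toward the partner's judgment, sketched below under the assumption of scalar estimates.

```python
# Sketch of a standard "shift toward the partner" influence index for a
# joint estimation task. This is a conventional measure, not necessarily
# the one used in the paper.

def influence_index(initial: float, final: float, partner: float) -> float:
    """0.0 = partner ignored; 1.0 = partner's estimate fully adopted."""
    disagreement = partner - initial
    if disagreement == 0:
        return 0.0  # no disagreement, so no influence to measure
    return (final - initial) / disagreement

# Example: the participant first estimates 42, the robot says 50,
# and the participant revises to 46.
print(influence_index(42, 46, 50))  # 0.5 -> moved halfway toward the robot
```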

Evaluating People’s Perceptions of Trust in a Robot in a Repeated Interactions Study

Social Robotics, 2020

Trust has been established to be a key factor in fostering human-robot interactions. However, trust can change over time according to different factors, including a breach of trust due to a robot's error. In this exploratory study, we observed people's interactions with a companion robot over three weeks in a real house adapted for human-robot interaction experimentation. The interactions happened in six scenarios in which a robot performed different tasks under two different conditions. Each condition included fourteen tasks performed by the robot, either correctly or with severe-consequence errors occurring on the first or the last day of interaction. At the end of each experimental condition, participants were presented with an emergency scenario to evaluate their trust in the robot. We evaluated participants' trust in the robot by observing their decision to trust it during the emergency scenario and by collecting their views through questionnaires. We concluded that the timing of a severe-consequence error performed by the robot correlates with the corresponding loss of the human's trust in the robot. In particular, people's trust is strongly shaped by their initial impressions of the robot.
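To see why error timing can matter, consider a toy primacy-weighted trust update (an illustration of the stated finding, not the authors' model): if early observations carry more weight, an error on the first day depresses trust more than the same error on the last day.

```python
# Toy illustration (not the authors' model) of a primacy effect: earlier
# task outcomes get geometrically larger weights, so an early severe error
# lowers the resulting trust score more than a late one.

def trust_after(outcomes, primacy_weight: float = 2.0) -> float:
    """Weighted mean of outcomes (1 = correct task, 0 = severe error)."""
    n = len(outcomes)
    weights = [primacy_weight ** (n - 1 - i) for i in range(n)]
    return sum(w * o for w, o in zip(weights, outcomes)) / sum(weights)

early_error = [0] + [1] * 13   # severe error on the first of fourteen tasks
late_error = [1] * 13 + [0]    # severe error on the last of fourteen tasks

print(trust_after(early_error))  # ~0.50: the early error dominates
print(trust_after(late_error))   # ~1.00: the late error is discounted
```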

The Influence of a Robot's Embodiment on Trust

Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17, 2017

Trust, taken from the human perspective, is an essential factor that determines the use of robots as companions or care robots, especially given the long-term character of the interaction. This study investigated the influence of a robot's embodiment on people's trust over a prolonged period of time. The participants engaged in a collaborative task with either a physical robot or a virtual agent in 10 sessions spread over a period of 6 weeks. While our results showed that the level of trust was not influenced by the type of embodiment, time was an important factor, with users' trust increasing significantly across sessions. Our results raise new questions on the role of embodiment in trust and contribute to the growing research in the area of trust in human-robot interaction.

A Novel Cognitive Approach for Measuring the Trust in Robots

Journal of Information Technology Research, 2019

One of the major challenges in human-robot interaction is to determine the trustworthiness of the robot. In order to enhance and augment human capabilities by establishing a human-robot partnership, it is important to evaluate the reliability and dependability of the robots for the specific tasks. The trust relationship between the human and robot becomes critical especially in cases where there is strong cohesion between humans and robots. In this article, a cognition-based trust model has been developed which measures the trust and other related cognitive parameters of the robot. This trust model has been applied to a customized robot which performs path planning tasks using three different algorithms. A simulation of the model has been run to evaluate the trust in the robot for the three algorithms. The results show that with each learning cycle of each method, the trust in the robot increases. An empirical evaluation has also been done to validate the model.
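The abstract gives the qualitative result (trust grows with each learning cycle) but not the update rule; a minimal sketch, assuming a simple convergent update toward each algorithm's success rate, with hypothetical algorithm names and rates:

```python
# Illustrative sketch only: the article's cognition-based trust model is
# not specified in the abstract. Trust here moves toward each algorithm's
# success rate on every learning cycle; names and rates are hypothetical.

def update_trust(trust: float, success_rate: float, gain: float = 0.3) -> float:
    """One learning cycle: move trust part-way toward the success rate."""
    return trust + gain * (success_rate - trust)

algorithms = {"A*": 0.9, "RRT": 0.8, "PRM": 0.7}  # hypothetical path planners

for name, success in algorithms.items():
    trust = 0.2                      # low initial trust
    for _ in range(5):               # five learning cycles
        trust = update_trust(trust, success)
    print(f"{name}: trust after 5 cycles = {trust:.2f}")
```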

Promises and trust in human–robot interaction

Scientific Reports, 2021

Understanding human trust in machine partners has become imperative due to the widespread use of intelligent machines in a variety of applications and contexts. The aim of this paper is to investigate whether human beings trust a social robot (i.e. a human-like robot that embodies emotional states, empathy, and non-verbal communication) differently than other types of agents. To do so, we adapt the well-known economic trust game proposed by Charness and Dufwenberg (2006) to assess whether receiving a promise from a robot increases human trust in it. We find that receiving a promise from the robot increases the trust of the human in it, but only for individuals who perceive the robot as very similar to a human being. Importantly, we observe a similar pattern in choices when we replace the humanoid counterpart with a real human, but not when it is replaced by a computer-box. Additionally, we investigate participants' psychophysiological reaction in terms of cardiovascular and electrodermal ...
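For context, the Charness and Dufwenberg (2006) game the study adapts is a hidden-action trust game in which the trustee can send a cheap-talk promise before the trustor's choice. The sketch below uses the payoffs of the original paper's baseline treatment as commonly cited; the robot study's exact parameters may differ.

```python
# Sketch of the Charness-Dufwenberg (2006) hidden-action trust game
# structure. Payoffs follow the commonly cited (5,5) baseline treatment;
# the adaptation in this study may use different values.

import random

def cd_trust_game(trustor_in: bool, trustee_rolls: bool):
    """Returns (trustor_payoff, trustee_payoff) for one play of the game."""
    if not trustor_in:
        return (5, 5)    # trustor opts out: safe payoff for both
    if not trustee_rolls:
        return (0, 14)   # trustee breaks the (cheap-talk) promise
    # Trustee rolls the die: the trustor is paid with probability 5/6.
    return (12 if random.random() < 5 / 6 else 0, 10)

# Trust is measured by the rate of "in" choices after receiving a promise.
print(cd_trust_game(trustor_in=True, trustee_rolls=True))
```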

FIDES: How Emotions and Small Talks May Influence Trust in an Embodied vs. Non-embodied Robot

2017

Trust is known to be a complex social-emotional concept, and its formation is highly affected by non-verbal behavior. Social robots, like any other social entities, are expected to maintain a level of trustworthiness during their interactions with humans. In this sense, we have examined the influence of a set of factors, including emotional representation, performing small talk, and embodiment, on the way people infer the trustworthiness of a robot. To examine these factors, we performed different experiments using two robots, NAO and Emys, with and without physical embodiment respectively. To measure trust levels, we used two different metrics: a trust questionnaire and the amount of donations the participants would make. The results suggest that these factors significantly influence the level of trust: people tend to trust Emys differently depending on its facial expressions and whether it makes small talk, and people tend to donate differently to NAO when i...
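As a sketch of how the donation metric could be compared across the manipulated factors, with entirely made-up donation values:

```python
# Hypothetical comparison of mean donations across experimental conditions.
# All donation values are invented for illustration; only the factors
# (robot, emotional expression, small talk) come from the study.

from statistics import mean

donations = {
    ("Emys", "expressive face", "small talk"): [8, 7, 9, 6],
    ("Emys", "neutral face", "no small talk"): [4, 5, 3, 5],
    ("NAO", "embodied", "small talk"): [7, 6, 8, 7],
}

for condition, values in donations.items():
    print(condition, "-> mean donation:", mean(values))
```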

Trust as indicator of robot functional and social acceptance. An experimental study on user conformation to the iCub's answers

arXiv (Cornell University), 2015

To investigate the functional and social acceptance of a humanoid robot, we carried out an experimental study with 56 adult participants and the iCub robot. Trust in the robot has been considered as a main indicator of acceptance in decision-making tasks characterized by perceptual uncertainty (e.g., evaluating the weight of two objects) and socio-cognitive uncertainty (e.g., evaluating which is the most suitable item in a specific context), and measured by the participants' conformation to the iCub's answers to specific questions. In particular, we were interested in understanding whether specific (i) user-related features (i.e., desire for control), (ii) robot-related features (i.e., attitude towards social influence of robots), and (iii) context-related features (i.e., collaborative vs. competitive scenario) may influence their trust towards the iCub robot. We found that participants conformed more to the iCub's answers when their decisions were about functional issues than when they were about social issues. Moreover, the few participants conforming to the iCub's answers for social issues also conformed less for functional issues. Trust in the robot's functional savvy thus does not seem to be a prerequisite for trust in its social savvy. Finally, desire for control, attitude towards social influence of robots, and type of interaction scenario did not influence trust in the iCub. Results are discussed in relation to the methodology of HRI research.
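The conformation measure lends itself to a simple sketch: trust is read off the rate at which a participant switches to the iCub's answer on trials where they initially disagreed. Trial records below are hypothetical.

```python
# Sketch of the conformation measure: the share of disagreement trials on
# which the participant adopted the iCub's answer. Trial data are made up.

def conformation_rate(trials) -> float:
    """Trials are dicts with the participant's own answer, the iCub's
    answer, and the participant's final answer; only trials with an
    initial disagreement count toward the rate."""
    disagreements = [t for t in trials if t["own"] != t["icub"]]
    if not disagreements:
        return 0.0
    conformed = sum(1 for t in disagreements if t["final"] == t["icub"])
    return conformed / len(disagreements)

functional = [{"own": "A", "icub": "B", "final": "B"},
              {"own": "A", "icub": "B", "final": "A"}]
social = [{"own": "A", "icub": "B", "final": "A"}]

print("functional:", conformation_rate(functional))  # 0.5
print("social:", conformation_rate(social))          # 0.0
```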