Robot Presence and Human Honesty

Robot Presence and Human Honesty: Experimental Evidence

Robots are predicted to serve in environments in which human honesty is important, such as the workplace, schools, and public institutions. Can the presence of a robot facilitate honest behavior? In this paper, we describe an experimental study evaluating the effects of robot social presence on people's honesty. Participants completed a perceptual task structured so that they could earn more money by not complying with the experiment's instructions. We compare three between-subjects conditions: completing the task alone in a room; completing it with a non-monitoring human present; and completing it with a non-monitoring robot present. The robot is a new expressive social head capable of 4-DoF head movement and screen-based eye animation, designed and built specifically for this research to convey social presence but not monitoring. We find that people cheat in all three conditions, but cheat less, and to a similar degree, when a human or a robot is in the room than when they are alone. We did not find differences in the perceived authority of the human and the robot, but people reported feeling significantly less guilty after cheating in the presence of a robot than in the presence of a human. These findings have implications for the use of robots in monitoring and supervising roles in environments in which honesty is key.

No fair!!: an interaction with a cheating robot

Proceedings of the 5th ACM/…, 2010

Using a humanoid robot and a simple children's game, we examine the degree to which variations in behavior result in attributions of mental state and intentionality. Participants play the well-known children's game "rock-paper-scissors" against a robot that either plays fairly or cheats in one of two ways. In the "verbal cheat" condition, the robot announces the wrong outcome on several rounds that it loses, declaring itself the winner. In the "action cheat" condition, the robot changes its gesture after seeing its opponent's play. We find that participants display a greater level of social engagement and make greater attributions of mental state when playing against the robot in the conditions in which it cheats.

Social robot deception and the culture of trust

Paladyn, 2021

Human beings are deeply social, and both evolutionary traits and cultural constructs encourage cooperation based on trust. Social robots interject themselves into human social settings, and they can be used for deceptive purposes. Robot deception is best understood by examining the effects of deception on the recipient of deceptive actions, and I argue that the long-term consequences of robot deception should receive more attention, as it has the potential to challenge human cultures of trust and degrade the foundations of human cooperation. In conclusion: regulation, ethical conduct by producers, and raised general awareness of the issues described in this article are all required to avoid the unfavourable consequences of a general degradation of trust.

Robot Pressure: The Impact of Robot Eye Gaze and Lifelike Bodily Movements upon Decision-Making and Trust

Social Robotics, 2014

Between people, eye gaze and other forms of nonverbal communication can influence trust. We hypothesised that similar effects would occur during human-robot interaction, predicting that a humanoid robot's eye gaze and lifelike bodily movements (eye-tracking movements and simulated "breathing") would increase participants' likelihood of seeking and trusting the robot's opinion in a cooperative visual tracking task. However, we instead found significant interactions between robot gaze and task difficulty, indicating that robot gaze had a positive impact upon trust for difficult decisions and a negative impact for easier decisions. Furthermore, a significant effect of robot gaze was found on task performance, with gaze improving participants' performance on easy trials but hindering performance on difficult trials. Participants also responded significantly faster when the robot looked at them. Results suggest that robot gaze exerts "pressure" upon participants, causing audience effects similar to social facilitation and inhibition. Lifelike bodily movements had no significant effect upon participant behaviour.

Smart Human, Smarter Robot: How Cheating Affects Perceptions of Social Agency

Human-robot interaction studies and human-human interaction studies often obtain similar findings. When manipulating high-level apparent cognitive cues in robots, however, this is not always the case. We investigated to what extent the type of agent (human or robot) and the type of behavior (honest or dishonest) affected perceived features of agency and trustworthiness in the context of a competitive game. We predicted that the human and robot in the dishonest manipulation would receive lower attributions of trustworthiness than the human and robot in the honest manipulation, and that the robot would be perceived as less intelligent and intentional than the human overall. The human and robot in the dishonest manipulation received lower attributions of trustworthiness as predicted, but, surprisingly, the robot was perceived to be more intelligent than the human.

If You Cheat, I Cheat: Cheating on a Collaborative Task with a Social Robot

2021 30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN), 2021

Robots may soon play a role in higher education by augmenting learning environments and managing interactions between instructors and learners. Little, however, is known about how the presence of robots in the learning environment will influence academic integrity. This study therefore investigates whether and how college students cheat while engaged in a collaborative sorting task with a robot. We employed a 2×2 factorial design to examine the effects of cheating exposure (exposure to cheating or no exposure) and task clarity (clear or vague rules) on college student cheating behaviors while interacting with a robot. Our study finds that prior exposure to cheating on the task significantly increases the likelihood of cheating, while the tendency to cheat is not affected by the clarity of the task rules. These results suggest that normative behavior by classmates may strongly influence the decision to cheat while engaged in an instructional experience with a robot.

Trust and Social Engineering in Human Robot Interaction: Will a Robot Make You Disclose Sensitive Information, Conform to Its Recommendations or Gamble?

IEEE Robotics and Automation Letters, 2018

Robots are becoming widespread in society, and issues such as information security and overtrust in them are gaining increasing relevance. This research aims at giving an insight into how trust towards robots could be exploited for the purpose of social engineering. Drawing on Mitnick's model, a well-known social engineering framework, an interactive scenario with the humanoid robot iCub was designed to emulate a social engineering attack. At first, iCub attempted to collect the kind of personal information usually gathered by social engineers by asking a series of private questions. Then, the robot tried to develop trust and rapport with participants by offering reliable clues during a treasure hunt game. At the end of the treasure hunt, the robot tried to exploit the gained trust in order to make participants gamble the money they had won. The results show that people tend to build rapport with and trust toward the robot, resulting in the disclosure of sensitive information, conformity to its suggestions, and gambling.

How Facial Expressions and Small Talk May Influence Trust in a Robot

Social Robotics, 2016

In this study, we address the level of trust that a human being displays during an interaction with a robot under different circumstances. The influencing factors considered are the facial expressions of the robot during the interaction and its ability to make small talk. To examine these influences, we ran an experiment in which a robot tells a story to a participant and then asks for help in the form of donations. The experiment was implemented in four different scenarios in order to examine the effects of the two factors on trust. The results showed that trust was highest when the robot opened with small talk and displayed facial expressions congruent with the emotions expected from the story.

Avoiding the Abject and Seeking the Script: Perceived Mind, Morality, and Trust in a Persuasive Social Robot

ACM Transactions on Human-Robot Interaction

Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots’ persuasive impact; however, little is known of how perceptions of two key human capacities—mind and morality—function in robots’ persuasive potential. This experiment tests the possibility that perceived robot mind and morality will correspond with greater persuasive impact, moderated by relational trustworthiness for a moral appeal and by capacity trustworthiness for a logical appeal. Via an online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles to access important online spaces—either on grounds that it is logical or moral to do so. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance, and (b) perceiving agentic capacity diminis...

The Influence of a Robot's Embodiment on Trust

Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17, 2017

Trust, taken from the human perspective, is an essential factor that determines the use of robots as companions or care robots, especially given the long-term character of the interaction. This study investigated the influence of a robot's embodiment on people's trust over a prolonged period of time. The participants engaged in a collaborative task with either a physical robot or a virtual agent in 10 sessions spread over a period of 6 weeks. While our results showed that the level of trust was not influenced by the type of embodiment, time was an important factor, with users' trust increasing significantly over the sessions. Our results raise new questions on the role of embodiment in trust and contribute to the growing research in the area of trust in human-robot interaction.