Competing with or Against Cozmo, the Robot: Influence of Interaction Context and Outcome on Mind Perception

We perceive a mind in a robot when we help it

PLOS ONE

People sometimes perceive a mind in inorganic entities like robots. Psychological research has shown that mind perception correlates with moral judgments and that immoral behaviors (i.e., intentional harm) facilitate mind perception toward otherwise mindless victims. We conducted a vignette experiment (N = 129; mean age = 21.8 ± 6.0 years) concerning human-robot interactions and extended previous findings in two ways. First, mind perception toward the robot was facilitated when it received a benevolent behavior, although only when participants took the perspective of an actor. Second, imagining a benevolent interaction led to more positive attitudes toward the robot, and this effect was mediated by mind perception. These results help predict how people will react in future human-robot interactions and have implications for the design of future social rules about the treatment of robots.

Attribution of Mental State in Strategic Human-Robot Interactions

Research Square, 2022

Based on an experiment in which human subjects play an iterated prisoner's dilemma paired with either another human or an anthropomorphic robot, the paper investigates whether (and how) the level of mental state that subjects attribute to the robot depends on its "earnestness," i.e., the correspondence between what the robot says and how it behaves after a non-optimal social outcome is reached.

Reading the Mind in Robots: How Theory of Mind Ability Alters Mental State Attributions During Human-Robot Interactions

Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2019

This study examined how human-robot interaction is influenced by individual differences in theory of mind ability. Participants engaged in a hallway navigation task with a robot over a number of trials. The robot's display and proxemics behavior were manipulated, and participants made mental state attributions across trials. Participants' theory of mind ability was also assessed. Results show that proxemics behavior and robotic display characteristics differentially influence the degree to which individuals perceive the robot when making mental state attributions about self or other. Additionally, theory of mind ability interacted with proxemics and display characteristics. The findings illustrate the importance of understanding individual differences in higher-level cognition. As robots become more social, the need to understand social cognitive processes in human-robot interactions increases. Results are discussed in the context of how individual differences and social si...

Smart Human, Smarter Robot: How Cheating Affects Perceptions of Social Agency

Human-robot interaction studies and human-human interaction studies often obtain similar findings. When manipulating high-level apparent cognitive cues in robots, however, this is not always the case. We investigated to what extent the type of agent (human or robot) and the type of behavior (honest or dishonest) affected perceived features of agency and trustworthiness in the context of a competitive game. We predicted that the human and robot in the dishonest manipulation would receive lower attributions of trustworthiness than the human and robot in the honest manipulation, and that the robot would be perceived as less intelligent and intentional than the human overall. The human and robot in the dishonest manipulation received lower attributions of trustworthiness as predicted, but, surprisingly, the robot was perceived to be more intelligent than the human.

Mind Perception and Social Robots: The Role of Agent Appearance and Action Types

Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021

Mind perception is considered to be the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, one important question for HRI is to what extent we attribute mental states to these agents and under which conditions we do so. In the present study, we investigated the effect of a robot's appearance and the type of action it performs on mind perception. Participants rated videos of two robots of different appearance (one metallic, the other human-like), each of which performed four different actions (manipulating an object, verbal communication, non-verbal communication, and an action depicting a biological need), on the Agency and Experience dimensions. Our results show that the type of action the robot performs affects Agency scores: when the robot performs human-specific actions, such as communicative actions or an action depicting a biological need, it is rated as having more agency than when it performs a manipulative action. The appearance of the robot, on the other hand, had no effect on Agency or Experience scores. Overall, our study suggests that the behavioral skills we build into social robots could be quite important to the extent to which we attribute mental states to them.

Mind in the Machine: Perceived Minds Induce Decision Change

2018

Recent research on human-robot interaction has explored whether people's tendency to conform to others extends to artificial agents (Hertz & Wiese, 2016). However, little is known about the extent to which perceiving a robot as having a mind affects people's decisions. Grounded in the theory of mind perception, the current study proposes that artificial agents can induce decision change to the extent that individuals perceive them as having minds. By varying the degree to which robots expressed the ability to act (agency) or feel (experience), we investigated the underlying mechanisms of mind attribution to robots and social influence. Our results show an interactive effect of perceived experience and perceived agency on the social influence induced by artificial agents. The findings provide preliminary insights into autonomous robots' influence on individuals' decisions and form a basis for understanding the underlying dynamics of decision making with robots.

Designing Artificial Agents as Social Companions

Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2017

When we interact with others, we make inferences about their internal states (i.e., intentions, emotions) and use this information to understand and predict their behavior. Reasoning about the internal states of others is referred to as mentalizing and presupposes that we believe our social partners have minds. Seeing mind in others increases trust, prosocial behaviors, and feelings of social connection, and leads to improved joint performance. However, while human agents trigger mind perception by default, artificial agents are not automatically treated as intentional entities and need to be designed to be perceived that way. The panel addresses this issue by discussing how mind attribution to robots and other automated agents can be elicited by design, what the effects of mind perception are on attitudes and performance in human-robot and human-machine interaction, and what behavioral and neuroscientific paradigms can be used to investigate these questions. Application areas covered include ...

Supplemental material for "Of like mind: The (mostly) similar mentalizing of robots and humans"

Technology, Mind, and Behavior, 2021

Mentalizing is the process of inferring others' mental states, and it contributes to an inferential system known as Theory of Mind (ToM), a system that is critical to human interactions as it facilitates sense-making and the prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and are increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

No fair!!: an interaction with a cheating robot

Proceedings of the 5th ACM/…, 2010

Using a humanoid robot and a simple children's game, we examine the degree to which variations in behavior result in attributions of mental state and intentionality. Participants play the well-known children's game "rock-paper-scissors" against a robot that either plays fairly or cheats in one of two ways. In the "verbal cheat" condition, the robot announces the wrong outcome on several rounds that it loses, declaring itself the winner. In the "action cheat" condition, the robot changes its gesture after seeing its opponent's play. We find that participants display a greater level of social engagement and make greater attributions of mental state when playing against the robot in the conditions in which it cheats.

It’s a Match: Task Assignment in Human–Robot Collaboration Depends on Mind Perception

International Journal of Social Robotics

Robots are becoming more available for workplace collaboration, but many questions remain. Are people actually willing to assign collaborative tasks to robots? And if so, exactly which tasks will they assign to what kinds of robots? Here we leverage psychological theories of person-job fit and mind perception to investigate task assignment in human-robot collaborative work. We propose that people will assign robots to jobs based on their "perceived mind," and that people will show predictable social biases in their collaboration decisions. In this study, participants performed an arithmetic task (i.e., calculating differences) and a social task (i.e., judging emotional states), either alone or by collaborating with one of two robots: an emotionally capable robot or an emotionally incapable robot. Rates of deciding to collaborate (i.e., to assign the robot to generate the answer) were high across all trials, especially for tasks that participants found challenging (i.e., the arithmeti...