Perception of Social Intelligence in Robots Performing False-Belief Tasks

Reading the Mind in Robots: How Theory of Mind Ability Alters Mental State Attributions During Human-Robot Interactions

Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2019

This study examined how human-robot interaction is influenced by individual differences in theory of mind ability. Participants engaged in a hallway navigation task with a robot over a series of trials. The robot's display and its proxemics behavior were manipulated, and participants made mental state attributions across trials. Participants' theory of mind ability was also assessed. Results show that proxemics behavior and robotic display characteristics differentially influenced the degree to which individuals perceived the robot when making mental state attributions about self or other. Additionally, theory of mind ability interacted with proxemics and display characteristics. The findings illustrate the importance of understanding individual differences in higher-level cognition. As robots become more social, the need to understand social cognitive processes in human-robot interactions increases. Results are discussed in the context of how individual differences and social si...

Of like mind: The (mostly) similar mentalizing of robots and humans

Technology, Mind, and Behavior, 2021

Mentalizing is the process of inferring others' mental states, and it contributes to an inferential system known as Theory of Mind (ToM), a system that is critical to human interactions because it facilitates sense-making and the prediction of future behaviors. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and are increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020

Trust is a critical issue in human-robot interaction (HRI), as it is central to people's willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration while studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans' trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the robot's price evaluations of common objects. The participants' willingness to change their own price judgments of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented as possessing high-level ToM abilities were trusted more than robots presented with low-level ToM skills.

Measuring the Perceived Social Intelligence of Robots

ACM Transactions on Human-Robot Interaction

Robotic social intelligence is increasingly important. However, measures of human social intelligence omit basic skills, and robot-specific scales do not focus on social intelligence. We combined human-robot interaction concepts of beliefs, desires, and intentions with psychology concepts of behaviors, cognitions, and emotions to create 20 Perceived Social Intelligence (PSI) Scales to comprehensively measure perceptions of robots with a wide range of embodiments and behaviors. Participants rated humanoid and non-humanoid robots interacting with people in five videos. Each scale had one factor and high internal consistency, indicating that each measures a coherent construct. Scales capturing perceived social information processing skills (appearing to recognize, adapt to, and predict behaviors, cognitions, and emotions) and scales capturing perceived skills for identifying people (appearing to identify humans, individuals, and groups) correlated strongly with social competence and constit...

Theory of Mind for a Humanoid Robot

Autonomous Robots, 2002

If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a "theory of mind." This paper presents the theories of Leslie and Baron-Cohen [2] on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.

We perceive a mind in a robot when we help it

PLOS ONE

People sometimes perceive a mind in inorganic entities like robots. Psychological research has shown that mind perception correlates with moral judgments and that immoral behaviors (i.e., intentional harm) facilitate mind perception toward otherwise mindless victims. We conducted a vignette experiment (N = 129; mean age = 21.8 ± 6.0 years) concerning human-robot interactions and extended previous findings in two ways. First, mind perception toward the robot was facilitated when it received a benevolent behavior, although only when participants took the perspective of an actor. Second, imagining a benevolent interaction led to more positive attitudes toward the robot, and this effect was mediated by mind perception. These results help predict how people may react in future human-robot interactions and have implications for how to design future social rules about the treatment of robots.

Mind Perception and Social Robots: The Role of Agent Appearance and Action Types

Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021

Mind perception is the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, one important question for HRI is to what extent we attribute mental states to these agents and under which conditions we do so. In the present study, we investigated the effect of a robot's appearance and the type of action it performs on mind perception. Participants rated videos of two robots differing in appearance (one metallic, the other human-like), each of which performed four different actions (manipulating an object, verbal communication, non-verbal communication, and an action depicting a biological need), on the Agency and Experience dimensions. Our results show that the type of action the robot performs affects Agency scores: when the robot performs human-specific actions, such as communicative actions or an action depicting a biological need, it is rated as having more agency than when it performs a manipulative action. The appearance of the robot, on the other hand, had no effect on Agency or Experience scores. Overall, our study suggests that the behavioral skills we build into social robots may be quite important to the extent to which we attribute mental states to them.

Human employees and service robots in the service encounter and the role of attribution of theory of mind

Journal of Retailing and Consumer Services, 2024

A frequently made assumption, supported in a large number of empirical studies, is that customer satisfaction stemming from a service encounter influences the customer's subsequent word-of-mouth activities. The present study reexamines this association with respect to both human service employees and service robots (which are expected to become more common in service encounters in the near future). First, it is assumed that the customer's attribution of theory of mind to a service agent is an important source of information for the formation of a satisfaction assessment; indeed, the agent's theory of mind is assumed to be a prerequisite for understanding the customer's needs. Second, in contrast to many existing studies, word-of-mouth is captured in terms of the valence of what customers actually say (as opposed to various forms of intentions to engage in word-of-mouth, which represent a dominant contemporary operationalization of word-of-mouth). A between-subjects experiment was conducted in which a service agent's identity (service robot vs. human) and service performance (poor vs. good) were the manipulated factors. The results show that both factors influenced the attribution of theory of mind to the agent, and that attribution of theory of mind enhanced customer satisfaction. The results also show that customer satisfaction affected word-of-mouth content in a valence-congruent way.