Reading the Mind in Robots: How Theory of Mind Ability Alters Mental State Attributions During Human-Robot Interactions

Supplemental material for Of like mind: The (mostly) similar mentalizing of robots and humans

Technology, Mind, and Behavior, 2021

Mentalizing is the process of inferring others' mental states, and it contributes to an inferential system known as Theory of Mind (ToM), a system critical to human interaction because it facilitates sense-making and the prediction of future behavior. As technological agents like social robots increasingly exhibit hallmarks of intellectual and social agency, and are increasingly integrated into contemporary social life, it is not yet fully understood whether humans hold ToM for such agents. To build on extant research in this domain, five canonical tests that signal implicit mentalizing (white-lie detection, intention inference, facial affect interpretation, vocal affect interpretation, and false-belief detection) were conducted for an agent (an anthropomorphic robot, a machinic robot, or a human) through video-presented (Study 1) and physically copresent (Study 2) interactions. Findings suggest that mentalizing tendencies for robots and humans are more alike than different; however, the use of nonliteral language, copresent interactivity, and reliance on agent-class heuristics may reduce tendencies to mentalize robots.

Theory of Mind for a Humanoid Robot

Autonomous Robots, 2002

If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also about the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a "theory of mind." This paper presents the theories of Leslie and Baron-Cohen [2] on the development of theory of mind in human children and discusses the potential application of both theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.
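The basic skills named above are concrete enough to sketch. As one illustration, a minimal version of the animate/inanimate distinction can be implemented as a motion-cue heuristic: a trajectory is flagged as animate if it shows self-propelled changes in velocity, i.e., accelerations that vary rather than staying constant as they would under passive, ballistic motion. This is only a sketch of one plausible cue; the threshold and the cue itself are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's implementation): classify a 2-D trajectory
# as animate vs. inanimate from one motion cue, self-propelled acceleration.
# Positions are assumed to be sampled at a fixed time step dt.

def is_animate(positions, dt=0.1, accel_threshold=2.0):
    """Flag a trajectory as animate if its acceleration varies strongly,
    suggesting self-propelled motion rather than passive motion."""
    # Finite-difference velocities and accelerations.
    velocities = [((x2 - x1) / dt, (y2 - y1) / dt)
                  for (x1, y1), (x2, y2) in zip(positions, positions[1:])]
    accels = [((vx2 - vx1) / dt, (vy2 - vy1) / dt)
              for (vx1, vy1), (vx2, vy2) in zip(velocities, velocities[1:])]
    # Passive objects have near-constant acceleration (zero, or gravity);
    # animate agents start, stop, and turn on their own.
    mags = [(ax**2 + ay**2) ** 0.5 for ax, ay in accels]
    if not mags:
        return False
    return (max(mags) - min(mags)) > accel_threshold

ball = [(0, t * 1.0) for t in range(10)]                  # constant velocity
walker = [(0, 0), (1, 0), (2, 1), (2, 3), (1, 4), (0, 4),
          (0, 2), (1, 1), (3, 1), (4, 3)]                 # turns and stops
print(is_animate(ball))    # False
print(is_animate(walker))  # True
```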

Perception of Social Intelligence in Robots Performing False-Belief Tasks

2019

This study evaluated how a robot demonstrating Theory of Mind (ToM) influenced human perception of social intelligence and animacy in a human-robot interaction. Data were gathered through an online survey in which participants watched a video depicting a NAO robot either failing or passing the Sally-Anne false-belief task. Participants (N = 60) were randomly assigned to either the Pass or the Fail condition. A Perceived Social Intelligence Survey and the Perceived Intelligence and Animacy subscales of the Godspeed Questionnaire Series (GQS) were used as measures. The GQS was administered before viewing the task to measure participant expectations, and again afterward to test changes in opinion. Our findings show that robots demonstrating ToM significantly increase perceived social intelligence, while robots demonstrating ToM deficiencies are perceived as less socially intelligent.
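For readers unfamiliar with the task, the logic of the Sally-Anne test is easy to make explicit in code. The sketch below uses the standard scenario (not the study's stimuli): passing requires answering from Sally's now-false belief, while a ToM-deficient agent answers from the true world state.

```python
# Minimal sketch of the Sally-Anne false-belief task logic. A passing
# (ToM-capable) observer tracks Sally's belief separately from the world;
# a failing observer answers from the true world state.

world = {"marble": "basket"}    # Sally puts her marble in the basket
sally_belief = dict(world)      # Sally's belief matches the world

# Sally leaves the room; Anne moves the marble. The world changes,
# but Sally's belief does not (she did not observe the move).
world["marble"] = "box"

def answer_with_tom(world, belief):
    """Where will Sally look? Answer from Sally's (false) belief."""
    return belief["marble"]

def answer_without_tom(world, belief):
    """ToM-deficient answer: report the marble's true location."""
    return world["marble"]

print(answer_with_tom(world, sally_belief))     # basket -> passes the task
print(answer_without_tom(world, sally_belief))  # box    -> fails the task
```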

When Would You Trust a Robot? A Study on Trust and Theory of Mind in Human-Robot Interactions

2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 2020

Trust is a critical issue in human-robot interaction (HRI), as it underlies people's willingness to accept and use a non-human agent. Theory of Mind (ToM) has been defined as the ability to understand the beliefs and intentions of others that may differ from one's own. Evidence in psychology and HRI suggests that trust and ToM are interconnected and interdependent concepts, as the decision to trust another agent must depend on our own representation of that entity's actions, beliefs, and intentions. However, very few works take the robot's ToM into consideration while studying trust in HRI. In this paper, we investigated whether exposure to the ToM abilities of a robot could affect humans' trust towards it. To this end, participants played a Price Game with a humanoid robot (Pepper) that was presented as having either low-level or high-level ToM. Specifically, the participants were asked to accept the price evaluations of common objects presented by the robot. The willingness of the participants to change their own price judgement of the objects (i.e., to accept the price the robot suggested) was used as the main measure of trust towards the robot. Our experimental results showed that robots presented with high-level ToM abilities were trusted more than robots presented with low-level ToM skills.
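The behavioral trust measure described above reduces to a simple proportion. The sketch below is illustrative only (the variable names and data are ours, not the study's): trust is operationalized as the fraction of trials on which the participant abandons their own price and adopts the robot's suggestion.

```python
# Minimal sketch (illustrative, not the study's analysis code): operationalize
# trust as the proportion of trials on which a participant abandons their own
# price judgement and adopts the robot's suggested price.

def trust_score(trials):
    """Each trial is a tuple: (own_price, robot_price, final_price)."""
    adopted = sum(1 for own, robot, final in trials
                  if final == robot and final != own)
    return adopted / len(trials)

# One hypothetical participant over five objects.
trials = [
    (10, 15, 15),   # adopted the robot's price
    (20, 25, 20),   # kept their own judgement
    (8, 5, 5),      # adopted the robot's price
    (30, 40, 35),   # compromised: counts as not adopting
    (12, 18, 18),   # adopted the robot's price
]
print(trust_score(trials))  # 0.6
```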

Competing with or Against Cozmo, the Robot: Influence of Interaction Context and Outcome on Mind Perception

International Journal of Social Robotics, 2020

With the increasing integration of robots into our daily lives, people find their own ways of normalizing their interactions with artificial agents, one of which is attributing mind to them. Research has shown that attributing mind to an artificial agent improves the flow of the interaction and alters behavior following it. However, little is known about the influence of the interaction context and the outcome of the interaction. Addressing this gap in the literature, we explored the influence of Interaction Context (cooperation vs. competition) and Outcome (win vs. lose) on the levels of mind attributed to an artificial agent. To that end, we used an interactive game consisting of trivia questions, played between teams of human participants and the robot Cozmo. We found that in the cooperation condition, those who lost as a team ascribed greater levels of mind to the agent than those who won as a team. However, participants who competed against the robot and won attributed greater levels of mind to the agent than those who cooperated with it and won as a team. These results suggest that people attribute mind to artificial agents in a self-serving way, depending on the interaction context and outcome.
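The 2 x 2 design makes the analysis easy to summarize. A minimal sketch with hypothetical data (not the study's): mean attributed-mind scores are computed per Interaction Context x Outcome cell, which is where the self-serving pattern would show up.

```python
# Minimal sketch (hypothetical data, not the study's): mean attributed-mind
# score per cell of the 2 x 2 design (Interaction Context x Outcome).
from collections import defaultdict

# One (context, outcome, attributed_mind_score) tuple per participant.
ratings = [
    ("cooperation", "win", 3.1), ("cooperation", "win", 2.8),
    ("cooperation", "lose", 4.2), ("cooperation", "lose", 4.0),
    ("competition", "win", 4.5), ("competition", "win", 4.1),
    ("competition", "lose", 3.0), ("competition", "lose", 3.3),
]

cells = defaultdict(list)
for context, outcome, score in ratings:
    cells[(context, outcome)].append(score)

for cell, scores in sorted(cells.items()):
    print(cell, round(sum(scores) / len(scores), 2))
# A crossover across these cell means (cooperate/lose high, compete/win high)
# is the pattern that would suggest self-serving mind attribution.
```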

Mind Perception and Social Robots: The Role of Agent Appearance and Action Types

Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021

Mind perception is considered to be the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, one important question for HRI is to what extent we attribute mental states to these agents and under what conditions we do so. In the present study, we investigated the effect of a robot's appearance and the type of action it performs on mind perception. Participants rated videos of two robots differing in appearance (one metallic, the other human-like), each of which performed four different actions (manipulating an object, verbal communication, non-verbal communication, and an action depicting a biological need), on the Agency and Experience dimensions. Our results show that the type of action the robot performs affects Agency scores: when the robot performs human-specific actions, such as communicative actions or an action depicting a biological need, it is rated as having more agency than when it performs a manipulative action. The appearance of the robot, on the other hand, had no effect on Agency or Experience scores. Overall, our study suggests that the behavioral skills we build into social robots could be quite important to the extent to which we attribute mental states to them.
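Scoring on the Agency and Experience dimensions typically means averaging the item-level ratings that load on each dimension, following the two-dimensional account of mind perception the study draws on. A minimal sketch, with hypothetical item names and ratings (the exact item set is an assumption, not taken from the paper):

```python
# Minimal sketch (hypothetical items and ratings): score a robot video on the
# Agency and Experience dimensions by averaging the items loading on each.

AGENCY_ITEMS = ["self_control", "planning", "memory", "communication"]
EXPERIENCE_ITEMS = ["hunger", "fear", "pleasure", "pain"]

def dimension_scores(item_ratings):
    """item_ratings: dict mapping item name -> rating (e.g., 1-7 Likert)."""
    agency = sum(item_ratings[i] for i in AGENCY_ITEMS) / len(AGENCY_ITEMS)
    experience = (sum(item_ratings[i] for i in EXPERIENCE_ITEMS)
                  / len(EXPERIENCE_ITEMS))
    return {"Agency": agency, "Experience": experience}

# One participant's ratings of one robot video.
ratings = {"self_control": 5, "planning": 6, "memory": 5, "communication": 6,
           "hunger": 2, "fear": 1, "pleasure": 2, "pain": 1}
print(dimension_scores(ratings))  # {'Agency': 5.5, 'Experience': 1.5}
```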

A Cognitive Architecture Incorporating Theory of Mind in Social Robots towards Their Personal Assistance at Home

2016

Recent studies show that robots are still far from being long-term companions in our daily lives. Taking an interdisciplinary approach, this position paper addresses this problem and suggests guidelines for developing a cognitive architecture for social robots that assures their long-term personal assistance at home. Following these guidelines, we offer a conceptual cognitive architecture that enables assistant robots to autonomously create cognitive representations of the individuals they care for. Our proposed architecture places a Theory of Mind approach within a metacognitive process, first to empathize with and learn about humans, then to guide the robot's high-level decision-making accordingly. These decisions evaluate, regulate, and control the robot's cognitive processes towards understanding, validating, and caring for the humans it interacts with, and serving them in a personalized way. Hence, robots deploying this architecture will be trustworthy and flexible, and will generalize to any individual and their needs; ultimately, they will establish a secure attachment with the humans they interact with. Finally, we present a use case for our cognitive architecture to illustrate our conceptual work.
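Because the architecture is conceptual, any code can only be a structural sketch. The skeleton below (class and method names are ours, purely illustrative, not from the paper) shows the loop the abstract describes: a ToM module maintains a model of the cared-for person, and a metacognitive layer uses that model to evaluate and regulate the robot's high-level decisions.

```python
# Structural sketch only (names are illustrative, not from the paper): a ToM
# module feeds a metacognitive layer that regulates high-level decisions.

class TheoryOfMindModule:
    def __init__(self):
        # Beliefs, preferences, and needs of the cared-for person.
        self.user_model = {}

    def update(self, observation):
        # Learn about the person from interaction (placeholder logic).
        self.user_model.update(observation)

    def infer_need(self):
        return self.user_model.get("current_need", "unknown")

class MetacognitiveLayer:
    """Evaluates candidate decisions against the ToM user model."""
    def __init__(self, tom):
        self.tom = tom

    def decide(self, candidate_actions):
        need = self.tom.infer_need()
        # Prefer actions matching the inferred need; fall back to asking.
        for action in candidate_actions:
            if action.get("addresses") == need:
                return action
        return {"name": "ask_user", "addresses": need}

tom = TheoryOfMindModule()
tom.update({"current_need": "medication_reminder"})
layer = MetacognitiveLayer(tom)
actions = [{"name": "play_music", "addresses": "entertainment"},
           {"name": "remind_meds", "addresses": "medication_reminder"}]
print(layer.decide(actions))  # picks 'remind_meds'
```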

A Robot by Any Other Frame: Framing and Behaviour Influence Mind Perception in Virtual but not Real-World Environments

arXiv (Cornell University), 2020

Mind perception in robots has been an understudied construct in human-robot interaction (HRI) compared to similar concepts such as anthropomorphism and the intentional stance. In a series of three experiments, we identify two factors that could potentially influence mind perception and moral concern in robots: how the robot is introduced (framing) and how the robot acts (social behaviour). In the first two online experiments, we show that framing and behaviour each independently influence participants' mind perception. However, when we combined both variables in a subsequent real-world experiment, these effects failed to replicate. We therefore identify a third factor post hoc: the online versus real-world nature of the interactions. After analysing potential confounds, we tentatively suggest that mind perception is harder to influence in real-world experiments, as manipulations are harder to isolate than in virtual experiments, which provide only a slice of the interaction.