Attribution of Mental State in Strategic Human-Robot Interactions
Related papers
Behavioral & Experimental Economics eJournal, 2021
This paper presents the results of a behavioral experiment conducted between February 2020 and March 2021 at Università Cattolica del Sacro Cuore, Milan Campus, in which students were matched with either a human or a humanoid robotic partner to play an iterated Prisoner’s Dilemma. The results of a Logit estimation procedure show that subjects are more likely to cooperate with human rather than with robotic partners; that they are more likely to cooperate after receiving a dialogic verbal reaction following a sub-optimal social outcome; and that the effect of the verbal reaction is not dependent on the nature of the partner. Our findings provide new evidence on the effects of verbal communication in strategic frameworks. The results are robust to the exclusion of students of Economics-related subjects, to the inclusion of a set of psychological and behavioral controls, to the way subjects perceive robots’ behavior, and to potential gender biases in human-human interactions.
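The abstract does not report the stake sizes used in the experiment; as a minimal sketch, a standard Prisoner's Dilemma payoff structure (temptation > reward > punishment > sucker) and the "sub-optimal social outcome" trigger can be illustrated as follows, with purely hypothetical payoff values:

```python
# Hypothetical payoffs for one round of an iterated Prisoner's Dilemma.
# Values satisfy T > R > P > S; they are illustrative, not the paper's.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (R, R)
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation (S, T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (P, P)
}

def round_payoff(my_move, partner_move):
    """Return (my payoff, partner payoff) for one round."""
    return PAYOFFS[(my_move, partner_move)]

def is_suboptimal_social_outcome(my_move, partner_move):
    """Any round with at least one defection yields a lower joint
    payoff than mutual cooperation -- the kind of outcome after which
    the study's partners issued a verbal reaction."""
    joint = sum(round_payoff(my_move, partner_move))
    return joint < sum(round_payoff("C", "C"))
```

Under these values, mutual defection yields a joint payoff of 2 versus 6 for mutual cooperation, which is what makes the game a social dilemma.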
Anthropomorphism is the tendency to attribute human characteristics to non-human entities. This paper presents exploratory work evaluating how human responses during the ultimatum game vary with the level of anthropomorphism of the opponent, which was either a human, a humanoid robot or a computer. Results from an online user study (N=138) show that rejection scores are higher for a computer opponent than for a human or robotic opponent. Participants also took significantly longer to reply to the offer of the computer than to that of the robot. This suggests that players use similar decision processes when accepting or rejecting offers from robotic and human opponents, but a different process when facing a computer opponent.
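The ultimatum game mechanics underlying the rejection scores above can be sketched briefly; the stake size and the threshold-based responder policy are hypothetical illustrations, not the study's model:

```python
def responder_payoffs(offer, total=10, accept=True):
    """Ultimatum game: the proposer splits `total`, keeping the rest.
    Rejection leaves both players with nothing (illustrative stakes).
    Returns (proposer payoff, responder payoff)."""
    if accept:
        return total - offer, offer
    return 0, 0

def threshold_responder(offer, min_acceptable=3):
    """A simple hypothetical responder policy: reject offers below a
    fairness threshold, even though rejection is costly to both."""
    return offer >= min_acceptable
```

The point of the game is that rejecting any positive offer is materially irrational, so higher rejection rates against the computer indicate a difference in social, not monetary, reasoning.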
International Journal of Social Robotics, 2020
With the rise of integration of robots in our daily lives, people find their own ways of normalizing their interaction with artificial agents, one of which is attributing mind to them. Research has shown that attributing mind to an artificial agent improves the flow of the interaction and alters subsequent behavior. However, little is known about the influence of the interaction context and its outcome. Addressing this gap in the literature, we explored the influence of the Interaction Context (cooperation vs. competition) and Outcome (win vs. lose) on the levels of mind attributed to an artificial agent. To that end, we used an interactive trivia game played between teams of human participants and the robot Cozmo. We found that in the cooperation condition, those who lost as a team ascribed greater levels of mind to the agent than those who won as a team. However, participants who competed with and won against the robot attributed greater levels of mind to the agent than those who cooperated and won as a team. These results suggest that people attribute mind to artificial agents in a self-serving way, depending on the interaction context and outcome.
More Social and Emotional Behaviour May Lead to Poorer Perceptions of a Social Robot
Lecture Notes in Computer Science, 2015
In this paper we present a study with an autonomous robot that plays a game against a participant while expressing some social behaviors. We explored the role of emotional sharing from the robot to the user, in order to understand how it might affect users' perception of the robot. Two conditions were formulated: 1) Sharing Condition (the robot shared its emotional state at the end of each board game); and 2) No Sharing Condition (the robot did not share its emotions). Participants were randomly assigned to one of the conditions, following a between-subjects design. We expected that in the Sharing Condition participants would feel closer to the robot and would perceive it as more humanlike. However, the results contradicted this expectation and highlight the caution needed when designing social behaviours for human-robot interaction (HRI).
No fair!!: an interaction with a cheating robot
Proceedings of the 5th ACM/ …, 2010
Using a humanoid robot and a simple children's game, we examine the degree to which variations in behavior result in attributions of mental state and intentionality. Participants play the well-known children's game "rock-paper-scissors" against a robot that either plays fairly, or that cheats in one of two ways. In the "verbal cheat" condition, the robot announces the wrong outcome on several rounds which it loses, declaring itself the winner. In the "action cheat" condition, the robot changes its gesture after seeing its opponent's play. We find that participants display a greater level of social engagement and make greater attributions of mental state when playing against the robot in the conditions in which it cheats.
Can empathy affect the attribution of mental states to robots?
ACM ICMI , 2023
This paper presents an experimental study showing that the humanoid robot NAO, in a condition already validated with regard to its capacity to trigger situational empathy in humans, is able to stimulate the attribution of mental states towards itself. Indeed, results show that participants not only experienced empathy towards NAO, when the robot was afraid of losing its memory due to a malfunction, but they also attributed higher scores to the robot's emotional intelligence in the Attribution of Mental State Questionnaire, in comparison with the users in the control condition. This result suggests a possible correlation between empathy toward the robot and humans' attribution of mental states to it.
Emotion Attribution to a Non-Humanoid Robot in Different Social Situations
Plos ONE, 2014
In the last few years there has been increasing interest in building companion robots that interact in a socially acceptable way with humans. In order to interact in a meaningful way, a robot has to convey intentionality and emotions of some sort to increase believability. We suggest that human-robot interaction should be considered as a specific form of inter-specific interaction and that human-animal interaction can provide a useful biological model for designing social robots. Dogs are a promising biological model since, during the domestication process, they adapted to the human environment and learned to participate in complex social interactions. In this observational study we propose to design emotionally expressive behaviour of robots using the behaviour of dogs as inspiration, and to test these dog-inspired robots with humans in an inter-specific context. In two experiments (wizard-of-oz scenarios) we examined humans' ability to recognize two basic emotions and a secondary emotion expressed by a robot. In Experiment 1 we provided our companion robot with two kinds of emotional behaviour ("happiness" and "fear"), and studied whether people attribute the appropriate emotion to the robot and interact with it accordingly. In Experiment 2 we investigated whether participants tend to attribute guilty behaviour to a robot in a relevant context, by examining whether human participants, relying on the robot's greeting behaviour, can detect if the robot transgressed a predetermined rule. Results of Experiment 1 showed that people readily attribute emotions to a social robot and interact with it in accordance with the expressed emotional behaviour. Results of Experiment 2 showed that people are able to recognize if the robot transgressed on the basis of its greeting behaviour. In summary, our findings showed that dog-inspired behaviour is a suitable medium for making people attribute emotional states to a non-humanoid robot.
Human-robot interactions: A psychological perspective
Human-robot interaction plays a key role in the field of robotics, especially in applications where robots work with humans in a cooperative, semi-independent and/or independent way. The rules of engagement between robots and humans must be explicitly defined to avoid conflicts. In addition, robots must be programmed to behave in a humane way, especially in critical situations. Artificial emotions, along with artificial intelligence, can bring meaningful robotic configurations to this arena. A review of work in this area is presented in the paper, along with key ideas for implementation in various social environments and evaluation schemes for their performance.
An investigation of social factors related to online mentalizing in a human-robot competitive game
Japanese Psychological Research, 2013
"Mentalizing" is the ability to attribute mental states to other agents. The lack of online mentalizing, which is required in actual social contexts, may cause serious social disorders such as autism. However, the mechanism of online mentalizing is still unclear. In this study, we found that behavioral entropy (which indicates the randomness of decision making) was an efficient behavioral index for online mentalizing in a human-human competitive game. Further, participants played the game with a humanoid robot; the results indicated that entropy was significantly higher in participants whose gaze followed the robot's head turn than in those who did not, although the explicit human-likeness of the robot did not correlate with behavioral entropy. These results imply that mentalizing could be divided into two separate processes: an explicit, logical reasoning process and an implicit, intuitive process driven by perception of the other agent's gaze. We hypothesize that the latter is a core process for online mentalizing, and we argue that the social problems of autistic people are caused by dysfunction of this process.
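"Behavioral entropy" as an index of the randomness of decision making is plausibly the Shannon entropy of a player's choice frequencies, though the paper's exact estimator is not given here. A minimal sketch under that reading:

```python
import math
from collections import Counter

def behavioral_entropy(choices):
    """Shannon entropy (in bits) of a sequence of discrete choices.
    A constant sequence scores 0; uniformly mixed play scores
    log2(number of options). Higher values mean more random, less
    exploitable behavior. Illustrative reading of the paper's index,
    not its exact estimator."""
    counts = Counter(choices)
    n = len(choices)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

For example, a player who always picks the same move in a competitive game scores 0 bits, while one mixing evenly between two moves scores 1 bit.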
The Effects of an Impolite vs. a Polite Robot Playing Rock-Paper-Scissors
Social Robotics, 2016
There is growing interest in the Human-Robot Interaction community in studying the effect of a social robot's attitude during interaction with users. As in human-human interaction, variations in the robot's attitude may cause substantial differences in how the robot is perceived. In this work, we present a preliminary study assessing the effects of the robot's verbal attitude while playing rock-paper-scissors with several subjects. During the game the robot was programmed to behave either in a polite or an impolite manner by changing the content of its utterances. In the experiments, 12 participants played with the robot and completed a questionnaire to evaluate their impressions. The results showed that a polite robot is perceived as more likable and more engaging than a rude, defiant one.