The Role of Social Dialogue and Errors in Robots

Effective Persuasion Strategies for Socially Assistive Robots

2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

In this paper we present the results of an experimental study investigating the application of human persuasive strategies to a social robot. We demonstrate that robot displays of goodwill and similarity to the participant significantly increased robot persuasiveness, as measured objectively by participant behaviour. However, such strategies had no impact on subjective measures concerning perception of the robot, and perception of the robot did not correlate with participant behaviour. We hypothesise that this is due to difficulty in accurately measuring perception of a robot using subjective measures. We suggest our results are particularly relevant for the design and development of socially assistive robots.

Is Politeness Better than Impoliteness? Comparisons of Robot's Encouragement Effects Toward Performance, Moods, and Propagation

International Journal of Social Robotics, 2023

This study experimentally compared the effects of encouragement with polite/impolite attitudes from a robot in a monotonous task from three viewpoints: performance, mood, and propagation. Experiment I investigated encouragement effects on performance and mood. The participants did a monotonous task during which a robot continuously provided polite, neutral, or impolite encouragement. Our experiment results showed that polite and impolite encouragement significantly improved performance more than neutral comments, although there was no significant difference between polite and impolite encouragement. In addition, impolite encouragement caused significantly more negative moods than polite encouragement. Experiment II determined whether the robot's encouragement influenced the participants' own encouragement styles. The participants behaved similarly to the robot in Experiment I, i.e., they selected polite, neutral, or impolite encouragement while observing the progress of a monotonous task by a dummy participant. The results, which showed that the robot's encouragement significantly influenced the participants' encouragement styles, suggest that polite encouragement is more advantageous than impolite encouragement.

It Would Make Me Happy if You Used My Guess: Comparing Robot Persuasive Strategies in Social Human-Robot Interaction

IEEE Robotics and Automation Letters

This paper presents an exploratory social Human-Robot Interaction (HRI) study that investigates and compares the persuasive effectiveness of robots attempting to influence a user with different behavior strategies. Ten multimodal persuasive strategies were uniquely designed based on Compliance Gaining Behaviors (CGBs). These persuasive strategies were then compared using two competing social robots attempting to influence a participant's estimate during a jelly bean guessing game. The results of our exploratory study with 200 participants showed that affective and logical strategies had a higher potential for persuasive influence and warrant further research.

Social interaction with robots and agents: Where do we stand, where do we go

2009

Robots and agents are becoming increasingly prominent in everyday life, taking on a variety of roles, including helpers, coaches, and even social companions. A core requirement for these social agents is the ability to establish and maintain long-term trusting and engaging relationships with their human users. Much research has already been done on the prerequisites for these types of social agents and robots, in affective computing, social computing and affective HCI. A number of disciplines within psychology and the social sciences are also relevant, contributing theories, data and methods for the emerging areas of social robotics and social computing in general. The complexity of the task of designing these social agents, and the diversity of the relevant disciplines, can be overwhelming. This paper provides a summary of a special session at ACII 2009 whose purpose was to give an overview of the state of the art in social agents and robots, and to explore some of the fundamental questions regarding their development and the evaluation of their effectiveness.

A Review on The Development and Effect of Conversational Agents and Social Robots

HCI addresses the concept of human-machine communication, including but not limited to AI-enabled embodied conversational software agents. The emergence of these agents has changed the history of computing and robotics once and for all. One of the most prominent social and intellectual qualities in humans is the ability to have conversations. Typically, a conversation takes place between people through verbal and non-verbal mediums, and languages play a vital role in these communications. Humanness and human-like interaction qualities have been at the core of human-computer interface design from the beginning of this research doctrine [1]. Programming languages have enabled computer scientists to establish a connection between humans and machines that allows the machine to understand the instructions given. However, the widespread use of cell phones, computers and other smart gadgets has made it a demand of the time that machines understand commands given in natural languages (e.g. English, German, Spanish), as the user base is no longer limited to computer scientists [1]. Hence, robotics, natural language processing, machine learning, artificial intelligence, etc. have combined forces to bridge the communication gap between machines and users. From ELIZA [3] and Rea [4] to Siri, Amazon Alexa, and Google Assistant, software interfaces have come a long way through a lengthy development process. They have proven influential enough to change social, economic and political outcomes through their intelligent behavior [2]. The boundary between human-like and bot-like behavior is greyer than it is black and white [2]. Software interfaces have changed their appearance over time by stripping down from the ideals of face-to-face conversation. Chatbots (e.g. Twitter bots) found online have developed distinct social media ecosystems [2] in which humans and robots interact on the same plane. To converse or interact with machines, humans are being trained to accept and use a new set of vocabularies [1]. In this paper, I discuss how these conversational agents and social robots are shaping our social media ecosystems. I revisit the interrelation between humans and machines while focusing on the socio-cultural impact of these robots on our IoT-enabled smart homes and online virtual spaces.

Robot Social Intelligence

Lecture Notes in Computer Science, 2012

Robots are pervading human society at an ever-accelerating rate, but to actualize their profound potential impact they will need cognitive capabilities that support the social intelligence required to engage fluently with people and other robots. People are social agents, and robots must develop sufficient social intelligence to engage with them effectively; despite their enormous potential, robots will not be accepted in society unless they exhibit such skills. They cannot work with people effectively if they ignore the limitations, needs, expectations and vulnerability of the people working in and around their workspaces. People are limited social agents: they do not have unlimited cognitive, computational and physical capabilities such as perception, attention, reaction to stimuli, anticipation and problem-solving. In addition, people are constrained by their morphology, which limits, for example, their physical strength. People cannot be expected to, and will not, compensate for the social deficiencies of robots; hence widespread acceptance and integration of robots into society will only be achieved if robots possess sufficient social intelligence to communicate, interact and collaborate with people. In this paper we identify the key cognitive capabilities robots will require to achieve appropriate levels of social intelligence for safe and effective engagement with people. This work serves as a proto-blueprint that can inform the emerging roadmap and research agenda for the exciting and challenging new field of social robotics.

Avoiding the Abject and Seeking the Script: Perceived Mind, Morality, and Trust in a Persuasive Social Robot

ACM Transactions on Human-Robot Interaction

Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots’ persuasive impact; however, little is known of how perceptions of two key human capacities—mind and morality—function in robots’ persuasive potential. This experiment tests the possibility that perceived robot mind and morality will correspond with greater persuasive impact, moderated by relational trustworthiness for a moral appeal and by capacity trustworthiness for a logical appeal. Via an online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles to access important online spaces—either on grounds that it is logical or moral to do so. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance, and (b) perceiving agentic capacity diminis...