Forms and Frames: Mind, Morality, and Trust in Robots Across Prototypical Interactions

The Influence of a Robot's Embodiment on Trust

Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction - HRI '17, 2017

Trust, taken from the human perspective, is an essential factor that determines the use of robots as companions or care robots, especially given the long-term character of the interaction. This study investigated the influence of a robot's embodiment on people's trust over a prolonged period of time. The participants engaged in a collaborative task either with a physical robot or a virtual agent in 10 sessions spread over a period of 6 weeks. While our results showed that the level of trust was not influenced by the type of embodiment, time was an important factor, with a significant increase in users' trust. Our results raise new questions on the role of embodiment in trust and contribute to the growing research in the area of trust in human-robot interaction.

Interpersonal variation in understanding robots as social actors

Proceedings of the 6th international conference on Human-robot interaction - HRI '11, 2011

In this paper, I investigate interpersonal variation in verbal HRI with respect to the computers-as-social-actors hypothesis. The analysis of a corpus of verbal human-robot interactions shows that only a subgroup of the users treat the robot as a social actor. Thus, taking interpersonal variation into account reveals that not all users transfer social behaviors from human interactions into HRI. This casts doubts on the suggestion that the social responses to computers and robots reported on previously are due to mindlessness. At the same time, participants' understanding of robots as social or non-social actors can be shown to have a considerable influence on their linguistic behavior throughout the dialogs.

FIDES: How Emotions and Small Talks May Influence Trust in an Embodied vs. Non-embodied Robot

2017

Trust is known as a complex social-emotional concept, and its formation is highly affected by non-verbal behavior. Social robots, like any other social entities, are supposed to maintain a level of trustworthiness during their interactions with humans. In this sense, we have examined the influence of a set of factors, including emotional representation, performing small talk, and embodiment, on the way people infer the trustworthiness of a robot. To examine these factors, we performed different experiments using two robots, NAO and Emys, with and without physical embodiment respectively. To measure trust levels, we used two different metrics: a trust questionnaire and the amount of donations the participants would make. The results suggest that these factors significantly influence the level of trust: people's trust in Emys differs significantly depending on its facial expressions and on whether it makes small talk, and people tend to donate differently to NAO when i...

Framing Effects on Judgments of Social Robots' (Im)Moral Behaviors

Frontiers in Robotics and AI, 2021

Frames, discursive structures that make dimensions of a situation more or less salient, are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how they are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents, especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android's (im)moral behavior, and experimentally testing how produced frames prime judgments about an android's morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot's morally ambiguous behavior. Evidence also suggests that technophobia-induced reactance may move people to reject a produced frame in favor of a divergent individual frame.

Do You Trust Me? Investigating the Formation of Trust in Social Robots

Progress in Artificial Intelligence, 2019

Human beings live in a society with a complex system of social-emotional relations. Trust is one key concept in this system. It can help to reduce social complexity, mainly in those cases where it is necessary to cooperate. Thus, the area of social robotics has been studying different approaches to performing cooperative tasks between humans and robots. Here, we examine the influence of a set of factors (gender, emotional representation, making small talk, and embodiment) that may affect the trustworthiness of a robot. The results showed that these factors influence the level of trust that people place in robots. Specifically, an embodied social robot telling a sad story with sad facial expressions and gestures had a greater influence on the trust of female participants.

Welcoming Our Robot Overlords: Initial Expectations About Interaction With a Robot

Communication Research Reports, 2014

Because robots and other electronic agents perform increasingly social functions, people will soon face the possibility of more frequent human–robot interactions. But what kinds of expectations do people bring with them into these potential interactions? Based on the possibility of a human-to-human interaction script, the current research hypothesized that people will be more uncertain about, anticipate less social attraction to, and expect less social presence when they expect to interact with a robot as opposed to another human. An experiment was designed in which people were told they would interact with either a robot or another person, and each of these three expectations was measured. The data were consistent with each of three hypotheses. These findings are discussed, as are avenues for future research.

Do Robot Performance and Behavioral Style affect Human Trust?

International Journal of Social Robotics, 2014

An important aspect of a robot's social behavior is to convey the right amount of trustworthiness. Task performance has been shown to be an important source for trustworthiness judgments. Here, we argue that factors such as a robot's behavioral style can play an important role as well. Our approach to studying the effects of a robot's performance and behavioral style on human trust involves experiments with simulated robots in Video Human-Robot Interaction (VHRI) and Immersive Virtual Environments (IVE). Although VHRI and IVE settings cannot substitute for genuine interaction with a real robot, they can provide useful complementary approaches to experimental research in social Human-Robot Interaction. VHRI enables rapid prototyping of robot behaviors. Simulating Human-Robot Interaction in IVEs can be a useful tool for measuring human responses to robots and can help avoid the many constraints caused by real-world hardware. However, there are also difficulties with the generalization of results from one setting (e.g., VHRI) to another (e.g., IVE or the real world), which we discuss. In this paper, we use animated robot avatars in VHRI to rapidly identify robot behavioral styles that affect human trust assessment of the robot. In a subsequent study, we use an IVE to measure behavioral interaction between humans and an animated robot avatar equipped with behaviors from the VHRI experiment. Our findings reconfirm that a robot's task performance

Responses to robot social roles and social role framing

2011 International Conference on Collaboration Technologies and Systems (CTS), 2011

Promoting dependents' perceptions of point-of-injury care robots as social actors may elicit feelings of companionship and diminish stress. However, numerous rescuers may control these robots and communicate with dependents through the robot, creating communication and interaction challenges that may be best addressed by creating a pure medium robot expressing no social identity. In addition, setting dependents' expectations regarding the robot's social role may improve perceptions of the robot and trust in the robot's suggestions. In a 3 (role: pure medium vs. social medium vs. social actor) x 2 (framing: framed vs. unframed) between-participants design, participants interacted with a simulation of a robot in a search and rescue context (N=84). Robot social behavior decreased participants' fear, yet made participants feel more isolated. Framing generated increased trust in the robot. Implications for the theory and design of robots and human-robot interaction are discussed.

Mind Perception and Social Robots: The Role of Agent Appearance and Action Types

Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, 2021

Mind perception is considered to be the ability to attribute mental states to non-human beings. As social robots increasingly become part of our lives, one important question for HRI is to what extent we attribute mental states to these agents and the conditions under which we do so. In the present study, we investigated the effect of appearance and the type of action a robot performs on mind perception. Participants rated videos of two robots in different appearances (one metallic, the other human-like), each of which performed four different actions (manipulating an object, verbal communication, non-verbal communication, and an action that depicts a biological need) on Agency and Experience dimensions. Our results show that the type of action that the robot performs affects the Agency scores. When the robot performs human-specific actions such as communicative actions or an action that depicts a biological need, it is rated to have more agency than when it performs a manipulative action. On the other hand, the appearance of the robot did not have any effect on the Agency or the Experience scores. Overall, our study suggests that the behavioral skills we build into social robots could be quite important in the extent to which we attribute mental states to them.

Avoiding the Abject and Seeking the Script: Perceived Mind, Morality, and Trust in a Persuasive Social Robot

ACM Transactions on Human-Robot Interaction

Social robots are being groomed for human influence, including the implicit and explicit persuasion of humans. Humanlike characteristics are understood to enhance robots' persuasive impact; however, little is known of how perceptions of two key human capacities, mind and morality, function in robots' persuasive potential. This experiment tests the possibility that perceived robot mind and morality will correspond with greater persuasive impact, moderated by relational trustworthiness for a moral appeal and by capacity trustworthiness for a logical appeal. Via an online survey, a humanoid robot asks participants to help it learn to overcome CAPTCHA puzzles to access important online spaces, either on grounds that it is logical or moral to do so. Based on three performance indicators and one self-report indicator of compliance, analysis indicates that (a) seeing the robot as able to perceive and act on the world selectively improves compliance, and (b) perceiving agentic capacity diminis...