eEVA as a Real-Time Multimodal Agent Human-Robot Interface

Let's be friends! A rapport-building 3D embodied conversational agent for the Human Support Robot

2021

Partial subtle mirroring of nonverbal behaviors during conversations (also known as mimicking or parallel empathy) is essential for rapport building, which in turn is essential for optimal human-human communication outcomes. Mirroring has been studied in interactions between robots and humans, and in interactions between Embodied Conversational Agents (ECAs) and humans. However, very few studies examine interactions between humans and ECAs that are integrated with robots, and none of them examine the effect of mirroring nonverbal behaviors in such interactions. Our research question is whether integrating an ECA able to mirror its interlocutor's facial expressions and head movements (continuously or intermittently) with a human-service robot will improve the user's experience with a support robot that is able to perform useful mobile manipulative tasks (e.g. at home). Our contribution is the complex integration of an expressive ECA, able to track its interlocutor's face, and to mir...
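
A minimal sketch of the control loop this kind of mirroring implies (not the authors' implementation): FaceTracker and AvatarController are hypothetical stand-ins for the face-analysis and ECA-animation components, and the smoothing factor and intermittent duty cycle are assumptions.

```python
import time
import random

class FaceTracker:
    """Hypothetical wrapper around a face-analysis library; a real tracker
    would analyze camera frames and return head pose plus an expression label."""
    def read(self):
        return {"yaw": random.uniform(-10, 10), "pitch": 0.0, "roll": 0.0,
                "expression": "smile"}

class AvatarController:
    """Hypothetical interface to the ECA's animation engine."""
    def set_head_pose(self, yaw, pitch, roll):
        print(f"avatar head -> yaw={yaw:.1f} pitch={pitch:.1f} roll={roll:.1f}")

    def set_expression(self, label):
        print(f"avatar expression -> {label}")

def mirror_loop(tracker, avatar, continuous=True, period=5.0,
                alpha=0.3, fps=30, duration=10.0):
    """Mirror head pose and expression either continuously or only during
    periodic windows (intermittent mirroring). Exponential smoothing keeps
    the mirrored motion subtle rather than an exact copy."""
    pose = {"yaw": 0.0, "pitch": 0.0, "roll": 0.0}
    start = time.time()
    while time.time() - start < duration:
        obs = tracker.read()
        # In intermittent mode, mirror only during the first half of each period.
        active = continuous or ((time.time() - start) % period) < period / 2
        if active:
            for k in ("yaw", "pitch", "roll"):
                pose[k] = (1 - alpha) * pose[k] + alpha * obs[k]
            avatar.set_head_pose(pose["yaw"], pose["pitch"], pose["roll"])
            avatar.set_expression(obs["expression"])
        time.sleep(1.0 / fps)

mirror_loop(FaceTracker(), AvatarController(), continuous=False, duration=2.0)
```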

Jubileo: An Open-Source Robot and Framework for Research in Human-Robot Social Interaction

2022

Human-robot interaction (HRI) is essential to the widespread use of robots in daily life. Robots will eventually be able to carry out a variety of duties in human society through effective social interaction. As robots start to proliferate in the personal workspace, creating straightforward and understandable interfaces to engage with them is essential. Typically, interactions with simulated robots are displayed on screens. A more appealing alternative is virtual reality (VR), which gives visual cues more like those seen in the real world. In this study, we introduce Jubileo, a robotic animatronic face with various tools for research and application development in the field of human-robot social interaction. The Jubileo project offers more than just a fully functional open-source physical robot; it also provides a comprehensive framework that operates with a VR interface, enabling an immersive environment for HRI application tests and noticeably faster deployment.
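
The abstract does not spell out the framework's interface; as a hedged illustration, the sketch below assumes a ROS-based setup (the topic and joint names are invented) to show how an external process might drive an animatronic face like Jubileo's.

```python
#!/usr/bin/env python
# Hedged sketch: drive animatronic face joints over ROS. The topic name and
# joint names are illustrative assumptions, not Jubileo's actual interface.
import rospy
from sensor_msgs.msg import JointState

def main():
    rospy.init_node("face_driver")
    pub = rospy.Publisher("/jubileo/face/joint_command", JointState, queue_size=1)
    rate = rospy.Rate(30)  # stream joint targets at 30 Hz
    while not rospy.is_shutdown():
        msg = JointState()
        msg.header.stamp = rospy.Time.now()
        msg.name = ["jaw", "brow_left", "brow_right"]
        msg.position = [0.2, 0.1, 0.1]  # radians; a real driver would animate these
        pub.publish(msg)
        rate.sleep()

if __name__ == "__main__":
    main()
```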

Using Mixed Reality Agents as Social Interfaces for Robots

Endowing robots with a social interface is often costly and difficult. Virtual characters, on the other hand, are comparatively cheap and well equipped but suffer from other difficulties, most notably their inability to interact with the physical world. This paper details our wearable solution for combining physical robots and virtual characters into a Mixed Reality Agent (MiRA) through mixed reality visualisation. It describes a pilot study demonstrating our system and showing how such a technique can offer a viable, cost-effective approach to enabling a rich social interface for Human-Robot Interaction.
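
The core technical step behind a MiRA-style overlay is registering the virtual character to the tracked pose of the physical robot so it can be rendered in the wearer's view. A minimal sketch with numpy; the frame conventions and the 0.3 m mounting offset are illustrative assumptions, not the paper's calibration procedure.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def character_pose_in_camera(T_world_robot, T_world_camera, offset=(0.0, 0.0, 0.3)):
    """Anchor the character slightly above the robot body (hypothetical offset)
    and express that pose in the head-mounted camera frame for rendering."""
    T_robot_char = make_T(np.eye(3), np.array(offset))
    T_world_char = T_world_robot @ T_robot_char
    return np.linalg.inv(T_world_camera) @ T_world_char

# Example: robot 2 m in front of the world origin, camera at the origin.
T_robot = make_T(np.eye(3), np.array([2.0, 0.0, 0.0]))
T_camera = make_T(np.eye(3), np.zeros(3))
print(character_pose_in_camera(T_robot, T_camera))
```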

Towards Building Rapport with a Human Support Robot

Springer eBooks, 2022

Human support robots (mobile robots able to perform useful domestic manipulative tasks) might be better accepted by people if they can communicate in ways people naturally understand: e.g. speech, but also facial expressions and postures, among others. Subtle (unconscious) mirroring of nonverbal cues during conversations promotes rapport building, essential for good communication. We investigate whether, as in human-human communication, the ability of a robot to mirror its user's head movements and facial expressions in real time can improve the user's experience with it. We describe the technical integration of a Toyota Human Support Robot (HSR) with a facially expressive 3D embodied conversational agent (ECA), named ECA-HSR. The HSR and the ECA are aware of the user's head movements and facial emotions, and can mirror them in real time. We then discuss a user study we designed in which participants interacted with ECA-HSR in a simple social dialog task under three conditions: mirroring of the user's head movements, mirroring of the user's facial emotions, and mirroring of both. Our results suggest that interacting with an ECA-HSR that mirrors both the user's head movements and facial expressions is preferred over the other conditions. Among other insights, the study revealed that the accuracy of open-source, real-time recognition of facial expressions of emotion needs improvement for best user acceptance.
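
The abstract's note about noisy open-source emotion recognition suggests gating mirrored expressions on classifier confidence. Below is a hedged sketch of such a gate; the blendshape names, weights, threshold, and Avatar interface are hypothetical, not ECA-HSR's actual rig or pipeline.

```python
# Hypothetical mapping from a recognized emotion label to facial rig weights.
BLENDSHAPES = {
    "happiness": {"mouth_smile": 0.8, "cheek_raise": 0.5},
    "sadness":   {"brow_inner_up": 0.7, "mouth_frown": 0.6},
    "surprise":  {"brow_up": 0.9, "jaw_open": 0.6},
    "neutral":   {},
}

class Avatar:
    """Hypothetical ECA rig interface."""
    def set_blendshape(self, name, weight):
        print(f"{name} = {weight}")

def apply_emotion(avatar, label, confidence, threshold=0.6):
    """Mirror a recognized expression only when the classifier is confident;
    gating on confidence avoids mirroring spurious labels from a noisy recognizer."""
    if confidence < threshold:
        label = "neutral"
    for shape, weight in BLENDSHAPES.get(label, {}).items():
        avatar.set_blendshape(shape, weight)

apply_emotion(Avatar(), "happiness", confidence=0.82)
```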

Where Robots and Virtual Agents Meet: A Survey of Social Interaction Research across Milgram's Reality-Virtuality Continuum

2009

Traditionally, social interaction research has concentrated on either fully virtually embodied agents (e.g. embodied conversational agents) or fully physically embodied agents (e.g. robots). For some time, however, both areas have been augmenting their agents' capabilities for social interaction using ubiquitous and intelligent environments. We place different agent systems for social interaction along Milgram's Reality-Virtuality Continuum, according to the degree to which they are embodied in a physical, virtual, or mixed reality environment, and show systems that follow the next logical step in this progression, namely social interaction in the middle of Milgram's continuum, that is, agents richly embodied in both the physical and the virtual world. This paper surveys the field of social interaction research with embodied agents with a particular view towards their embodiment forms, and highlights some of the advantages and issues associated with the very recent field of social interaction with mixed reality agents.

A friendly gesture: Investigating the effect of multimodal robot behavior in human-robot interaction

RO-MAN, 2011 IEEE, 2011

Gesture is an important feature of social interaction, frequently used by human speakers to illustrate what speech alone cannot provide, e.g. to convey referential, spatial, or iconic information. Accordingly, humanoid robots that are intended to engage in natural human-robot interaction should produce speech-accompanying gestures for comprehensible and believable behavior. But how does a robot's non-verbal behavior influence human evaluation of communication quality and of the robot itself? To address this research question, we conducted two experimental studies. Using the Honda humanoid robot, we investigated how humans perceive various gestural patterns performed by the robot as they interact in a situational context. Our findings suggest that the robot is evaluated more positively when non-verbal behaviors such as hand and arm gestures are displayed along with speech. These effects were enhanced when participants were explicitly requested to direct their attention towards the robot during the interaction.
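
A common timing convention in gesture research (not necessarily the method used in this study) is to schedule a gesture so that its stroke, the expressive peak, coincides with the affiliated word rather than starting when the word does. A minimal scheduler sketch, with all timings as assumptions:

```python
import threading

def schedule_gesture(gesture_fn, speech_start, stroke_time, prep_duration):
    """Start the gesture early enough that its preparation phase finishes
    exactly when the affiliated word is spoken. All times are in seconds."""
    delay = max(0.0, (speech_start + stroke_time) - prep_duration)
    threading.Timer(delay, gesture_fn).start()

# Example: speech begins at t=0, the target word lands at t=1.2 s, and the
# pointing gesture needs 0.4 s of preparation, so it launches at t=0.8 s.
schedule_gesture(lambda: print("point at exhibit"), 0.0, 1.2, 0.4)
```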

Intuitive Multimodal Interaction with Communication Robot

2007

One of the most important motivations for many humanoid robot projects is that robots with a human-like body and human-like senses could, in principle, be capable of intuitive multimodal communication with people. The general idea is that by mimicking the way humans interact with each other, it will be possible to transfer the efficient and robust communication strategies that humans use in their interactions to the man-machine interface. This includes the use of multiple modalities, such as speech, facial expressions, gestures, and body language. If successful, this approach yields a user interface that leverages the evolution of human communication and that is intuitive to naïve users, as they have practiced it since early childhood. We work towards intuitive multimodal communication in the domain of a museum guide robot. This application requires interacting with multiple unknown persons. The testing of communication robots in science museums and at science fairs is popular, becau...

Social interaction with robots and agents: Where do we stand, where do we go

2009

Robots and agents are becoming increasingly prominent in everyday life, taking on a variety of roles, including helpers, coaches, and even social companions. A core requirement for these social agents is the ability to establish and maintain long-term trusting and engaging relationships with their human users. Much research has already been done on the prerequisites for these types of social agents and robots, in affective computing, social computing, and affective HCI. A number of disciplines within psychology and the social sciences are also relevant, contributing theories, data, and methods for the emerging areas of social robotics and social computing in general. The complexity of the task of designing these social agents, and the diversity of the relevant disciplines, can be overwhelming. This paper provides a summary of a special session at ACII 2009 whose purpose was to give an overview of the state of the art in social agents and robots, and to explore some of the fundamental questions regarding their development and the evaluation of their effectiveness.

Multimodal Human-Robot Interaction Framework for a Personal Robot

2006

This paper presents a framework for multimodal human-robot interaction. The proposed framework is being implemented in a personal robot called Maggie, developed at the RoboticsLab of the University Carlos III of Madrid for social interaction research. The control architecture of this personal robot is a hybrid architecture called AD (Automatic-Deliberative), which incorporates an Emotion Control System (ECS). Maggie's main goal is to interact in a natural way and establish a peer-to-peer relationship with humans. To achieve this goal, a set of human-robot interaction skills is developed based on the proposed framework. These skills span tactile, visual, remote voice, and sound modes. Multimodal fusion and synchronization are also presented in this paper.
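
As a hedged illustration of the multimodal fusion and synchronization the abstract mentions, the sketch below merges time-stamped events from independent channels that fall within a short window; the window length and channel names follow the abstract's modality list, not Maggie's actual AD implementation.

```python
import time
from collections import deque

class MultimodalFusion:
    """Timestamp-window fusion: events from independent channels (tactile,
    visual, voice, sound) are buffered, and events arriving within the same
    short window are merged into a single interaction event."""
    def __init__(self, window=0.5):
        self.window = window          # fusion window in seconds (assumption)
        self.buffer = deque()

    def push(self, channel, payload):
        now = time.time()
        self.buffer.append((now, channel, payload))
        # Drop events that have fallen out of the fusion window.
        while self.buffer and now - self.buffer[0][0] > self.window:
            self.buffer.popleft()
        return {ch: data for _, ch, data in self.buffer}

fusion = MultimodalFusion()
fusion.push("voice", "hello")
print(fusion.push("tactile", "head_touch"))  # voice + touch fused into one event
```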