Expressive facial speech synthesis on a robotic platform

Expressive speech for a virtual talking head

This paper presents our work on building Eface, an expressive facial speech synthesis system that can be used on a social or service robot. Eface aims to enable a robot to deliver information clearly, with empathetic speech and an expressive virtual face. The system is built on two open source software packages: the Festival speech synthesis system, which gives the robot the capability to speak with different voices and emotions, and Xface, a 3D talking head, which enables the robot to display various human facial expressions. This paper addresses how to express different speech emotions with Festival and how to integrate the synthesized speech with Xface. We have also implemented Eface on a physical robot and tested it in several service scenarios.
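The Festival side of such an integration can be sketched from its public tooling. Below is a minimal, hypothetical Python sketch of per-emotion prosody control, assuming Festival's `text2wave` utility is installed and on PATH; `Duration_Stretch` is a real Festival parameter, but the emotion-to-parameter table is an illustrative assumption, not the paper's actual mapping.

```python
# Sketch: drive Festival's text2wave with per-emotion prosody settings.
# The EMOTION_PROSODY table is hypothetical; a fuller mapping would also
# adjust pitch targets, which are voice-dependent in Festival.
import subprocess

EMOTION_PROSODY = {
    "neutral": "(Parameter.set 'Duration_Stretch 1.0)",
    "sad":     "(Parameter.set 'Duration_Stretch 1.4)",   # slower speech
    "happy":   "(Parameter.set 'Duration_Stretch 0.8)",   # faster speech
}

def synthesize(text: str, emotion: str, wav_path: str) -> None:
    """Render `text` to `wav_path` via text2wave, reading text from stdin."""
    scheme = EMOTION_PROSODY.get(emotion, EMOTION_PROSODY["neutral"])
    subprocess.run(
        ["text2wave", "-o", wav_path, "-eval", scheme],
        input=text.encode("utf-8"),
        check=True,
    )

if __name__ == "__main__":
    synthesize("Your order is ready.", "happy", "/tmp/utterance.wav")
```

The resulting waveform (and its phoneme timing) would then be handed to the talking head for lip animation.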

An expressive system for endowing robots or animated characters with affective facial displays

The 2002 conference …, 2002

The expressive system presented in this chapter is designed to link a model of emotion (or other underlying internal states) with socially appropriate expressive behaviors in an intelligent agent or robot. The system was initially developed to provide an affective interface to Joshua Blue, a computer simulation of an embodied mind that includes emotion and is designed to learn from its environment and interact effectively in social contexts. While many of the components of this system are not new, their combination in an expressive system is innovative. The advantages of this implementation include the ability to accommodate different emotion models (or models of other internal states), to easily specify dynamic sequences of expressive behavior, and to directly inspect, continuously monitor, and intervene to change the underlying states in a system as they occur. The system currently uses a real-time 3D facial avatar with a simplified anatomical model, but may be readily modified to direct the expressive behaviors of more complex animated characters or robots.
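The pluggable linkage the abstract describes, where any emotion model can drive expressive behaviors, can be illustrated with a short sketch. The interface, state names, and blend weights below are assumptions for illustration, not the Joshua Blue system's actual design.

```python
# Sketch: any emotion model exposing a named-state vector can drive
# facial action parameters; states blend additively, clipped to [0, 1].
from typing import Dict, Protocol

class EmotionModel(Protocol):
    def state(self) -> Dict[str, float]:   # e.g. {"joy": 0.7, "fear": 0.1}
        ...

# Hypothetical contribution of each internal state to facial actions.
STATE_TO_ACTIONS = {
    "joy":  {"smile": 1.0, "brow_raise": 0.3},
    "fear": {"brow_raise": 0.8, "mouth_open": 0.5},
}

def expressive_pose(model: EmotionModel) -> Dict[str, float]:
    """Blend facial action targets weighted by the model's current state."""
    pose: Dict[str, float] = {}
    for state_name, intensity in model.state().items():
        for action, weight in STATE_TO_ACTIONS.get(state_name, {}).items():
            pose[action] = min(1.0, pose.get(action, 0.0) + intensity * weight)
    return pose
```

Because the renderer consumes only the resulting pose dictionary, swapping in a different emotion model (or directly inspecting and overriding its states) requires no change on the expressive side.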

ExpressionBot: An emotive lifelike robotic face for face-to-face communication

2014 IEEE-RAS International Conference on Humanoid Robots, 2014

This article proposes an emotive lifelike robotic face, called ExpressionBot, designed to support verbal and non-verbal communication between the robot and humans, with the goal of closely modeling the dynamics of natural face-to-face communication. The proposed robotic head consists of two major components: 1) a hardware component that contains a small projector, a fish-eye lens, a custom-designed mask, and a neck system with 3 degrees of freedom; 2) a facial animation system, projected onto the robotic mask, that is capable of presenting facial expressions, realistic eye movement, and accurate visual speech. We present three studies that compare Human-Robot Interaction with the robotic face to Human-Computer Interaction with a screen-based model of the avatar. The studies indicate that the robotic face is well accepted by users, with some advantages in recognition of facial expressions and mutual eye-gaze contact.

Parameterized Facial Animation for Socially Interactive Robots

Mensch und Computer 2015 – Tagungsband, 2015

Socially interactive robots with human-like speech synthesis and recognition, coupled with humanoid appearance, are an important subject of robotics and artificial intelligence research. Modern solutions have matured enough to provide simple services to human users. To make interaction with them as fast and intuitive as possible, researchers strive to create transparent interfaces close to human-human interaction. Because facial expressions play a central role in human-human communication, robot faces have been implemented with varying degrees of human-likeness and expressiveness. We propose a way to implement a program that believably animates changing facial expressions and allows them to be influenced via inter-process communication based on an emotion model. This can be used to create a screen-based virtual face for a robotic system with an inviting appearance that stimulates users to seek interaction with the robot.
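The inter-process interface could take many forms; a minimal sketch under assumed conventions (JSON over UDP on a local port, a valence/arousal message format, and a fixed easing factor, none of which are specified by the paper) looks like this:

```python
# Sketch: an animation process receives emotion-model updates over IPC
# and eases the displayed expression toward the requested target, so
# expression changes appear gradual and believable.
import json
import socket

def serve(port: int = 5005) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    current = {"valence": 0.0, "arousal": 0.0}
    while True:
        data, _ = sock.recvfrom(1024)
        target = json.loads(data)        # e.g. {"valence": 0.8, "arousal": 0.4}
        for key in current:              # ease toward the target each update
            current[key] += 0.2 * (target.get(key, current[key]) - current[key])
        print("render expression for", current)  # stand-in for the renderer

if __name__ == "__main__":
    serve()
```

Any process on the machine (dialogue manager, emotion model, debugging tool) can then steer the face by sending a small JSON datagram.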

Expressive Robotic Face for Interaction

2006

This paper describes the current development status of a robot head with basic interactional abilities. On a theoretical level, we propose an explanation for the lack of robustness evident in so-called social robots: our social abilities are mainly unconscious to us, and this lack of knowledge about the form of a solution leads to fragile behaviour. Therefore, the engineering point of view must be taken seriously, and not only insights from human disciplines like developmental psychology or ethology. Our robot, built upon this idea, has no definite task except to interact with people. Its perceptual abilities include sound localization, omnidirectional vision, face detection, an attention module, memory, and habituation. The robot has facial features that can display basic emotional expressions, and it can speak canned text through a TTS. The robot's behavior is controlled by an action selection module, reflexes, and a basic emotional module.

Enhancing Human-Robot Interaction by a Robot Face with Facial Expressions and Synchronized Lip Movements

2013

With service robots becoming increasingly elaborate for higher-level tasks, human-robot interaction is moving into the focus of robotics research. In this paper we present an animated robot face as a convenient way of interacting with robots. Our robot face can show seven different facial expressions, thus providing a robot with the ability to express emotions. This capability is crucial for robots to be accepted as everyday companions in domestic environments. Aiming towards a more realistic interaction experience, our robot face moves its lips synchronously with the synthesized speech. In a broad user study with 100 subjects we tested the emotions conveyed by the robot face. The results indicate that our robot face enhances human-robot interaction by providing the robot with the ability to express emotions. The presented robot face is highly customizable. It is available for ROS and can be used with any robot that integrates ROS in its architecture. Further information is available at http:/...
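As a ROS package, such a face would naturally be driven over topics. The sketch below is a hypothetical rospy node, not the package's actual interface: the topic names and the crude word-to-mouth-shape rule are assumptions standing in for proper phoneme-timed viseme selection.

```python
# Sketch: a ROS node that listens for spoken text and publishes coarse
# mouth-shape commands while the utterance plays, approximating lip sync.
import rospy
from std_msgs.msg import String

def viseme_for(word: str) -> str:
    """Very coarse stand-in for phoneme-based viseme selection."""
    return "open" if any(v in word for v in "aeiou") else "closed"

def on_speech(msg: String) -> None:
    for word in msg.data.split():
        pub.publish(String(data=viseme_for(word)))
        rospy.sleep(0.2)                 # rough per-word pacing
    pub.publish(String(data="closed"))   # rest position after speaking

if __name__ == "__main__":
    rospy.init_node("lip_sync_sketch")
    pub = rospy.Publisher("/robot_face/viseme", String, queue_size=10)
    rospy.Subscriber("/tts/spoken_text", String, on_speech)
    rospy.spin()
```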

Robot social design - Facial expressions

This paper addresses the design of an emotional robot able to mimic some human expressions using servomotors. The work focuses on imitating the movements of the eyes and mouth in order to produce a face capable of facial expressions. The modeling was done with versatility in mind, so as to approximate the human face and imitate its movements. Upon completion, all mechanisms of the emotional robot will be fabricated with a 3D printing system and assembled. The goal is a model of an emotional robot that can imitate the movements of a person's eyes, mouth, arms, and fingers to generate different expressions, such as joy, anger, sadness, and fear.
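A servo-driven face of this kind typically reduces each expression to a set of target angles. The sketch below is purely illustrative: the servo names, angles, and the `set_servo` placeholder are hypothetical and would map to a real PWM servo driver on actual hardware.

```python
# Sketch: each expression is a table of target angles for the eye/mouth
# servos; showing an expression means driving every servo to its target.
EXPRESSIONS = {
    "joy":     {"brow_left": 60,  "brow_right": 60,  "mouth": 120},
    "anger":   {"brow_left": 120, "brow_right": 120, "mouth": 80},
    "sadness": {"brow_left": 100, "brow_right": 100, "mouth": 70},
    "fear":    {"brow_left": 70,  "brow_right": 70,  "mouth": 100},
}

def set_servo(name: str, angle: int) -> None:
    print(f"servo {name} -> {angle} deg")   # placeholder for PWM output

def show(expression: str) -> None:
    for servo, angle in EXPRESSIONS[expression].items():
        set_servo(servo, angle)

show("joy")
```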

Design and testing of a hybrid expressive face for a humanoid robot

2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010

The BERT2 social robot, a platform for the exploration of human-robot interaction, is currently being built at the Bristol Robotics Laboratory. This paper describes work on the robot's face, a hybrid face composed of a plastic faceplate and an LCD display, and our implementation of facial expressions on this versatile platform. We report the implementation of two representations of affect space, each of which maps the space of potential emotions to specific facial feature parameters, and the results of a series of human-robot interaction experiments characterizing the recognizability of the robot's archetypal facial expressions. Subjects' recognition of the implemented facial expressions for happy, surprised, and sad was robust (with nearly 100% recognition). Subjects, however, tended to confuse the expressions for disgusted and afraid with other expressions, with correct recognition rates of 21.1% and 52.6% respectively. Future work involves the addition of more realistic eye movements for stronger recognition of certain expressions. These results demonstrate that a hybrid face with affect-space facial expression implementations can provide emotive conveyance readily recognized by human beings.
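An affect-space representation of the kind reported here maps a point in a continuous emotion space to concrete feature parameters. The sketch below assumes a 2D valence/arousal space and linear coefficients chosen for illustration; BERT2's actual calibration is not given in the abstract.

```python
# Sketch: convert a point in valence/arousal affect space into facial
# feature parameters, clamped to a safe actuation range.
def affect_to_features(valence: float, arousal: float) -> dict:
    """valence, arousal in [-1, 1] -> unitless feature parameters."""
    clamp = lambda x: max(-1.0, min(1.0, x))
    return {
        "mouth_curve":  clamp(valence),               # smile vs. frown
        "eye_openness": clamp(0.5 + 0.5 * arousal),   # wide vs. narrow
        "brow_raise":   clamp(0.6 * arousal - 0.2 * valence),
    }

print(affect_to_features(valence=0.9, arousal=0.5))   # roughly "happy"
```

A second representation (e.g. per-emotion prototype poses blended by distance in affect space) would expose the same feature-parameter interface, which is what makes the two mappings directly comparable in user studies.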

Expressive Speech Recognition and Synthesis as Enabling Technologies for Affective Robot-Child Communication

2006

This paper presents our recent and current work on expressive speech synthesis and recognition as enabling technologies for affective robot-child interaction. We show that current expression recognition systems can be used to discriminate between several archetypal emotions, but also that the old adage "there's no data like more data" is more than ever valid in this field. A new speech synthesizer was developed that is capable of high-quality concatenative synthesis. This system will be used in the robot to synthesize expressive nonsense speech by means of prosody transplantation and a recorded database of expressive speech examples. With these enabling components in place, we are getting ready to start experiments towards effective child-machine communication of affect and emotion.
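Prosody transplantation, as named in the abstract, amounts to copying timing and pitch from an expressive exemplar onto the segment sequence the synthesizer will render. The data structures below are illustrative assumptions, not the paper's synthesizer API, and real systems would interpolate F0 contours rather than use one mean target per phone.

```python
# Sketch: overwrite the neutral synthesis plan's per-phone durations and
# pitch targets with those measured on an aligned expressive recording.
from dataclasses import dataclass
from typing import List

@dataclass
class Phone:
    label: str
    duration_s: float   # segment duration in seconds
    f0_hz: float        # mean pitch target in Hz

def transplant(neutral: List[Phone], expressive: List[Phone]) -> List[Phone]:
    """Copy the exemplar's prosody onto the neutral phone sequence."""
    assert len(neutral) == len(expressive), "sequences must align phone-for-phone"
    return [Phone(n.label, e.duration_s, e.f0_hz)
            for n, e in zip(neutral, expressive)]

neutral    = [Phone("b", 0.06, 120.0), Phone("a", 0.10, 118.0)]
expressive = [Phone("b", 0.09, 180.0), Phone("a", 0.16, 210.0)]
print(transplant(neutral, expressive))
```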