Design and implementation of an expressive gesture model for a humanoid robot
Related papers
Expressive gesture model for humanoid robot
Affective Computing and Intelligent …, 2011
This paper presents an expressive gesture model that generates communicative gestures accompanying speech for the humanoid robot Nao. The research focuses mainly on the expressivity of robot gestures and their coordination with speech. To reach this objective, we have extended our existing virtual agent platform GRETA and adapted it to the robot. Gestural prototypes are described symbolically and stored in a gestural database, called a lexicon. Given a set of intentions and emotional states to communicate, the system selects corresponding gestures from the robot lexicon. The selected gestures are then planned to synchronize with speech and instantiated as robot joint values, taking into account parameters of gestural expressivity such as temporal extension, spatial extension, fluidity, power, and repetitivity. This paper provides a detailed overview of the proposed model.
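As a minimal sketch of the last step described in this abstract, the snippet below shows how expressivity parameters such as spatial extension, temporal extension, and repetitivity might rescale a symbolic gesture stroke before it is mapped to joint values. The data structures and names are illustrative assumptions, not the actual GRETA/Nao implementation.

```python
# Hypothetical sketch: applying expressivity parameters to a symbolic gesture
# stroke before converting it to robot joint values. All names and structures
# are assumptions made for illustration.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Keyframe:
    time: float                       # seconds from gesture start
    wrist_xyz: Tuple[float, float, float]  # wrist position in a torso-centred frame

@dataclass
class Expressivity:
    spatial: float = 1.0    # amplitude scaling of wrist positions
    temporal: float = 1.0   # speed scaling (>1 means faster execution)
    repetition: int = 0     # number of extra stroke repetitions

def apply_expressivity(stroke: List[Keyframe], e: Expressivity) -> List[Keyframe]:
    """Scale amplitude and timing of a gesture stroke, then repeat it if requested."""
    scaled = [
        Keyframe(time=k.time / e.temporal,
                 wrist_xyz=tuple(c * e.spatial for c in k.wrist_xyz))
        for k in stroke
    ]
    out = list(scaled)
    duration = scaled[-1].time if scaled else 0.0
    for r in range(e.repetition):
        offset = duration * (r + 1)
        out += [Keyframe(k.time + offset, k.wrist_xyz) for k in scaled]
    return out
```

A downstream step (not shown) would then convert each keyframe into Nao joint angles, respecting the robot's joint limits and maximum speeds.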
Journal of Visual Language and Computing
This work describes a module implemented for inclusion in a social humanoid robot architecture, in particular a storyteller robot named NarRob. The module gives a humanoid robot the capability of mimicking and acquiring the motion of a human user in real time, allowing the robot to enlarge its dataset of gestures. The module relies on a Kinect-based acquisition setup. Gestures are acquired by observing the typical gestures displayed by humans. The movements are then annotated by several evaluators according to their particular meaning and organized by typology in the robot's knowledge base. The annotated gestures are then used to enrich the narration of stories. During narration, the robot semantically analyses the textual content of the story to detect meaningful terms in the sentences and emotions that can be expressed. This analysis drives the choice of the gesture that accompanies each sentence as the story is read, as sketched below.
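A hedged sketch of that selection step: match the terms and emotion detected in a sentence against the annotations attached to each acquired gesture. The database layout, tags, and scoring are assumptions for illustration only.

```python
# Illustrative gesture selection from an annotated gesture database.
# Tags, file names, and the overlap-based score are hypothetical.
from typing import Dict, List, Optional

GESTURE_DB: Dict[str, dict] = {
    "open_arms": {"tags": {"joy", "welcome"}, "motion_file": "open_arms.motion"},
    "head_down": {"tags": {"sadness"}, "motion_file": "head_down.motion"},
    "point_far": {"tags": {"distance", "there"}, "motion_file": "point_far.motion"},
}

def select_gesture(sentence_terms: List[str], emotion: Optional[str]) -> Optional[str]:
    """Return the gesture whose annotations best overlap the sentence analysis."""
    wanted = set(sentence_terms) | ({emotion} if emotion else set())
    best, best_score = None, 0
    for name, entry in GESTURE_DB.items():
        score = len(entry["tags"] & wanted)
        if score > best_score:
            best, best_score = name, score
    return best

# Example: select_gesture(["welcome"], "joy") -> "open_arms"
```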
Expressive Gestures Displayed by a Humanoid Robot during a Storytelling Application
New Frontiers in …, 2010
Our purpose is to have the humanoid robot NAO read a story aloud through expressive verbal and nonverbal behaviors. These behaviors are linked to the story being told and the emotions to be conveyed. To this aim, we are gathering a repertoire of gestures and body postures for the robot using a video corpus of storytellers. The robot's behavior is controlled by the Greta platform, which drives agents to communicate through verbal and nonverbal means. In this paper we present our research methodology and overall framework.
Evaluating an Expressive Gesture Model for a Humanoid Robot: Experimental Results
2012
This article presents the results obtained from an experiment on our expressive gesture model for humanoid robots. The work took place within the French research project GVLEX, which aims at equipping a humanoid robot with the capacity to produce gestures while telling a story to children. To do that, we have extended an existing virtual agent system, GRETA, and adapted it to real robots, which have certain limits on movement space and speed.
Synchronized Gesture and Speech Production for Humanoid Robots
We present a model that is capable of synchronizing expressive gestures with speech. The model, implemented on a Honda humanoid robot, can generate a full range of gesture types, such as emblems, iconic and metaphoric gestures, deictic pointing, and beat gestures. Arbitrary input text is analyzed with a part-of-speech tagger and a text-to-speech engine to obtain timing information for spoken words. In addition, style tags can optionally be added to specify the level of excitement or topic changes. The text, combined with any tags, is then processed by several grammars, one for each gesture type, to produce several candidate gestures for each word of the text. The model then selects probabilistically among the gesture types based on the desired degree of expressivity. Each selected gesture type corresponds to a particular gesture template, consisting of trajectory curves that define the gesture. Speech timing patterns and style parameters are used to modulate the shape of the curve before it is sent to the whole-body control system on the robot. An evaluation of the model's parameters was performed, demonstrating the ability of observers to differentiate varying levels of expressiveness, excitement, and speech synchronization. Modifying gesture speed for trajectory tracking showed that positive associations such as happiness and excitement accompanied faster speeds, while negative associations such as sadness or tiredness occurred at slower speeds.
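The probabilistic selection among candidate gesture types could look roughly like the sketch below, where a desired expressivity level biases the draw away from plain beats toward richer gesture types. The weighting scheme and type names are assumptions, not the paper's actual method.

```python
# Hedged sketch of expressivity-weighted gesture-type selection.
# Base weights and the biasing rule are illustrative assumptions.
import random
from typing import List, Optional

BASE_WEIGHTS = {"emblem": 1.0, "iconic": 1.0, "metaphoric": 1.0,
                "deictic": 1.0, "beat": 1.0}

def pick_gesture_type(candidates: List[str], expressivity: float,
                      rng: random.Random = random.Random()) -> Optional[str]:
    """Draw one gesture type; higher expressivity favours non-beat gestures."""
    if not candidates:
        return None
    weights = []
    for c in candidates:
        w = BASE_WEIGHTS.get(c, 1.0)
        if c != "beat":
            w *= (1.0 + expressivity)   # more expressive -> richer gestures
        weights.append(w)
    return rng.choices(candidates, weights=weights, k=1)[0]

# Example: pick_gesture_type(["beat", "iconic"], expressivity=0.8)
```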
Generating co-speech gestures for the humanoid robot NAO through BML
International Gesture Workshop, Athens
We develop an expressive gesture model based on the GRETA platform to generate gestures accompanying speech for different embodiments. This paper presents our ongoing work on an implementation of this model on the humanoid robot NAO. From a specification of multimodal behaviors encoded with the Behavior Markup Language (BML), the system synchronizes and realizes the verbal and nonverbal behaviors on the robot.
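For a sense of what consuming such a specification involves, the sketch below parses a simplified BML-like fragment and collects each gesture's start reference to a speech sync point; the robot controller would then resolve those references against TTS timings. The XML layout shown is a simplified assumption, not the full BML standard.

```python
# Minimal sketch: extract gesture-to-speech sync references from a
# simplified BML-like specification. The fragment's structure is assumed.
import xml.etree.ElementTree as ET

BML = """
<bml>
  <speech id="s1"><text>Look <sync id="tm1"/> over there.</text></speech>
  <gesture id="g1" lexeme="DEICTIC_THERE" start="s1:tm1"/>
</bml>
"""

def gesture_sync_refs(bml_string: str):
    """Return (gesture id, lexeme, start reference) for each gesture element."""
    root = ET.fromstring(bml_string)
    return [(g.get("id"), g.get("lexeme"), g.get("start"))
            for g in root.iter("gesture")]

print(gesture_sync_refs(BML))
# [('g1', 'DEICTIC_THERE', 's1:tm1')] -- actual timings would come from TTS
```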
Generation and Evaluation of Communicative Robot Gesture
How is communicative gesture behavior in robots perceived by humans? Although gesture is a crucial feature of social interaction, this research question is still largely unexplored in the field of social robotics. Thus the main objective of the present work is to address this issue by shedding light on how gestural machine behaviors can ultimately be used to design more natural communication in social robots. The chosen approach is twofold. Firstly, the technical challenges encountered when implementing a speech-gesture generation model on a robotic platform have to be tackled. We present a framework that enables the Honda humanoid robot to flexibly produce synthetic speech and co-verbal hand and arm gestures at run-time, while not being limited to a pre-defined repertoire of motor actions. Secondly, the achieved flexibility in robot gesture should be exploited for controlled experiments. In an experimental study using the Honda humanoid robot, we investigated how humans perceive and evaluate various gestural patterns performed by the robot as they interact in a situational context. Our findings reveal that the robot is evaluated more positively when nonverbal behaviors such as hand and arm gestures are displayed along with speech, even if they do not semantically match the spoken utterance.