Intuitive Multimodal Interaction with Communication Robot Fritz
Related papers
Intuitive multimodal interaction and predictable behavior for the museum tour guide robot Robotinho
2010
Deploying robots at public places exposes highly complex systems to a variety of potential interaction partners of all ages and with different technical backgrounds. Most of these individuals may never have interacted with a robot before. This raises the need for robots with an intuitive user interface that is usable without prior training. Furthermore, predictable robot behavior is essential to allow for cooperative behavior on the human side. Humanoid robots are advantageous for this purpose, as they look familiar to persons without robotics experience. Moreover, they can mimic human motions and behaviors, allowing for intuitive human-robot interaction. In this paper, we present our communication robot Robotinho. Robotinho is an anthropomorphic robot equipped with an expressive communication head. Its multimodal dialog system incorporates body language, gestures, facial expressions, and speech. We describe the behaviors used to interact with inexperienced users in a museum tour guide scenario. In contrast to previous work, our robot interacts with the visitors not only at the exhibits, but also while it is navigating to the next exhibit. We evaluated our system in a science museum and report quantitative and qualitative feedback from the users.
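As a rough illustration of how such a multimodal dialog system can fan a single dialog act out to several output channels, here is a minimal Python sketch. The class and channel names are hypothetical placeholders, not Robotinho's actual software; a real robot would forward the calls to its TTS engine, motion controller, and face animation.

```python
import time
from dataclasses import dataclass

@dataclass
class Utterance:
    """One multimodal dialog act: text plus accompanying nonverbal behaviors."""
    text: str
    gesture: str = "none"          # e.g. "wave", "point_left"
    expression: str = "neutral"    # e.g. "happy", "surprised"

class MultimodalOutput:
    """Dispatches an utterance to the face, gesture, and speech channels."""
    def say(self, u: Utterance) -> None:
        self.set_expression(u.expression)
        self.start_gesture(u.gesture)   # gestures run while speech plays
        self.speak(u.text)

    def set_expression(self, name: str) -> None:
        print(f"[face]    {name}")

    def start_gesture(self, name: str) -> None:
        print(f"[gesture] {name}")

    def speak(self, text: str) -> None:
        print(f"[speech]  {text}")
        time.sleep(0.01)  # stand-in for the actual TTS duration

if __name__ == "__main__":
    out = MultimodalOutput()
    out.say(Utterance("Welcome to the museum!", gesture="wave", expression="happy"))
    out.say(Utterance("Please follow me to the next exhibit.", gesture="point_left"))
```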
Intuitive Multimodal Interaction with Communication Robot Fritz
Humanoid Robots, Human-like Machines, 2007
One of the most important motivations for many humanoid robot projects is that robots with a human-like body and human-like senses could in principle be capable of intuitive multimodal communication with people. The general idea is that by mimicking the way humans interact with each other, it becomes possible to transfer the efficient and robust communication strategies humans use among themselves to the man-machine interface. This includes the use of multiple modalities, such as speech, facial expressions, gestures, and body language. If successful, this approach yields a user interface that leverages the evolution of human communication and is intuitive to naïve users, as they have practiced it since early childhood. We work towards intuitive multimodal communication in the domain of a museum guide robot. This application requires interacting with multiple unknown persons. Testing communication robots in science museums and at science fairs is popular because the robots there encounter many new interaction partners who have a general interest in science and technology. Here, we present the humanoid communication robot Fritz, which we developed as the successor to the communication robot Alpha (Bennewitz et al., 2005). Fritz uses speech, facial expressions, eye-gaze, and gestures to interact with people. Depending on the audiovisual input, our robot shifts its attention between different persons in order to involve them in an interaction. It performs human-like arm gestures during the conversation and also uses pointing gestures generated with its eyes, head, and arms to direct the attention of its communication partners towards the exhibits being explained. To express its emotional state, the robot generates facial expressions and adapts the speech synthesis.

The remainder of the chapter is organized as follows. The next section reviews some of the related work. The mechanical and electrical design of Fritz is covered in Sec. 3. Sec. 4 details the perception of the human communication partners. Sec. 5 explains the robot's attentional system. The generation of arm gestures and of facial expressions is presented in Sec. 6 and 7, respectively. Finally, in the experimental section, we discuss experiences made during public demonstrations of our robot.

2. Related Work

Many research groups worldwide work on intuitive multimodal communication between humanoid robots and humans. Some example projects are the Leonardo robot at MIT
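The attention-shifting behavior described above, choosing whom to face based on audiovisual input, can be approximated with a per-person salience score plus a switching threshold that keeps attention from flickering. A minimal sketch with illustrative weights and a hypothetical Person structure, not Fritz's actual attentional system:

```python
from dataclasses import dataclass

@dataclass
class Person:
    pid: int
    visual_salience: float  # e.g. how frontal/close the face is, in [0, 1]
    audio_salience: float   # e.g. agreement with the localized sound source, in [0, 1]

def select_focus(people, current_focus, w_visual=0.5, w_audio=0.5, switch_margin=0.15):
    """Pick the person to attend to. Weights and hysteresis margin are
    illustrative values chosen for this sketch."""
    if not people:
        return None
    def score(p):
        return w_visual * p.visual_salience + w_audio * p.audio_salience
    best = max(people, key=score)
    if current_focus is not None:
        cur = next((p for p in people if p.pid == current_focus), None)
        # only switch if the challenger is clearly more salient
        if cur is not None and score(best) - score(cur) < switch_margin:
            return current_focus
    return best.pid

people = [Person(1, 0.8, 0.1), Person(2, 0.6, 0.9)]
print(select_focus(people, current_focus=1))  # -> 2 (person 2 is speaking)
```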
Towards a humanoid museum guide robot that interacts with multiple persons
5th IEEE-RAS International Conference on Humanoid Robots, 2005
The purpose of our research is to develop a humanoid museum guide robot that performs intuitive, multimodal interaction with multiple persons. In this paper, we present a robotic system that makes use of visual perception, sound source localization, and speech recognition to detect, track, and involve multiple persons in an interaction. Depending on the audio-visual input, our robot shifts its attention between different persons. In order to direct the attention of its communication partners towards exhibits, our robot performs gestures with its eyes and arms. As we demonstrate in practical experiments, our robot is able to interact with multiple persons in a multimodal way and to shift its attention between different people. Furthermore, we discuss experiences made during a two-day public demonstration of our robot.
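Directing eyes and arms toward an exhibit ultimately reduces to computing the pan and tilt that orient the head toward a 3-D target point. A minimal kinematic sketch; the frame convention and function name are assumptions, and a real controller would also respect joint limits and blend eye and head motion:

```python
import math

def gaze_angles(head_xyz, target_xyz):
    """Pan/tilt (yaw/pitch) that orient the head toward a target point,
    both given in the same robot-centric frame: x forward, y left, z up."""
    dx = target_xyz[0] - head_xyz[0]
    dy = target_xyz[1] - head_xyz[1]
    dz = target_xyz[2] - head_xyz[2]
    pan = math.atan2(dy, dx)                    # rotate left/right
    tilt = math.atan2(dz, math.hypot(dx, dy))   # look up/down
    return pan, tilt

# Exhibit 2 m ahead, 1 m to the left, slightly below head height:
pan, tilt = gaze_angles((0.0, 0.0, 1.4), (2.0, 1.0, 1.0))
print(f"pan={math.degrees(pan):.1f} deg, tilt={math.degrees(tilt):.1f} deg")
```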
Human-like Interaction Skills for the Mobile Communication Robot Robotinho
International Journal of Social Robotics (SORO), 2013
The operation of robotic tour guides in public museums leads to a variety of interactions between these complex technical systems and humans of all ages and with different technical backgrounds. Interacting with a robot is a new experience for many visitors. An intuitive user interface, preferably one that resembles the interaction between human tour guides and visitors, simplifies the communication between robot and visitors. Predictable robot behavior is also necessary to allow the guided persons to behave supportively. Humanoid robots can mimic human motions and behaviors and look familiar to users who have not interacted with robots before. Hence, they are particularly well suited for this purpose. In this work, we present our anthropomorphic mobile communication robot Robotinho. It is equipped with an expressive communication head to display emotions. Its multimodal dialog system incorporates gestures, facial expressions, body language, and speech. We describe the behaviors that we developed for interaction with inexperienced users in a museum tour guide scenario. In contrast to prior work, Robotinho communicated with the guided persons during navigation between exhibits, not only while explaining an exhibit. We report qualitative and quantitative results from evaluations of Robotinho in RoboCup@Home competitions and in a science museum.
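The distinguishing behavior here, keeping the interaction alive while moving between exhibits, can be pictured as a small state machine in which the navigation state still produces dialog. An illustrative reconstruction; the state names and transition conditions are our own, not Robotinho's actual controller:

```python
from enum import Enum, auto

class TourState(Enum):
    IDLE = auto()
    NAVIGATING = auto()   # moving to the next exhibit, still talking to the group
    EXPLAINING = auto()   # standing at an exhibit, presenting it

def step(state, at_exhibit, explanation_done, tour_left):
    """One transition of a minimal tour-guide state machine. The key point is
    that interaction does not pause in NAVIGATING: the robot keeps commenting
    and checking that the visitors follow."""
    if state is TourState.IDLE and tour_left:
        return TourState.NAVIGATING
    if state is TourState.NAVIGATING and at_exhibit:
        return TourState.EXPLAINING
    if state is TourState.EXPLAINING and explanation_done:
        return TourState.NAVIGATING if tour_left else TourState.IDLE
    return state

s = TourState.IDLE
s = step(s, at_exhibit=False, explanation_done=False, tour_left=True)
print(s)  # TourState.NAVIGATING
```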
Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot
IEEE Transactions on Robotics, 2007
In this paper we present our work on building technologies for natural multimodal human-robot interaction. We present our systems for spontaneous speech recognition, multimodal dialogue processing, and visual perception of a user, which includes localization, tracking, and identification of the user, recognition of pointing gestures, as well as recognition of a person's head orientation. Each of the components is described in the paper and experimental results are presented. We also present several experiments on multimodal human-robot interaction, such as interaction using speech and gestures, the automatic determination of the addressee during human-human-robot interaction, as well as interactive learning of dialogue strategies. The work and components presented here constitute the core building blocks for audiovisual perception of humans and multimodal human-robot interaction used for the humanoid robot developed within the German research project (Sonderforschungsbereich) on humanoid cooperative robots.
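Pointing gesture recognition of the kind described above often reduces to casting a ray along the forearm and intersecting it with a surface. A hedged sketch of that geometric step; the plane height and joint names are assumptions, and the cited system additionally fuses head orientation, which we omit:

```python
import numpy as np

def pointing_target(elbow, hand, plane_z=0.75):
    """Estimate the pointed-at location by casting a ray from the elbow
    through the hand and intersecting it with a horizontal plane
    (here a table surface at 0.75 m)."""
    elbow, hand = np.asarray(elbow, float), np.asarray(hand, float)
    direction = hand - elbow
    if abs(direction[2]) < 1e-6:
        return None  # arm parallel to the plane: no intersection
    t = (plane_z - hand[2]) / direction[2]
    if t < 0:
        return None  # pointing away from the plane
    return hand + t * direction

# Hand below and in front of the elbow, pointing down toward the table:
print(pointing_target(elbow=(0.2, 0.0, 1.3), hand=(0.5, 0.1, 1.1)))
```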
Multimodal conversational interaction with a humanoid robot
2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), 2012
The paper presents a multimodal conversational interaction system for the Nao humanoid robot. The system was developed at the 8th International Summer Workshop on Multimodal Interfaces, Metz, 2012. We implemented WikiTalk, an existing spoken dialogue system for open-domain conversations, on Nao. This greatly extended the robot's interaction capabilities by enabling Nao to talk about an unlimited range of topics. In addition to speech interaction, we developed a wide range of multimodal interactive behaviours by the robot, including face-tracking, nodding, communicative gesturing, proximity detection, and tactile interrupts. We made video recordings of user interactions and used questionnaires to evaluate the system. We further extended the robot's capabilities by linking Nao with Kinect.
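The WikiTalk idea, speaking Wikipedia content in user-steerable chunks, can be sketched in a few lines. This sketch assumes the third-party `wikipedia` Python package (pip install wikipedia) as a stand-in for the workshop system's own article access, and `print` as a stand-in for Nao's text-to-speech:

```python
import re
import wikipedia

def wiki_sentences(topic: str, n: int = 3):
    """Fetch an article summary and split it into sentences.

    The naive regex split is a placeholder; the real system also segments
    the text for prosody and gesture timing."""
    summary = wikipedia.summary(topic, auto_suggest=False)
    return re.split(r"(?<=[.!?])\s+", summary)[:n]

def talk_about(topic: str) -> None:
    for sentence in wiki_sentences(topic):
        print(f"[nao says] {sentence}")
        # here the robot would nod, gesture, and listen for "continue"/"stop"

if __name__ == "__main__":
    talk_about("Humanoid robot")
```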
The humanoid museum tour guide Robotinho
2009
Wheeled tour guide robots have already been deployed at various museums and fairs worldwide. A key requirement for successful tour guide robots is to interact with people and to entertain them. Most previous tour guide robots, however, focused more on the navigation task involved than on natural interaction with humans. Humanoid robots, on the other hand, offer great potential for investigating intuitive, multimodal interaction between humans and machines. In this paper, we present our mobile full-body humanoid tour guide robot Robotinho. We provide mechanical and electrical details and cover perception, the integration of multiple modalities for interaction, navigation control, and system integration aspects. The multimodal interaction capabilities of Robotinho have been designed and enhanced according to the questionnaires filled out by the people who interacted with the robot at previous public demonstrations. We present experiences gained during experiments in which untrained users interacted with the robot.
Fritz - A humanoid communication robot
16th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2007
In this paper, we present the humanoid communication robot Fritz. Our robot communicates with people in an intuitive, multimodal way. Fritz uses speech, facial expressions, eye-gaze, and gestures to interact with people. Depending on the audio-visual input, our robot shifts its attention between different persons in order to involve them in the conversation. It performs human-like arm gestures during the conversation and also uses pointing gestures generated with its eyes, head, and arms to direct the attention of its communication partners towards objects of interest. To express its emotional state, the robot generates facial expressions and adapts the speech synthesis. We discuss experiences made during two public demonstrations of our robot.
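The last point, deriving facial expressions and speech-synthesis settings from an emotional state, can be illustrated with a simple valence/arousal mapping. The parameter names and linear ranges below are plausible placeholders for this sketch, not the mapping implemented on Fritz:

```python
def expression_params(valence: float, arousal: float):
    """Map a 2-D emotional state (valence, arousal in [-1, 1]) to face and
    voice parameters: positive valence raises the mouth corners, arousal
    opens the eyes wider and speeds up/raises the voice."""
    return {
        "mouth_corners": 0.5 * valence,          # -0.5 (frown) .. 0.5 (smile)
        "eye_openness": 0.6 + 0.4 * arousal,     # 0.2 (sleepy) .. 1.0 (alert)
        "brow_raise": max(0.0, arousal) * 0.5,   # raised brows when excited
        "tts_rate": 1.0 + 0.2 * arousal,         # speech tempo multiplier
        "tts_pitch": 1.0 + 0.15 * valence,       # slightly higher pitch when happy
    }

print(expression_params(valence=0.8, arousal=0.5))   # happy and engaged
print(expression_params(valence=-0.6, arousal=-0.2)) # sad and subdued
```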