Robot command, interrogation and teaching via social interaction

Cognitive Robotics: Command, Interrogation and Teaching in Robot Coaching

Lecture Notes in Computer Science, 2007

The objective of the current research is to develop a generalized approach for human-robot interaction via spoken language that exploits recent developments in cognitive science, particularly notions of grammatical constructions as form-meaning mappings in language, and notions of shared intentions as distributed plans for interaction and collaboration. We demonstrate this approach by distinguishing among three levels of human-robot interaction. The first level is commanding or directing the behavior of the robot. The second is interrogating or requesting an explanation from the robot. The third and most advanced level is teaching the robot a new form of behavior. Within this context, we exploit social interaction by structuring communication around shared intentions that guide the interactions between human and robot. We explore these aspects of communication on two distinct robotic platforms, the Event Perceiver and the Sony AIBO robot, in the context of the RoboCup four-legged soccer league. We conclude with a discussion of the current state of this work.

On the Communicative Aspect of Human-Robot Joint Action

2016

Actions performed in the context of a joint activity comprise two aspects: functional and communicative. The functional component achieves the goal of the action, whereas its communicative component, when present, expresses some information to the actor’s partners in the joint activity. The interpretation of such communication requires leveraging information that is public to all participants, known as common ground. Humans cannot help but infer some meaning – whether or not it was intended by the actor – and so robots must be cognizant of how their actions will be interpreted in context. In this position paper, we address the questions of why and how robots can deliberately utilize this communicative channel on top of normal functional actions to work more effectively with human partners. We examine various human-robot interaction domains, including social navigation and collaborative assembly.

Cooperative Human Robot Interaction Systems: IV. Communication of Shared Plans with Naïve Humans using Gaze and Speech

2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013

Cooperation is at the core of human social life. In this context, two major challenges face research on human-robot interaction: the first is to understand the underlying structure of cooperation, and the second is to build, based on this understanding, artificial agents that can successfully and safely interact with humans. Here we take a psychologically grounded and human-centered approach that addresses these two challenges. We test the hypothesis that optimal cooperation between a naïve human and a robot requires that the robot can acquire and execute a joint plan, and that it communicates this joint plan through ecologically valid modalities including spoken language, gesture and gaze. We developed a cognitive system that comprises the human-like control of social actions, the ability to acquire and express shared plans, and a language and speech synthesis stage. This cognitive system drove the actions of an iCub humanoid robot that maintained dyadic interactions with a single human actor. To test the psychological validity of our approach we tested 12 naïve subjects in a cooperative task with the robot. We experimentally manipulated the presence of a joint plan (vs. an individual or solo plan), the use of task-oriented gaze and gestures, and the use of language accompanying the unfolding plan. The quality of cooperation was analyzed in terms of proper turn taking, collisions and cognitive errors. Results showed that while successful turn taking could take place without the explicit use of a joint plan and/or its accompanying cues, the presence of a joint plan yielded significantly greater success. One advantage of the solo plan was that the robot was always ready to generate actions, and could thus adapt if the human intervened at the wrong time, whereas with the joint plan the robot expected the human to take his/her turn. Interestingly, when the robot represented the action as involving a joint plan, gaze provided a highly potent nonverbal cue that facilitated successful collaboration and reduced errors in the absence of verbal communication. These results support the cooperative stance in human social cognition, and suggest that cooperative robots should employ joint plans and fully communicate them in order to sustain effective collaboration, while remaining ready to adapt if the human makes a midstream mistake.

Human-robot interaction through spoken language dialogue

2000

The development of robots that are able to accept instructions, via a friendly interface, in terms of concepts that are familiar to a human user remains a challenge. It is argued that designing and building such intelligent robots can be seen as the problem of integrating four main dimensions: human-robot communication, sensory-motor skills and perception, decision-making capabilities, and learning.

Communication in Human-Robot Interaction

Current Robotics Reports

Purpose of Review: To present the multi-faceted aspects of communication between robots and humans (HRI), showing that it is not limited to language-based interaction but includes all aspects relevant to communication among physical beings, exploiting all available sensory channels.

Recent Findings: For specific purposes, machine learning algorithms can be exploited when suitable data sets and algorithms are available.

Summary: Together with linguistic aspects, physical aspects play an important role in HRI and distinguish it from the more limited human-computer interaction (HCI). A review of the recent literature on the exploitation of different interaction channels is presented. The interpretation of signals and the production of appropriate communicative actions require consideration of psychological, sociological, and practical factors, which may affect performance. Communication is just one functionality of an interactive robot and, like all the others, will need to be benchmarked to support the possibility for social robots to reach a real market.

Towards an integrated robotic system for interactive learning in a social context

Intelligent Robots and …, 2006

One ambitious goal in current robotics research is to build robots that can interact with humans in an intuitive way and can do so outside the lab, in real-world situations and environments such as private homes or public places. Toy robots such as Sony's AIBO are already being sold successfully for entertainment purposes, but they usually lack sophisticated human-like interaction capabilities, preventing non-expert users from instructing them to perform useful tasks. We have developed a robot that is capable of processing multi-modal instructions and can therefore be instructed interactively in a social situation. This paper gives an overview of the components of the system and their integration. The system performance is described in detail based on observations from human-robot interactions and processing times, in order to identify critical system components and further research directions. Finally, we report on first human-robot interactions with our robot BIRON (BIelefeld Robot companiON) situated in a real home environment. These interactions demonstrate that our robot is able to interact with users in a real home environment and can thus serve as a basis for comprehensive user studies focusing on embodied interaction for social learning.
