A Dialogue Control Model Based on Ambiguity Evaluation of Users' Instructions and Stochastic Representation of Experiences

Human-robot interaction through spoken language dialogue

2000

The development of robots that are able to accept instructions, via a friendly interface, in terms of concepts that are familiar to a human user remains a challenge. It is argued that designing and building such intelligent robots can be seen as the problem of integrating four main dimensions: human-robot communication, sensory-motor skills and perception, decision-making capabilities, and learning.

Going Beyond Literal Command-Based Instructions: Extending Robotic Natural Language Interaction Capabilities

Proceedings of the AAAI Conference on Artificial Intelligence

The ultimate goal of human natural language interaction is to communicate intentions. However, these intentions are often not directly derivable from the semantics of an utterance (e.g., when linguistic modulations are employed to convey politeness, respect, and social standing). Robotic architectures with simple command-based natural language capabilities are thus not equipped to handle more liberal, yet natural uses of linguistic communicative exchanges. In this paper, we propose novel mechanisms for inferring intentions from utterances and generating clarification requests that will allow robots to cope with a much wider range of task-based natural language interactions. We demonstrate the potential of these inference algorithms for natural human-robot interactions by running them as part of an integrated cognitive robotic architecture on a mobile robot in a dialogue-based instruction task.
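As a rough illustration of the inference-plus-clarification idea described above (not the authors' algorithm), the following sketch scores candidate intentions for an utterance and asks a clarification question when no single intention is clearly supported; the rules, action names, and threshold are invented for the example.

```python
# Minimal sketch: keyword-based intention scoring with a clarification
# fallback. Everything here is illustrative, not the paper's mechanism.

from dataclasses import dataclass

@dataclass
class Intention:
    action: str
    score: float  # heuristic confidence in [0, 1]

def infer_intentions(utterance: str) -> list[Intention]:
    """Tiny stand-in for pragmatic intention inference."""
    text = utterance.lower()
    candidates = []
    if "could you" in text or "would you" in text:
        # Polite indirect request: literally a question, intended as a command.
        candidates.append(Intention(action="execute_request", score=0.8))
        candidates.append(Intention(action="answer_ability_question", score=0.3))
    if "door" in text:
        candidates.append(Intention(action="open_door", score=0.6))
    return candidates or [Intention(action="unknown", score=0.0)]

def respond(utterance: str, threshold: float = 0.7) -> str:
    ranked = sorted(infer_intentions(utterance), key=lambda i: i.score, reverse=True)
    best = ranked[0]
    if best.score < threshold:
        return f"Clarification: did you want me to {best.action.replace('_', ' ')}?"
    return f"Executing: {best.action}"

print(respond("Could you open the door?"))  # -> Executing: execute_request
print(respond("The door is over there."))   # -> Clarification: did you want me to open door?
```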

Robot, asker of questions

Robotics and Autonomous Systems, 2003

Collaborative control is a teleoperation system model based on human-robot dialogue. With this model, the robot asks questions of the human in order to obtain assistance with cognition and perception. This enables the human to function as a resource for the robot, helping to compensate for limitations of autonomy. To understand how collaborative control influences human-robot interaction, we performed a user study based on contextual inquiry (CI). The study revealed that: (1) dialogue helps users understand problems encountered by the robot and (2) human assistance is a limited resource that must be carefully managed.
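A minimal sketch of the "human assistance as a limited resource" idea, not the paper's implementation: robot questions are queued with an estimated value, and only the highest-value ones are dispatched within an attention budget. The class name, value heuristic, and budget are assumptions.

```python
# Illustrative question manager: the robot queues questions and only the
# most valuable ones reach the human operator.

import heapq

class QuestionManager:
    def __init__(self, attention_budget: int):
        self.attention_budget = attention_budget  # max questions per episode
        self._queue: list[tuple[float, int, str]] = []
        self._counter = 0

    def ask(self, question: str, value: float) -> None:
        """Queue a question; 'value' estimates how much the answer would help."""
        heapq.heappush(self._queue, (-value, self._counter, question))
        self._counter += 1

    def dispatch(self) -> list[str]:
        """Return the questions actually sent to the human, best first."""
        sent = []
        while self._queue and len(sent) < self.attention_budget:
            _, _, question = heapq.heappop(self._queue)
            sent.append(question)
        return sent

qm = QuestionManager(attention_budget=2)
qm.ask("Is this obstacle safe to drive over?", value=0.9)
qm.ask("Is the lighting here too dark for my camera?", value=0.4)
qm.ask("Am I stuck, or is the wheel just slipping?", value=0.7)
print(qm.dispatch())  # only the two highest-value questions reach the operator
```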

A dialogue system for multimodal human-robot interaction

Proceedings of the 15th ACM on International conference on multimodal interaction - ICMI '13, 2013

This paper presents a POMDP-based dialogue system for multimodal human-robot interaction (HRI). Our aim is to exploit a dialogical paradigm to allow natural and robust interaction between the human and the robot. The proposed dialogue system should improve the robustness and the flexibility of the overall interactive system, including multimodal fusion, interpretation, and decision-making. The dialogue is represented as a partially observable Markov decision process (POMDP) to cast the inherent communication ambiguity and noise into the dialogue model. POMDPs have been used in spoken dialogue systems, mainly for tourist information services, but their application to multimodal human-robot interaction is novel. This paper presents the proposed model for dialogue representation and the methodology used to compute a dialogue strategy. The whole architecture has been integrated on a mobile robot platform and has been tested in a human-robot interaction scenario to assess the overall performance with respect to baseline controllers.
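To make the POMDP framing concrete, here is a minimal sketch assuming a small discrete set of user goals: a Bayesian belief update over the user's goal from noisy observations, with a simple confirm-or-execute rule standing in for a learned dialogue policy. The goals, observation model, and threshold are illustrative, not the paper's.

```python
# Toy POMDP-style belief tracking over a discrete user goal.

GOALS = ["go_to_kitchen", "go_to_office"]

# P(observation | goal): a toy observation model standing in for
# fused speech and gesture recognition scores.
OBS_MODEL = {
    "heard_kitchen": {"go_to_kitchen": 0.8, "go_to_office": 0.2},
    "heard_office":  {"go_to_kitchen": 0.2, "go_to_office": 0.8},
}

def update_belief(belief: dict, observation: str) -> dict:
    """Bayes update of the belief state after one noisy observation."""
    unnormalized = {g: belief[g] * OBS_MODEL[observation][g] for g in GOALS}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}

def choose_action(belief: dict, confirm_threshold: float = 0.85) -> str:
    """Simple policy stand-in: act if confident, otherwise ask to confirm."""
    goal, prob = max(belief.items(), key=lambda kv: kv[1])
    return f"execute:{goal}" if prob >= confirm_threshold else f"confirm:{goal}"

belief = {g: 1.0 / len(GOALS) for g in GOALS}   # uniform prior
belief = update_belief(belief, "heard_kitchen")
print(choose_action(belief))                    # confirm:go_to_kitchen (belief 0.80)
belief = update_belief(belief, "heard_kitchen")
print(choose_action(belief))                    # execute:go_to_kitchen (belief ~0.94)
```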

Balancing Efficiency and Coverage in Human-Robot Dialogue Collection

ArXiv, 2018

We describe a multi-phased Wizard-of-Oz approach to collecting human-robot dialogue in a collaborative search and navigation task. The data is being used to train an initial automated robot dialogue system to support collaborative exploration tasks. In the first phase, a wizard freely typed robot utterances to human participants. For the second phase, this data was used to design a GUI that includes buttons for the most common communications and templates for communications with varying parameters. Comparison of the data gathered in these phases shows that the GUI enabled a faster pace of dialogue while still maintaining high coverage of suitable responses, enabling more efficient, targeted data collection and yielding improvements in natural language understanding with GUI-collected data. As a promising first step towards interactive learning, this work shows that our approach enables the collection of useful training data for navigation-based HRI tasks.
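As a hedged illustration of the second-phase setup, the sketch below models the wizard GUI as a set of fixed buttons plus parameterized templates; the specific messages and slot names are made up for the example.

```python
# Illustrative wizard GUI content: fixed buttons plus templates with slots.

FIXED_BUTTONS = [
    "I am on my way.",
    "I cannot go there.",
    "Please repeat that.",
]

TEMPLATES = {
    "report_object": "I see a {object} near the {landmark}.",
    "report_progress": "I have moved {distance} meters {direction}.",
}

def wizard_message(kind: str, **slots: str) -> str:
    """Render a wizard utterance from a button press or a filled template."""
    if kind == "button":
        return slots["text"]  # expected to be one of FIXED_BUTTONS
    return TEMPLATES[kind].format(**slots)

print(wizard_message("button", text=FIXED_BUTTONS[0]))
print(wizard_message("report_object", object="doorway", landmark="stairs"))
```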

The curious robot - structuring interactive robot learning

IEEE International Conference on Robotics and Automation (ICRA '09), 2009

If robots are to succeed in novel tasks, they must be able to learn from humans. To improve such human-robot interaction, a system is presented that provides dialog structure and engages the human in an exploratory teaching scenario. We specifically target untrained users, who are supported by mixed-initiative interaction using verbal and non-verbal modalities. We present the principles of dialog structuring based on an object learning and manipulation scenario. System development follows an interactive evaluation approach, and we present both an extensible, event-based interaction architecture that realizes mixed-initiative interaction and evaluation results based on a video study of the system. We show that users benefit from the provided dialog structure, resulting in predictable and successful human-robot interaction.
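The following sketch is only an illustration of an event-based, mixed-initiative interaction loop in the spirit of the architecture described above; the event names and handlers are assumptions, not the system's API.

```python
# Illustrative event-based interaction loop where either side can take initiative.

from collections import deque

class InteractionManager:
    def __init__(self):
        self.events = deque()
        self.handlers = {}

    def on(self, event_type, handler):
        """Register a handler for an event type."""
        self.handlers.setdefault(event_type, []).append(handler)

    def post(self, event_type, payload=None):
        """Queue an event raised by the human or by the robot itself."""
        self.events.append((event_type, payload))

    def run(self):
        while self.events:
            event_type, payload = self.events.popleft()
            for handler in self.handlers.get(event_type, []):
                handler(payload)

im = InteractionManager()
# Human initiative: the user names an object, the robot asks to see it.
im.on("user.utterance", lambda text: print(f"Robot: Please show me the {text}."))
# Robot initiative: curiosity about an unknown object drives a question.
im.on("robot.unknown_object", lambda _: print("Robot: What is this called?"))

im.post("user.utterance", "cup")
im.post("robot.unknown_object")
im.run()
```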

Robot command, interrogation and teaching via social interaction

2005

The development of high-performance robot platforms provides complex systems with which humans must interact and levies serious requirements on the quality and depth of these interactions. At the same time, developments in spoken language technology and in theories of social cognition and intentional cooperative behavior provide, respectively, the technical basis and the theoretical background for specifying how such systems can work.