Service Robots Dealing with Indirect Speech Acts

Enabling Robots to Understand Indirect Speech Acts in Task-Based Interactions

Journal of Human-Robot Interaction, 2017

An important open problem for enabling truly taskable robots is the lack of task-general natural language mechanisms within cognitive robot architectures that enable robots to understand typical forms of human directives and generate appropriate responses. In this paper, we first provide experimental evidence that humans tend to phrase their directives to robots indirectly, especially in socially conventionalized contexts. We then introduce pragmatic and dialogue-based mechanisms to infer intended meanings from such indirect speech acts and demonstrate that these mechanisms can handle all indirect speech acts found in our experiment as well as other common forms of requests.
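Indirect speech acts of the conventionalized kind described above ("Could you X?", "I need you to X") are often handled by matching surface forms and recovering the embedded directive. The following is a minimal, hypothetical sketch of that idea; the patterns and function names are illustrative and are not the mechanisms proposed in the paper.

```python
import re

# Illustrative patterns for conventionalized indirect requests.
# Each pattern captures the embedded directive as group 1.
ISA_PATTERNS = [
    re.compile(r"^could you (?:please )?(.+?)\??$", re.IGNORECASE),
    re.compile(r"^i need you to (.+?)\.?$", re.IGNORECASE),
    re.compile(r"^it would be great if you (?:could )?(.+?)\.?$", re.IGNORECASE),
]

def interpret(utterance: str) -> str:
    """Return the inferred directive, falling back to the literal form."""
    for pattern in ISA_PATTERNS:
        match = pattern.match(utterance.strip())
        if match:
            return match.group(1)
    return utterance.strip()  # treat as a direct command

print(interpret("Could you please bring me the mug?"))  # bring me the mug
```

A fixed pattern list obviously cannot cover open-ended pragmatic inference; the paper's point is precisely that dialogue- and context-based mechanisms are needed beyond such lookups.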

Flexible Command Interpretation on an Interactive Domestic Service Robot

In this paper, we propose a system for robust and flexible command interpretation on a mobile robot in domestic service robotics applications. Existing language processing for instructing a mobile robot often makes use of a simple, restricted grammar in which precisely pre-defined utterances are directly mapped to system calls. This does not take the fallibility of human users into account and only allows for binary processing: either a command is part of the grammar and hence understood correctly, or it is not part of the grammar and gets rejected. We model language processing as an interpretation process in which the utterance needs to be mapped to the robot's capabilities. We do so by casting the processing as a (decision-theoretic) planning problem over interpretation actions. This allows for a flexible system that can resolve ambiguities and is also capable of initiating steps to achieve clarification.
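The core of a decision-theoretic treatment of interpretation can be illustrated with a toy expected-utility calculation: given a distribution over candidate capabilities, the system either commits to the best hypothesis or asks a clarification question, whichever pays off more. The utilities, thresholds, and names below are made up for illustration and are not taken from the paper.

```python
# Illustrative utilities (assumed values, not from the paper).
REWARD_CORRECT = 10.0    # executing the intended command
COST_WRONG = -20.0       # executing the wrong command
COST_CLARIFY = -2.0      # cost of asking one clarification question

def choose_action(hypotheses: dict) -> str:
    """Pick 'execute' or 'clarify' by comparing expected utilities."""
    best_cmd, p_best = max(hypotheses.items(), key=lambda kv: kv[1])
    eu_execute = p_best * REWARD_CORRECT + (1 - p_best) * COST_WRONG
    # Simplifying assumption: one clarification fully resolves the ambiguity.
    eu_clarify = COST_CLARIFY + REWARD_CORRECT
    if eu_execute >= eu_clarify:
        return f"execute:{best_cmd}"
    return f"clarify:{best_cmd}"

# Ambiguous utterance: two capabilities are almost equally likely.
print(choose_action({"bring(coffee)": 0.55, "bring(tea)": 0.45}))  # clarify:bring(coffee)
print(choose_action({"bring(coffee)": 0.95, "bring(tea)": 0.05}))  # execute:bring(coffee)
```

The same comparison generalizes to a full planning problem when clarification itself is a sequence of actions with uncertain outcomes rather than a single guaranteed fix.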

Natural Language Interpretation for an Interactive Service Robot in Domestic Domains

Communications in Computer and Information Science, 2013

In this paper, we propose a flexible system for robust natural language interpretation of spoken commands on a mobile robot in domestic service robotics applications. Existing language processing for instructing a mobile robot is often restricted to a simple grammar in which precisely pre-defined utterances are directly mapped to system calls. These approaches do not account for the fallibility of human users and only allow for binary processing of an utterance: either a command is part of the grammar and hence understood correctly, or it is not part of the grammar and gets rejected. We model language processing as an interpretation process in which the utterance needs to be mapped to the robot's capabilities. We do so by casting the processing as a (decision-theoretic) planning problem over interpretation actions. This allows for a flexible system that can resolve ambiguities and is also capable of initiating steps to achieve clarification. We show how we evaluated several versions of the system with multiple utterances of varying complexity as well as with incomplete and erroneous requests.

Going Beyond Literal Command-Based Instructions: Extending Robotic Natural Language Interaction Capabilities

Proceedings of the AAAI Conference on Artificial Intelligence

The ultimate goal of human natural language interaction is to communicate intentions. However, these intentions are often not directly derivable from the semantics of an utterance (e.g., when linguistic modulations are employed to convey politeness, respect, and social standing). Robotic architectures with simple command-based natural language capabilities are thus not equipped to handle more liberal, yet natural uses of linguistic communicative exchanges. In this paper, we propose novel mechanisms for inferring intentions from utterances and generating clarification requests that will allow robots to cope with a much wider range of task-based natural language interactions. We demonstrate the potential of these inference algorithms for natural human-robot interactions by running them as part of an integrated cognitive robotic architecture on a mobile robot in a dialogue-based instruction task.

Learning Interactive Behavior for Service Robots – the Challenge of Mixed-Initiative Interaction

Learning-by-imitation approaches for developing human-robot interaction logic are relatively new, but they have been gaining popularity in the research community in recent years. Learning interaction logic from human-human interaction data provides several benefits over explicit programming, including a reduced level of effort for interaction design and the ability to capture unconscious, implicit social rules that are difficult to articulate or program. In previous work, we have shown a technique capable of learning behavior logic for a service robot in a shopping scenario, based on non-annotated speech and motion data from human-human example interactions. That approach was effective in reproducing reactive behavior, such as question-answer interactions. In our current work (still in progress), we are focusing on reproducing mixed-initiative interactions which include proactive behavior on the part of the robot. We have collected a much more challenging data set featuring high variability of behavior and proactive behavior in response to backchannel utterances. We are currently investigating techniques for reproducing this mixed-initiative behavior and for adapting the robot's behavior to customers with different needs.

Get it right: Improving Comprehensibility with adaptable Speech Expression of a Humanoid Service Robot

Applied Machine Learning and Data Analytics, 2024

As humanoid service robots become more and more visible in public service settings, for instance as guides welcoming visitors or explaining a procedure to follow, it is desirable to improve the comprehensibility of complex issues for human customers and to adapt both the difficulty level of the information provided and the language used to individual requirements. This work examines a case study in which the humanoid social robot Pepper supports customers in a public service environment by offering advice and information. An application architecture is proposed that improves the intelligibility of the information received by providing the option to translate it into easy language and/or into another spoken language.

A Cognitive Approach to Enhancing Human-Robot Interaction for Service Robots

Lecture Notes in Computer Science, 2007

As robots become more intelligent and their application fields continue to grow, the decisions and interactions of robots that share work domains with humans become increasingly important. Traditional robots have received only simple commands, and humans' roles have been limited to that of supervisor. However, for successful task performance, robots' decision making should be approached via collaboration between the human and the robot. Interaction should also be regarded as an issue closely associated with joint work plans rather than as a simple function. Interaction between the human and the robot, moreover, should be systematized in order to decrease the human's workload and maximize user satisfaction. Accordingly, we developed several cognitive models: a task model, a truth-maintenance model, an interaction model, and an intention rule-base. These models can manage and initiate interactions based on their tasks and modify the robot's activities by using the results of the interaction. We demonstrate the adaptability and usability of the developed models by applying them to the home-service robot T-Rot.
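One way such models can interlock is for the truth-maintenance model to flag unknown task preconditions, so that the interaction model asks the user rather than letting the task fail silently. The following sketch is a hypothetical illustration of that coordination; all class, fact, and function names are invented for this example and do not reflect the paper's implementation.

```python
class TruthMaintenance:
    """Tiny stand-in for a truth-maintenance model: facts can be
    True, False, or unknown (absent)."""
    def __init__(self):
        self.facts = {}

    def query(self, fact: str):
        return self.facts.get(fact)  # True, False, or None (unknown)

def run_step(step: str, preconditions: list, tm: TruthMaintenance) -> str:
    """Check each precondition; initiate interaction on unknowns,
    replan on known failures, execute otherwise."""
    for fact in preconditions:
        value = tm.query(fact)
        if value is None:
            return f"ask_user:Is it true that {fact}?"
        if value is False:
            return f"replan:{fact} does not hold"
    return f"execute:{step}"

tm = TruthMaintenance()
tm.facts["door_open(kitchen)"] = True
# cup_on(table) is unknown, so the robot asks instead of acting.
print(run_step("fetch(cup)", ["door_open(kitchen)", "cup_on(table)"], tm))
```

The point of the sketch is the control flow: the interaction is initiated by the task and truth-maintenance state, and its answer would be written back into the fact base to modify the robot's subsequent activities.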

Human-robot interaction through spoken language dialogue

2000

The development of robots that are able to accept instructions, via a friendly interface, in terms of concepts that are familiar to a human user remains a challenge. It is argued that designing and building such intelligent robots can be seen as the problem of integrating four main dimensions: human-robot communication, sensory-motor skills and perception, decision-making capabilities, and learning.

Interaction with Robot Assistants: Commanding ALBERT

International Conference on Intelligent Robots and Systems, 2000

Giving advice to a mobile robot assistant still requires classical user interfaces. A more intuitive way of commanding is achieved through verbal or gesture commands. In this article, we present new approaches and enhancements to established methods in use in our laboratory. Our aim is to interact with a robot using natural and direct communication techniques, and …