Textual Inference and Meaning Representation in Human-Robot Interaction

Abstract Meaning Representation for Human-Robot Dialogue

Language Teaching, 50(2), 189-211, 2017

In this research, we begin to tackle the challenge of natural language understanding (NLU) in the context of the development of a robot dialogue system. We explore the adequacy of Abstract Meaning Representation (AMR) as a conduit for NLU. First, we consider the feasibility of using existing AMR parsers for automatically creating meaning representations for robot-directed transcribed speech data. We evaluate the quality of output of two parsers on this data against a manually annotated gold-standard data set. Second, we evaluate the semantic coverage and distinctions made in AMR overall: how well does it capture the meaning and distinctions needed in our collaborative human-robot dialogue domain? We find that AMR has gaps that align with linguistic information critical for effective human-robot collaboration in search and navigation tasks, and we present task-specific modifications to AMR to address these deficiencies.
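To make the representation concrete, the following is a minimal sketch (not the paper's code) of an AMR graph for a robot-directed utterance such as "Move to the door", encoded as nested Python tuples in the spirit of PENMAN notation; the variable names and role labels are illustrative assumptions:

```python
# PENMAN-style rendering of the assumed graph:
#   (m / move-01
#      :ARG0 (r / robot)
#      :ARG4 (d / door))
# Each node is (variable, concept, [(role, child), ...]).

amr = ("m", "move-01", [
    (":ARG0", ("r", "robot", [])),
    (":ARG4", ("d", "door", [])),
])

def concepts(node):
    """Collect every concept in the graph by depth-first traversal."""
    var, concept, edges = node
    found = [concept]
    for _, child in edges:
        if isinstance(child, tuple):
            found.extend(concepts(child))
    return found

print(concepts(amr))  # ['move-01', 'robot', 'door']
```

A gold-standard annotation and a parser's output can then be compared over such graphs (e.g., with Smatch-style triple matching), which is the kind of evaluation the paper performs against manually annotated data.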

Effective and Robust Natural Language Understanding for Human-Robot Interaction

Robots are slowly becoming part of everyday life, as they are being marketed for commercial applications (e.g., telepresence, cleaning, or entertainment). Thus, the ability to interact with non-expert users is becoming a key requirement. Even if user utterances can be efficiently recognized and transcribed by Automatic Speech Recognition systems, several issues arise in translating them into suitable robotic actions. In this paper, we discuss two existing Natural Language Understanding workflows for Human-Robot Interaction. First, we discuss a grammar-based approach, which recognizes a restricted set of commands defined by hand-written grammars. Then, a data-driven approach, based on a free-form speech recognizer and a statistical semantic parser, is discussed. The main advantages of both approaches are examined, also from an engineering perspective, i.e., considering the effort of realizing HRI systems, as well as their reusability and robustness. An empirical evaluation of the proposed approaches is carried out on several datasets, in order to assess performance and identify possible improvements towards the design of NLP components in HRI.
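The grammar-based approach can be illustrated with a toy sketch (assumed patterns, not the paper's system): a restricted command set recognized by hand-written rules, where anything outside the grammar is rejected. This is exactly the coverage/robustness trade-off the paper weighs against a data-driven statistical parser:

```python
import re

# Hand-written "grammar" as a list of (pattern, action) rules.
# Commands outside this restricted set are simply not recognized.
GRAMMAR = [
    (re.compile(r"^go to the (?P<place>\w+)$"), "GOTO"),
    (re.compile(r"^pick up the (?P<object>\w+)$"), "GRASP"),
]

def parse_command(utterance):
    """Return (action, arguments) if the utterance is in the grammar."""
    for pattern, action in GRAMMAR:
        m = pattern.match(utterance.lower())
        if m:
            return action, m.groupdict()
    return None  # outside the restricted command set

print(parse_command("Go to the kitchen"))   # ('GOTO', {'place': 'kitchen'})
print(parse_command("Could you tidy up?"))  # None
```

A data-driven parser would instead assign a semantic frame to arbitrary transcribed speech, at the cost of requiring annotated training data.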

Towards Goal Inference for Human-Robot Collaboration

2020

Natural language instructions often leave a speaker’s intent underspecified or unclear. We propose a goal inference procedure that extracts user intent using natural language processing techniques. This procedure uses semantic role labeling and synonym generation to extract utterance semantics, and then analyzes a task domain to infer the user’s underlying goal. This procedure is designed as an extension to the MIDCA cognitive architecture that enables human-robot collaboration. In this work, we describe a conceptual model of this procedure, lay out the steps a robot follows to make a goal inference, give an example use case, and describe the procedure’s implementation in a simulated environment. We close with a discussion of the benefits and limitations of this approach. We expect this procedure to improve user satisfaction with agent behavior when compared to plan-based dialogue systems.
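The pipeline described above can be sketched in miniature (hypothetical names and tables, not the MIDCA implementation): a semantic-role frame from an SRL system, normalized through a synonym table, is matched against known domain goals:

```python
# Assumed synonym table: maps surface verbs to canonical domain actions.
SYNONYMS = {"grab": "pickup", "fetch": "pickup", "move": "goto"}

# Assumed domain analysis: (action, object) pairs mapped to goal states.
DOMAIN_GOALS = {
    ("pickup", "block"): "holding(block)",
    ("goto", "table"): "at(robot, table)",
}

def infer_goal(srl_frame):
    """srl_frame: {'predicate': ..., 'ARG1': ...}, as produced by an
    SRL system. Returns the inferred domain goal, or None."""
    verb = SYNONYMS.get(srl_frame["predicate"], srl_frame["predicate"])
    return DOMAIN_GOALS.get((verb, srl_frame["ARG1"]))

print(infer_goal({"predicate": "fetch", "ARG1": "block"}))  # holding(block)
```

The real procedure additionally analyzes the task domain itself rather than relying on a fixed lookup, but the normalize-then-match structure is the same.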

Natural Language For Human Robot Interaction

2015

Natural Language Understanding (NLU) was one of the main original goals of artificial intelligence and cognitive science. This has proven to be extremely challenging and was nearly abandoned for decades. We describe an implemented system that supports full NLU for tasks of moderate complexity. The natural language interface is based on Embodied Construction Grammar and simulation semantics. The system described here supports human dialog with an agent controlling a simulated robot, but is flexible with respect to both input language and output task.

Exploring natural language understanding in robotic interfaces

International Journal of Advances in Intelligent Informatics, 2017

(PDF: http://ijain.org/index.php/IJAIN/article/view/81) Natural Language Understanding is a major aspect of the intelligence of robotic systems. A main goal of improving their artificial intelligence is to allow a robot to ask questions whenever the given instructions are not complete, and also to make use of implicit information. These enhanced communicational abilities can be based on the voids of an output data structure that corresponds to a systemic-semantic model of language communication, as grammar formalism. In addition, the enhancing process also improves the learning abilities of a robot. Accordingly, the experimental project presented herein was conducted using a robot simulated on a plain PC and a simple constructed language that facilitated semantic orientation.

Augmenting Abstract Meaning Representation for Human-Robot Dialogue

Proceedings of the First International Workshop on Designing Meaning Representations

We detail refinements made to Abstract Meaning Representation (AMR) that make the representation more suitable for supporting a situated dialogue system, where a human remotely controls a robot for purposes of search and rescue and reconnaissance. We propose 36 augmented AMRs that capture speech acts, tense and aspect, and spatial information. This linguistic information is vital for representing important distinctions, for example whether the robot has moved, is moving, or will move. We evaluate two existing AMR parsers for their performance on dialogue data. We also outline a model for graph-to-graph conversion, in which output from AMR parsers is converted into our refined AMRs. The design scheme presented here, though task-specific, is extendable for broad coverage of speech acts using AMR in future task-independent work.

Generating Grammars for Natural Language Understanding from Knowledge about Actions and Objects

International Conference on Robotics and Biomimetics (ROBIO), 2015

Many applications in the fields of Service Robotics and Industrial Human-Robot Collaboration require interaction with a human in a potentially unstructured environment. In many cases, a natural language interface can be helpful, but it requires powerful means of knowledge representation and processing, e.g., using ontologies and reasoning. In this paper we present a framework for the automatic generation of natural language grammars from ontological descriptions of robot tasks and interaction objects, and their use in a natural language interface. Robots can use it locally or even share this interface component through the RoboEarth framework in order to benefit from features such as referent grounding, ambiguity resolution, task identification, and task assignment.
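The grammar-generation idea can be sketched as follows (an illustrative assumption about the data, not the paper's framework): known actions and objects, as an ontology might describe them, are turned directly into BNF-style production rules for a command grammar:

```python
# Assumed ontology fragments: actions with their slot types,
# plus the known fillers for each slot type.
ACTIONS = {"bring": ["object"], "goto": ["location"]}
OBJECTS = ["cup", "plate"]
LOCATIONS = ["kitchen", "lab"]

def generate_grammar():
    """Emit BNF-style rules derived from the ontology fragments."""
    rules = ["<command> ::= " + " | ".join(f"<{a}>" for a in ACTIONS)]
    for action, slots in ACTIONS.items():
        rhs = f'"{action}" ' + " ".join(f"<{s}>" for s in slots)
        rules.append(f"<{action}> ::= {rhs}")
    rules.append("<object> ::= " + " | ".join(f'"{o}"' for o in OBJECTS))
    rules.append("<location> ::= " + " | ".join(f'"{l}"' for l in LOCATIONS))
    return rules

for rule in generate_grammar():
    print(rule)
# <command> ::= <bring> | <goto>
# <bring> ::= "bring" <object>
# ...
```

Because the rules are derived rather than hand-written, updating the ontology (e.g., teaching the robot a new object) regenerates the interface automatically, which is what makes the component shareable across robots.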

Dialogue-AMR: Abstract Meaning Representation for Dialogue

International Conference on Language Resources and Evaluation, 2020

This paper describes a schema that enriches Abstract Meaning Representation (AMR) in order to provide a semantic representation for facilitating Natural Language Understanding (NLU) in dialogue systems. AMR offers a valuable level of abstraction of the propositional content of an utterance; however, it does not capture the illocutionary force or speaker's intended contribution in the broader dialogue context (e.g., making a request or asking a question), nor does it capture tense or aspect. We explore dialogue in the domain of human-robot interaction, where a conversational robot is engaged in search and navigation tasks with a human partner. To address the limitations of standard AMR, we develop an inventory of speech acts suitable for our domain, and present "Dialogue-AMR", an enhanced AMR that represents not only the content of an utterance, but the illocutionary force behind it, as well as tense and aspect. To showcase the coverage of the schema, we use both manual and automatic methods to construct the "DialAMR" corpus: a corpus of human-robot dialogue annotated with standard AMR and our enriched Dialogue-AMR schema. Our automated methods can be used to incorporate AMR into a larger NLU pipeline supporting human-robot dialogue.
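The enrichment step can be pictured schematically (the labels below are illustrative assumptions, not the DialAMR annotation scheme itself): the propositional content of a standard AMR is embedded under a speech-act frame that adds the illocutionary force, tense, and aspect that standard AMR lacks:

```python
# A standard AMR for "move to the door", flattened to a dict for brevity.
standard_amr = {"concept": "move-01", ":ARG0": "robot", ":ARG4": "door"}

def to_dialogue_amr(content, speech_act, tense, aspect):
    """Wrap propositional content in a speech-act frame carrying the
    illocutionary force plus tense/aspect annotations (labels assumed)."""
    return {
        "concept": speech_act,  # e.g. a command speech act
        ":ARG2": dict(content, **{":time": tense, ":aspect": aspect}),
    }

enriched = to_dialogue_amr(standard_amr, "command-SA", "future", "performable")
print(enriched["concept"])         # command-SA
print(enriched[":ARG2"][":time"])  # future
```

This is the distinction the schema is designed to preserve: the same content graph, under different speech-act frames and tense/aspect values, represents "move to the door", "are you moving to the door?", or "I moved to the door".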

A Semantic Approach to Enhance Human-Robot Interaction in AmI Environments

This paper presents a semantic approach for human-robot interaction in ambient intelligence (AmI) environments. This approach is intended to provide a natural way for multimodal interactions between human and artificial agents embodied in cognitive companion robots or in any AmI smart device. It is applied for building cognitive assistance services based on semantic observation and communication. The main contribution of this paper is the proposal of a semantic module that allows, on the one hand, converting natural language dialogues to formal n-ary ontology knowledge, and on the other hand, making semantic inferences on this knowledge to drive the dialogues and trigger actions. The target ontology language is NKRL (Narrative Knowledge Representation Language). The latter provides a formal semantic basis allowing narrative representation and reasoning on complex contexts by building spatio-temporal relationships between events. A scenario dedicated to the monitoring a...