Grounding the Interaction: Anchoring Situated Discourse in Everyday Human-Robot Interaction

Dialogue in situated environments: A symbolic approach to perspective-aware grounding, clarification and reasoning for robots

Interacting with a robot in a shared space requires not only a shared model (i.e., environment entities must be described with the same symbols), but also a model of mutual knowledge: both the robot and the human need to figure out what the other knows, sees or can do in the environment in order to behave appropriately. We present here our efforts to let the robot ground verbal interaction with a human in a physical, situated environment. We propose a knowledge-oriented architecture, where perceptions from different points of view (from the robot itself, from the human, etc.) are turned into symbolic facts that are stored in different cognitive models and reused in a newly designed module for dialogue grounding.
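The perspective-taking idea in this abstract can be sketched as a fact store that keeps one symbolic model per agent, so the robot can query what it believes the human perceives. This is a minimal illustration with invented names (`FactBase`, the triple vocabulary), not the paper's actual system:

```python
# Minimal sketch of a perspective-aware fact store: perceptions are turned
# into symbolic triples and kept in one cognitive model per agent.
from collections import defaultdict

class FactBase:
    """One symbolic model per agent (e.g. 'robot', 'human_1')."""
    def __init__(self):
        self.models = defaultdict(set)

    def add(self, model, subj, pred, obj):
        self.models[model].add((subj, pred, obj))

    def query(self, model, subj=None, pred=None, obj=None):
        # Return all triples in the given agent's model matching the pattern.
        return [t for t in self.models[model]
                if (subj is None or t[0] == subj)
                and (pred is None or t[1] == pred)
                and (obj is None or t[2] == obj)]

kb = FactBase()
kb.add("robot", "mug_1", "isOn", "table")        # robot's own perspective
kb.add("human_1", "mug_1", "isVisible", "true")  # human's estimated perspective

# Grounding "the mug you can see" queries the *human's* model, not the robot's:
print(kb.query("human_1", pred="isVisible"))
```

Keeping the models separate is what makes the grounding perspective-aware: the same query against `"robot"` and `"human_1"` can return different facts.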

Anchoring interaction through symbolic knowledge

This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared model of the world that is suitable for later high-level tasks like dialogue understanding, symbolic task planning or reactive supervision. We show how the anchoring process itself fundamentally relies on both the situated and embodied nature of human-robot interactions. We present an implementation, including a specialized symbolic knowledge representation system based on Description Logics, and experiments on several robotic platforms that demonstrate these cognitive capabilities.
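The core service a Description Logics knowledge base provides here is subsumption reasoning: an individual asserted as a `Bottle` is also answerable as a `Container` or `Artifact`. A toy sketch (class names and hierarchy invented for illustration):

```python
# Toy subsumption reasoning in the spirit of a Description Logics KB:
# classifying an individual against a class hierarchy.
SUBCLASS = {"Bottle": "Container", "Container": "Artifact", "Artifact": "Thing"}
INSTANCES = {"bottle_1": "Bottle"}

def ancestors(cls):
    """All superclasses of cls, walking up the hierarchy."""
    out = []
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        out.append(cls)
    return out

def is_instance_of(individual, cls):
    """True if the individual's asserted class is cls or is subsumed by it."""
    direct = INSTANCES[individual]
    return cls == direct or cls in ancestors(direct)

print(is_instance_of("bottle_1", "Artifact"))  # True, via Bottle -> Container -> Artifact
```

Real DL reasoners additionally handle role restrictions, consistency checking and open-world semantics; this only shows the classification step that dialogue grounding relies on.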

Grounded situation models for robots: Bridging language, perception, and action

AAAI-05 Workshop on Modular Construction of …, 2005

Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation called a grounded situation model (GSM), as well as a modular architecture in which the GSM resides in a centrally located module. We present an implemented system that supports a range of conversational and assistive behaviors by a manipulator robot. The robot updates beliefs about its physical environment and body, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction. Most importantly, a novel contribution of our approach is the robot's ability to seamlessly integrate both language and sensor-derived information about the situation: for example, the system can acquire parts of situations either by seeing them or by "imagining" them through descriptions given by the user: "There is a red ball at the left". These situations can later be used to create mental imagery, thus enabling bidirectional translation between perception and language. This work constitutes a step towards robots that use situated natural language grounded in perception and action.
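The distinctive GSM idea, that the same situation store can be populated either by observation or by a verbal description, can be caricatured in a few lines. Everything here (class name, the toy sentence parser, the property keys) is invented for illustration:

```python
# Caricature of a grounded situation model: perceptual and linguistic
# evidence both update one situation store, which then answers questions.
class GSM:
    def __init__(self):
        self.situation = {}  # object name -> {"color": ..., "location": ...}

    def observe(self, obj, **props):
        """Sensor-derived evidence about an object."""
        self.situation.setdefault(obj, {}).update(props)

    def imagine(self, description):
        """Language-derived evidence; handles only 'There is a COLOR OBJ at the LOC'."""
        words = description.rstrip(".").split()
        color, obj, loc = words[3], words[4], words[-1]
        self.situation[obj] = {"color": color, "location": loc}

    def where_is(self, obj):
        return self.situation.get(obj, {}).get("location")

g = GSM()
g.imagine("There is a red ball at the left")  # acquired without ever seeing it
print(g.where_is("ball"))                     # left
```

The point is architectural, not linguistic: because both `observe` and `imagine` write to the same store, later queries need not care which modality the knowledge came from.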

Robot, tell me what you know about...?: Expressing robot's knowledge through interaction

Explicitly showing the robot's knowledge about the states of the world and the agents' capabilities in such states is essential in human-robot interaction. This way, the human partner can better understand the robot's intentions and beliefs in order to provide missing information that may eventually improve the interaction. We present our current approach for modeling the robot's knowledge from a symbolic point of view based on an ontology. This knowledge is fed by two sources: direct interaction with the human, and geometric reasoning. We present an interactive task scenario where we exploit the robot's knowledge to interact with the human while showing its internal geometric reasoning when possible.

Coupling Robot Perception and Online Simulation for Grounding Conversational Semantics

How can we build robots that engage in fluid spoken conversations with people, moving beyond canned responses and towards actual understanding? Many difficult questions arise regarding the nature of word meanings, and how those meanings are grounded in the world of the robot. We introduce an architecture that provides the basis for grounding word meanings in terms of robot perception, action, and memory. The robot’s perceptual system drives an online simulator that maintains a virtual version of the physical environment in synchronization with the robot’s noisy and changing perceptual input. The simulator serves as a “mental model” that enables object permanence and virtual shifts of perspective. This architecture provides a rich set of data structures and procedures that serve as a basis set for grounding lexical semantics, a step towards situated, conversational robots.
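Two properties the abstract attributes to the simulator are object permanence and robustness to noisy perception. A minimal sketch under assumptions of mine (smoothing factor, 2D positions, the `MentalModel` name) shows both:

```python
# Sketch of a "mental model" kept in sync with noisy perception:
# detected positions are smoothed, and objects persist when detection drops out.
class MentalModel:
    def __init__(self, alpha=0.5):
        self.alpha = alpha      # smoothing factor for noisy detections
        self.objects = {}       # name -> (x, y) believed position

    def update(self, detections):
        """detections: {name: (x, y)} for this frame; unobserved objects persist."""
        for name, (x, y) in detections.items():
            if name in self.objects:
                ox, oy = self.objects[name]
                # Blend the new noisy reading into the current belief.
                self.objects[name] = (ox + self.alpha * (x - ox),
                                      oy + self.alpha * (y - oy))
            else:
                self.objects[name] = (x, y)

m = MentalModel()
m.update({"ball": (0.0, 0.0)})
m.update({"ball": (1.0, 0.0)})
m.update({})                    # ball occluded this frame...
print(m.objects["ball"])        # ...yet still represented: (0.5, 0.0)
```

The actual system maintains a full physical simulation rather than smoothed point estimates, but the contract is the same: downstream language grounding queries the model, never the raw percepts.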

Referential Grounding for Situated Human-Robot Communication

2014

We present a dialogue system and reference handling component for efficient and natural referential grounding dialogues over 2D images. Using a probabilistic representation of qualitative concepts, the system applies flexible concept assignment in reference handling to bridge conceptual gaps between the system and the user, and engages in clarification dialogues based on an evaluation of miscommunication risk.
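The clarification-on-risk behavior described here can be sketched as a scoring-and-threshold decision: ground the best-matching referent when it clearly wins, ask a clarification question when candidates are too close. The scores, threshold and function names below are illustrative, not the paper's model:

```python
# Illustrative reference resolution with a miscommunication-risk check:
# clarify when the top two candidates are too close to call.
def resolve(candidates, threshold=0.25):
    """candidates: {object_id: P(object matches the referring expression)}."""
    ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)
    best = ranked[0]
    second = ranked[1] if len(ranked) > 1 else (None, 0.0)
    if best[1] - second[1] < threshold:
        # High miscommunication risk: trigger a clarification dialogue.
        return ("clarify", [best[0], second[0]])
    return ("ground", best[0])

print(resolve({"mug_1": 0.8, "mug_2": 0.3}))   # clear winner -> ground
print(resolve({"mug_1": 0.55, "mug_2": 0.5}))  # ambiguous -> clarify
```

In the actual system the candidate probabilities come from graded qualitative concepts (color, spatial relations) rather than being given directly, but the decision layer follows this ground-or-clarify shape.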

Knowledge representation for robots through human-robot interaction

The representation of the knowledge needed by a robot to perform complex tasks is restricted by the limitations of perception. One possible way of overcoming this situation and designing "knowledgeable" robots is to rely on interaction with the user. We propose a multi-modal interaction framework that allows the robot to effectively acquire knowledge about the environment where it operates. In particular, in this paper we present a rich representation framework that can be automatically built from a metric map annotated with indications provided by the user. Such a representation then allows the robot to ground complex referential expressions for motion commands and to devise topological navigation plans to reach the target locations.
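Once user annotations have named regions of the metric map, topological planning reduces to shortest-path search over a graph of those regions. A sketch, with a made-up floor plan standing in for the annotated map:

```python
# Topological navigation over user-annotated regions: rooms become graph
# nodes, planning is breadth-first search for a shortest path.
from collections import deque

TOPOLOGY = {                     # invented annotated map: room -> neighbors
    "corridor": ["kitchen", "office"],
    "kitchen": ["corridor"],
    "office": ["corridor", "lab"],
    "lab": ["office"],
}

def plan(start, goal):
    """Shortest room-to-room path, or None if the goal is unreachable."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in TOPOLOGY[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# "Go to the lab" grounds to the node "lab"; the plan is the room sequence:
print(plan("kitchen", "lab"))   # ['kitchen', 'corridor', 'office', 'lab']
```

The grounding step, mapping "the lab" to the node `lab`, is exactly where the user-provided annotations pay off: without them the metric map has no symbols for language to attach to.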

Continual processing of situated dialogue in human-robot collaborative activities

2010

This paper presents an implemented approach to processing situated dialogue between a human and a robot. The focus is on task-oriented dialogue, set in the larger context of human-robot collaborative activity. The approach models understanding and production of dialogue to include intension (what is being talked about), intention (the goal of why something is being said), and attention (what is being focused on). These dimensions are directly construed in terms of assumptions and assertions on situated multi-agent belief models. The approach is continual in that it allows interpretations to be dynamically retracted, revised, or deferred. This makes it possible to deal with the inherent asymmetry in how robots and humans tend to understand dialogue, and the world in which it is set. The approach has been fully implemented and integrated into a cognitive robot. The paper discusses the implementation, and illustrates it in a collaborative learning setting.
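The assumption/assertion distinction on belief models, and the "continual" ability to retract, can be sketched as a small store where a proposition's status changes as the dialogue unfolds. Names and the scenario are invented for illustration:

```python
# Sketch of a continual belief store: interpretations enter as assumptions,
# are promoted to assertions when grounded in dialogue, and may be retracted.
class BeliefModel:
    def __init__(self):
        self.beliefs = {}  # proposition -> 'assumption' | 'assertion'

    def assume(self, prop):
        self.beliefs[prop] = "assumption"    # robot's tentative interpretation

    def assert_(self, prop):
        self.beliefs[prop] = "assertion"     # promoted once mutually grounded

    def retract(self, prop):
        self.beliefs.pop(prop, None)         # revised away by later evidence

b = BeliefModel()
b.assume("box_1 is red")    # robot's tentative reading of the utterance
b.assert_("box_1 is red")   # human confirms -> shared assertion
b.retract("box_1 is red")   # later perception contradicts it -> retracted
print(b.beliefs)            # {}
```

The real system reasons over multi-agent belief models with richer structure, but the lifecycle, tentative interpretation, grounding, possible retraction, is the mechanism that absorbs the human-robot understanding asymmetry the abstract mentions.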

Reasoning with Grounded Self-Symbols for Human-Robot Interaction

We discuss Perry's notion of the essential indexical and the requirement that robots interacting with humans (and other robots) be able to reason about themselves in a grounded way. We describe a general approach for achieving this, and progress we have made in doing so.