Mediating between Qualitative and Quantitative Representations for Task-Orientated Human-Robot Interaction

Mediating between qualitative and quantitative representations for task-orientated human-robot interaction

IJCAI, 2007

In human-robot interaction (HRI), it is essential that the robot interprets and reacts to a human’s utterances in a manner that reflects their intended meaning. In this paper we present a collection of novel techniques that allow a robot to interpret and execute spoken commands describing manipulation goals involving qualitative spatial constraints (e.g. “put the red ball near the blue cube”). The resulting implemented system integrates computer vision, potential field models of spatial relationships, and action planning to mediate between the continuous real world and the discrete, qualitative representations used for symbolic reasoning.
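As a rough illustration of the mediation this abstract describes, the sketch below scores candidate placements against a potential field model of “near”. The Gaussian field shape, the `sigma` scale, and the acceptance threshold are illustrative assumptions, not the model used in the paper.

```python
import numpy as np

def near_potential(point, landmark, sigma=0.3):
    # Illustrative potential field for the relation "near": peaks at the
    # landmark and decays with distance (Gaussian falloff). The shape and
    # the scale parameter sigma are assumptions, not the paper's model.
    d = np.linalg.norm(np.asarray(point, dtype=float) - np.asarray(landmark, dtype=float))
    return float(np.exp(-(d ** 2) / (2 * sigma ** 2)))

def best_placement(candidates, landmark, threshold=0.5):
    # Choose the candidate goal position that best satisfies "near landmark";
    # return None if nothing clears the (assumed) acceptance threshold, i.e.
    # the point where a planner would report the constraint as unsatisfiable.
    score, best = max((near_potential(c, landmark), c) for c in candidates)
    return best if score >= threshold else None

# "Put the red ball near the blue cube": score candidate ball positions
# against a field centred on the cube's perceived location.
blue_cube = (0.60, 0.20)
candidates = [(0.70, 0.25), (0.10, 0.90), (0.55, 0.40)]
print(best_placement(candidates, blue_cube))
```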

Spatial representation and reasoning for human-robot collaboration

Proceedings of the National Conference on Artificial Intelligence, 2007

How should a robot represent and reason about spatial information when it needs to collaborate effectively with a human? The form of spatial representation that is useful for robot navigation may not be useful in higher-level reasoning or working with humans as a team member. To explore this question, we have extended previous work on how children and robots learn to play hide and seek to a human-robot team covertly approaching a moving target. We used the cognitive modeling system, ACT-R, with an added spatial ...

Grounded situation models for robots: Bridging language, perception, and action

AAAI-05 Workshop on Modular Construction of …, 2005

"Our long-term objective is to develop robots that engage in natural language-mediated cooperative tasks with humans. To support this goal, we are developing an amodal representation called a grounded situation model (GSM), as well as a modular architecture in which the GSM resides in a centrally located module. We present an implemented system that allows of a range of conversational and assistive behavior by a manipulator robot. The robot updates beliefs about its physical environment and body, based on a mixture of linguistic, visual and proprioceptive evidence. It can answer basic questions about the present or past and also perform actions through verbal interaction. Most importantly, a novel contribution of our approach is the robot’s ability for seamless integration of both language and sensor-derived information about the situation: For example, the system can acquire parts of situations either by seeing them or by “imagining” them through descriptions given by the user: “There is a red ball at the left”. These situations can later be used to create mental imagery, thus enabling bidirectional translation between perception and language. This work constitutes a step towards robots that use situated natural language grounded in perception and action."

Spatial Understanding as a Common Basis for Human-Robot Collaboration

2017

We are developing a robotic cognitive architecture to be embedded in autonomous robots that can safely interact and collaborate with people on a wide range of physical tasks. Achieving true autonomy requires increasing the robot’s understanding of the dynamics of its world (physical understanding), and particularly the actions of people (cognitive understanding). Our system’s cognitive understanding arises from the Soar cognitive architecture, which constitutes the reasoning and planning component. The system’s physical understanding stems from its central representation, which is a 3D virtual world that the architecture synchronizes with the environment in real time. The virtual world provides a common representation between the robot and humans, thus improving trust between them and promoting effective collaboration.
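The core loop implied by this abstract is keeping an internal 3D model synchronized with perception and letting the reasoner plan against that shared model. The sketch below is a minimal illustration under assumed names: `VirtualWorld`, `perceive`, and `plan_and_act` are hypothetical stand-ins, and the Soar reasoning component is replaced by a print stub.

```python
import random
import time

class VirtualWorld:
    # Toy stand-in for the shared 3D virtual world: a map from object
    # names to poses, kept in step with the physical environment.
    def __init__(self):
        self.poses = {}

    def synchronize(self, observations):
        # Overwrite stale poses with the latest perceived ones.
        self.poses.update(observations)

    def snapshot(self):
        # Frozen copy handed to the symbolic reasoner (Soar in the paper;
        # here just any callable that consumes object poses).
        return dict(self.poses)

def perceive():
    # Hypothetical perception stub: in a real system this would come from
    # cameras or other sensors, not random jitter around a fixed point.
    return {"person": (1.0 + random.uniform(-0.05, 0.05), 2.0)}

def plan_and_act(world_state):
    # Hypothetical reasoning stub standing in for the planning component.
    print("planning against", world_state)

world = VirtualWorld()
for _ in range(3):                  # real-time loop, shortened for the sketch
    world.synchronize(perceive())
    plan_and_act(world.snapshot())
    time.sleep(0.1)
```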