Robots that change their world: Inferring Goals from Semantic Knowledge

Robot planning with a semantic map

2013 IEEE International Conference on Robotics and Automation

Context is an important factor for domestic service robots to consider when interpreting their environments to perform tasks. In people's homes, rooms are laid out in a specific arrangement to enable comfortable and efficient living; for example, the living room is central to the house, and the dining room is adjacent to the kitchen. The identity of the objects in a room is a strong cue for determining that room's purpose. This paper presents a planner for an autonomous mobile robot system which uses room connectivity topology and object understanding as context for an object search task in a domestic environment.
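The idea of using room connectivity and object context to guide an object search can be sketched as follows. This is a minimal illustration, not the paper's implementation; the room graph, the priors, and the greedy best-first strategy are all assumptions made for the example.

```python
# Illustrative room adjacency graph for a small home.
ADJACENT = {
    "living_room": ["kitchen", "dining_room"],
    "kitchen": ["living_room", "dining_room"],
    "dining_room": ["living_room", "kitchen"],
}
# Assumed prior likelihood that the target object (a mug, say) is in each room.
PRIOR = {"kitchen": 0.7, "dining_room": 0.2, "living_room": 0.1}

def search_order(start):
    """Visit reachable rooms, most likely first, while respecting
    connectivity (greedy best-first search over the room graph)."""
    order, frontier, seen = [], [start], {start}
    while frontier:
        room = max(frontier, key=PRIOR.get)  # most promising reachable room
        frontier.remove(room)
        order.append(room)
        for nxt in ADJACENT[room]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return order

assert search_order("living_room") == ["living_room", "kitchen", "dining_room"]
```

The search never teleports: a room only becomes a candidate once a connected room has been visited, which is where the topology acts as context.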

Robot task planning using semantic maps

Robotics and Autonomous Systems, 2008

Task planning for mobile robots usually relies solely on spatial information and on shallow domain knowledge, such as labels attached to objects and places. Although spatial information is necessary for performing basic robot operations (navigation and localization), the use of deeper domain knowledge is pivotal to endow a robot with higher degrees of autonomy and intelligence. In this paper, we focus on semantic knowledge, and show how this type of knowledge can be profitably used for robot task planning. We start by defining a specific type of semantic map, which integrates hierarchical spatial information and semantic knowledge. We then proceed to describe how these semantic maps can improve task planning in two ways: extending the capabilities of the planner by reasoning about semantic information, and improving the planning efficiency in large domains. We show several experiments that demonstrate the effectiveness of our solutions in a domain involving robot navigation in a domestic environment.
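The pairing of spatial structure with concept-level knowledge can be sketched with a toy semantic map. The class, the concept table, and the pruning query below are illustrative assumptions, not the paper's own data structures; the point is how semantics lets the planner discard rooms it never needs to consider.

```python
# Concept-level knowledge: which object types a room type is
# expected to contain. All names here are illustrative.
ROOM_CONCEPTS = {
    "kitchen": {"fridge", "oven", "sink"},
    "living_room": {"sofa", "tv"},
}

class SemanticMap:
    """A minimal semantic map: room instances typed by concept,
    plus the objects observed in each room."""
    def __init__(self):
        self.rooms = {}    # room name -> room type
        self.objects = {}  # room name -> set of observed objects

    def add_room(self, name, room_type):
        self.rooms[name] = room_type
        self.objects.setdefault(name, set())

    def observe(self, room, obj):
        self.objects[room].add(obj)

    def candidate_rooms(self, obj_type):
        """Semantic pruning: only rooms whose type is expected to
        contain obj_type need to be searched by the planner."""
        return [r for r, t in self.rooms.items()
                if obj_type in ROOM_CONCEPTS.get(t, set())]

smap = SemanticMap()
smap.add_room("room1", "kitchen")
smap.add_room("room2", "living_room")
assert smap.candidate_rooms("fridge") == ["room1"]
```

In a large domain this pruning is where the efficiency gain comes from: the planner's search space shrinks from all rooms to only the semantically plausible ones.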

Using semantic information for improving efficiency of robot task planning

… Semantic Information in …, 2007

The use of semantic information in robotics is an emergent field of research. As a supplement to other types of information, such as geometric or topological information, semantics can improve mobile robot reasoning or knowledge inference, and can also facilitate human-robot ...

Towards Semantically Intelligent Robots

Approaches are needed for providing advanced autonomous wheeled robots with a sense of self, immediate ambience, and mission. The following abilities would form the desired feature set of such approaches: self-localization, detection and correction of course deviation errors, faster and more reliable identification of friend or foe, simultaneous localization and mapping in uncharted environments without necessarily depending on external assistance, and the ability to serve as web services.

Agent in a Box: A Framework for Autonomous Mobile Robots with Beliefs, Desires, and Intentions

Electronics, 2021

This paper provides the Agent in a Box for developing autonomous mobile robots using Belief-Desire-Intention (BDI) agents. This framework provides the means of connecting the agent reasoning system to the environment, using the Robot Operating System (ROS), in a way that is flexible to a variety of application domains which use different sensors and actuators. It also provides the customisation needed for the agent’s reasoner to ensure that the agent’s behaviours are properly prioritised. Behaviours which are common to all mobile robots, such as navigation and resource management, are provided. This allows developers in specific application domains to focus on domain-specific code. Agents implemented using this approach are rational, mission capable, safety conscious, fuel autonomous, and understandable. This method was used for demonstrating the capability of BDI agents to control robots for a variety of application domains. These included simple grid environments, a simulat...
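The behaviour prioritisation described above can be sketched as a BDI-style deliberation cycle in which safety and resource behaviours outrank the mission behaviour. The beliefs, behaviours, and priority ordering below are illustrative assumptions, not the framework's actual API.

```python
class BDIAgent:
    """Toy BDI deliberation: pick the highest-priority behaviour
    whose applicability condition holds over the current beliefs."""
    def __init__(self):
        self.beliefs = {"fuel": 100, "at_goal": False}
        # Lower number = higher priority. Resource management
        # (refuelling) outranks the mission behaviour (navigate).
        self.behaviours = [
            (0, "refuel", lambda b: b["fuel"] < 20),
            (1, "navigate", lambda b: not b["at_goal"]),
            (2, "idle", lambda b: True),  # fallback, always applicable
        ]

    def deliberate(self):
        for _, name, applicable in sorted(self.behaviours):
            if applicable(self.beliefs):
                return name

agent = BDIAgent()
assert agent.deliberate() == "navigate"   # plenty of fuel: pursue mission
agent.beliefs["fuel"] = 10
assert agent.deliberate() == "refuel"     # low fuel pre-empts the mission
```

This is what "fuel autonomous" amounts to in miniature: the mission intention is suspended, not dropped, while the higher-priority behaviour runs.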

Learning to understand tasks for mobile robots

2004 IEEE International Conference on Systems, Man and Cybernetics (IEEE Cat. No.04CH37583)

We propose a way to represent the environment by storing observations taken in that environment, together with their task-related 'values'. This representation allows robots to be taught by human instructors based on rewards and punishments. We show that the robot is able to learn to execute different tasks. The results of training can be interpreted to gain understanding about the environment in which the robot has to operate. So instead of first modeling the environment and using this to execute the task, we first start by learning to execute the task and use this to obtain knowledge about the environment.
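The core representation, observations stored together with task-related values learned from rewards and punishments, can be sketched as a nearest-neighbour memory. The vector encoding and the 1-NN recall rule are assumptions made for this illustration, not the paper's method.

```python
class ObservationMemory:
    """Store (observation, value) pairs taught via reward/punishment;
    recall the value of the most similar stored observation."""
    def __init__(self):
        self.memory = []  # list of (observation_vector, value)

    def teach(self, obs, reward):
        self.memory.append((obs, reward))

    def value(self, obs):
        """Value of the nearest stored observation (1-NN recall)."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        _, v = min(self.memory, key=lambda m: dist(m[0], obs))
        return v

mem = ObservationMemory()
mem.teach((0.0, 0.0), -1.0)   # punished: instructor disapproved here
mem.teach((1.0, 1.0), +1.0)   # rewarded: instructor approved here
assert mem.value((0.9, 1.1)) == 1.0
```

Inspecting which regions of observation space carry high or low values is what the abstract means by interpreting training results to gain understanding of the environment.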

Monitoring the execution of robot plans using semantic knowledge

Robotics and Autonomous Systems, 2008

Even the best-laid plans can fail, and robot plans executed in real world domains tend to do so often. The ability of a robot to reliably monitor the execution of plans and detect failures is essential to its performance and its autonomy. In this paper, we propose a technique to increase the reliability of monitoring symbolic robot plans. We use semantic domain knowledge to derive implicit expectations of the execution of actions in the plan, and then match these expectations against observations. We present two realizations of this approach: a crisp one, which assumes deterministic actions and reliable sensing, and uses a standard knowledge representation system (LOOM); and a probabilistic one, which takes into account uncertainty in action effects, in sensing, and in world states. We perform an extensive validation of these realizations through experiments performed both in simulation and on real robots.
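The crisp realization, derive implicit expectations from semantic knowledge, then match them against observations, can be sketched in a few lines. The knowledge base entries, fact names, and matching rule below are illustrative assumptions, not the paper's LOOM-based implementation.

```python
# Toy semantic KB: facts implied by other facts.
SEMANTIC_KB = {
    "door_open": {"passage_clear": True},
}

def expectations(action_effects):
    """Expand an action's declared effects with semantically
    implied ones (the 'implicit expectations')."""
    implied = dict(action_effects)
    for fact, value in action_effects.items():
        if value and fact in SEMANTIC_KB:
            implied.update(SEMANTIC_KB[fact])
    return implied

def monitor(action_effects, observations):
    """Return the expectations violated by the observations."""
    exp = expectations(action_effects)
    return {f: v for f, v in exp.items()
            if f in observations and observations[f] != v}

# Opening a door implicitly expects the passage behind it to be clear;
# observing a blocked passage flags a failure even though the declared
# effect (door_open) did hold.
violations = monitor({"door_open": True},
                     {"door_open": True, "passage_clear": False})
assert violations == {"passage_clear": True}
```

The probabilistic realization in the paper replaces this exact-match step with reasoning over uncertain effects and noisy sensing; that is not captured by this sketch.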

Robot task planning and explanation in open and uncertain worlds

A long-standing goal of AI is to enable robots to plan in the face of uncertain and incomplete information, and to handle task failure intelligently. This paper shows how to achieve this. There are two central ideas. The first idea is to organize the robot's knowledge into three layers: instance knowledge at the bottom, commonsense knowledge above that, and diagnostic knowledge on top. Knowledge in a layer above can be used to modify knowledge in the layer(s) below. The second idea is that the robot should represent not just how its actions change the world, but also what it knows or believes. There are two types of knowledge effects the robot's actions can have: epistemic effects (I believe X because I saw it) and assumptions (I'll assume X to be true). By combining the knowledge layers with the models of knowledge effects, we can simultaneously solve several problems in robotics: (i) task planning and execution under uncertainty; (ii) task planning and execution in open worlds; (iii) explaining task failure; (iv) verifying those explanations. The paper describes how the ideas are implemented in a three-layer architecture on a mobile robot platform. The robot implementation was evaluated in five different experiments on object search, mapping, and room categorization.
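The second idea above, tracking what the robot knows versus what it merely assumes, can be sketched as a belief store that tags each fact with its provenance. The class and fact names are illustrative assumptions, not the paper's architecture; the point is that assumed facts become the first candidates when explaining a task failure.

```python
OBSERVED, ASSUMED = "observed", "assumed"

class BeliefStore:
    """Beliefs tagged with provenance: epistemic effects
    ('I believe X because I saw it') vs assumptions
    ('I'll assume X to be true')."""
    def __init__(self):
        self.beliefs = {}  # fact -> (value, provenance)

    def observe(self, fact, value):
        self.beliefs[fact] = (value, OBSERVED)

    def assume(self, fact, value):
        # Never let an assumption overwrite an observation.
        if self.beliefs.get(fact, (None, None))[1] != OBSERVED:
            self.beliefs[fact] = (value, ASSUMED)

    def suspects(self):
        """On task failure, the assumed facts are the candidate
        explanations to check and then verify."""
        return [f for f, (_, p) in self.beliefs.items() if p == ASSUMED]

b = BeliefStore()
b.observe("cup_in_kitchen", True)
b.assume("door_open", True)
assert b.suspects() == ["door_open"]
```

This separation is what makes points (iii) and (iv) tractable: an explanation for failure is a suspect assumption, and verifying the explanation means going and observing that fact.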

Integrating Robot Task Planner with Common-sense Knowledge Base to Improve the Efficiency of Planning

Procedia Computer Science, 2013

This paper presents an approach for intelligently generating symbolic plans for mobile robots acting in domestic environments, such as offices and houses. The significance of the approach lies in a new framework that combines a new model of high-level robot actions with common-sense knowledge in order to support a robotic task planner, enabling the task planner to interact with the semantic knowledge base directly. By using common-sense domain knowledge, the task planner takes into consideration the properties and relations of objects and places in its environment before creating the semantically related actions that make up a plan; this plan accomplishes the user's order. The task planner uses the available domain knowledge to check which actions can follow the current one, and chooses an action whose conditions are met. The robot then uses the immediately available knowledge to check whether the plan's outcomes are met or violated.
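The select-the-next-applicable-action loop described above can be sketched as a greedy forward search over high-level actions whose preconditions encode knowledge about objects and places. The action set and fact names are illustrative assumptions; here the knowledge base is folded directly into the precondition sets rather than queried separately.

```python
# Illustrative high-level actions: preconditions and effects as fact sets.
# The precondition of grasp_cup encodes the common-sense fact that
# cups are found in the kitchen.
ACTIONS = {
    "goto_kitchen": {"pre": set(), "post": {"at_kitchen"}},
    "grasp_cup": {"pre": {"at_kitchen"}, "post": {"holding_cup"}},
}

def plan(state, goal, actions, max_depth=10):
    """Greedy forward search: repeatedly pick an action whose
    preconditions are met in the current state (and whose effects
    are not already achieved) until the goal holds."""
    plan_steps = []
    state = set(state)
    for _ in range(max_depth):
        if goal <= state:
            return plan_steps
        for name, a in actions.items():
            if a["pre"] <= state and not a["post"] <= state:
                state |= a["post"]
                plan_steps.append(name)
                break
        else:
            return None  # no applicable action: planning failed
    return plan_steps if goal <= state else None

assert plan(set(), {"holding_cup"}, ACTIONS) == ["goto_kitchen", "grasp_cup"]
```

Checking whether the plan's outcomes are met or violated then amounts to testing `goal <= state` against the facts the robot actually observes after execution.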

Goal-Driven Autonomy with Semantically-Annotated Hierarchical Cases

Lecture Notes in Computer Science, 2015

We present LUiGi-H, a goal-driven autonomy (GDA) agent. Like other GDA agents it introspectively reasons about its own expectations to formulate new goals. Unlike other GDA agents, LUiGi-H uses cases consisting of hierarchical plans and semantic annotations of the expectations of those plans. Expectations indicate conditions that must be true when parts of the plan are executed. Using an ontology, semantic annotations are defined via inferred facts, enabling LUiGi-H to reason with GDA elements at different levels of abstraction. We compared LUiGi-H against an ablated version, LUiGi, that uses non-hierarchical cases. Both agents have access to the same base-level (i.e., non-hierarchical) plans, while only LUiGi-H makes use of hierarchical plans. In our experiments, LUiGi-H outperforms LUiGi.
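The GDA cycle underlying agents like this, detect a discrepancy between a plan step's expectations and the observed state, then formulate a new goal, can be sketched in a few lines. The expectation format and the discrepancy-to-goal rules below are illustrative assumptions, not LUiGi-H's case representation.

```python
def gda_step(expectation, observed, goal_rules):
    """One GDA cycle step. Returns (discrepancy, new_goal);
    new_goal is None if the expectations hold and the current
    plan can simply continue."""
    discrepancy = {f for f, v in expectation.items()
                   if observed.get(f) != v}
    for fact in discrepancy:
        if fact in goal_rules:          # explain + formulate goal
            return discrepancy, goal_rules[fact]
    return discrepancy, None

# If we expected the corridor to be clear but observe it blocked,
# formulate a goal to clear it instead of blindly continuing the plan.
rules = {"corridor_clear": "clear_corridor"}
disc, goal = gda_step({"corridor_clear": True},
                      {"corridor_clear": False}, rules)
assert goal == "clear_corridor"
```

What the hierarchical, semantically annotated cases add on top of this loop is that expectations can be checked (and goals formulated) at several levels of plan abstraction rather than only at the primitive-action level.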