Determining Robot Actions For Tasks Requiring Sensor Interaction

The performance of non-trivial tasks by a mobile robot has been a long term objective of robotic research. One of the major stumbling blocks to this goal is the conversion of the high-level planning goals and commands into the actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. Our contention is that the transformation from the high level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. We propose to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain specific knowledge and knowledge about...

Execution, Knowledge and the Selective Use of Sensors and Actuators

In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Our contention is that the transformation from the high level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. Our theory is based on the premise that proper application of knowledge increases the robustness of plan execution. We propose to produce the detailed plan of primitive actions and execute it by using primitive components that contain domain specific knowledge and knowledge about the available sensors and actuators. These primitives perform signal and control processing as well as serve as an interface to high-level planning processes. In this work, importance is placed on determining what information is relevant to achieve the goal as well as determining the details necessary to utilize the sensors and actuators.

Achieving Goals Through Interaction With Sensors And Actuators

In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Our contention is that the transformation from the high level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. Our theory is based on the premise that proper application of knowledge increases the robustness of plan execution. We propose to produce the detailed plan of primitive actions and execute it by using primitive components that contain domain specific knowledge and knowledge about the available sensors and actuators. These primitives perform signal and control processing as well as serve as an interface to high-level planning processes. In this work, importance is placed on determining what information is relevant to achieve the goal as well as determining the details necessary to utilize the sensors ...

Planning with sensing for a mobile robot

1997

We present an attempt to reconcile the theoretical work on reasoning about action with the realization of agents, in particular mobile robots. Specifically, we present a logical framework for representing dynamic systems based on description logics, which allows for the formalization of sensing actions. We address the generation of conditional plans by defining a suitable reasoning method in which a plan is extracted from a constructive proof of a query expressing a given goal. We also present an implementation of such a logical framework, which has been tested on the mobile robot "Tino".

Planning Information Processing and Sensing Actions

The goal of the CoSy project is to create cognitive robots to serve as a testbed of theories on how humans work, and to identify problems and techniques relevant to producing general-purpose humanlike domestic robots. Given the constraints on the resources at the robot's disposal and the complexity of the tasks that the robot has to execute during cognitive interactions with other agents or humans, it is essential that the robot perform just those tasks that are necessary for it to achieve its goal. In this paper we describe our attempts at creating such a system, which enables a mobile robot to plan its information processing and sensing actions. We build on an existing planning framework, which is based on Continual Planning [3]. Continual planning combines planning, plan execution and plan monitoring. Unlike classical planning approaches, here it is not necessary to model all contingencies in advance: the agent acts as soon as it has a feasible plan, in an attempt to gather more information that would help resolve the uncertainty in the rest of the plan. We describe how the system addresses challenges such as state representation, conflict resolution and uncertainty. A few experimental results are provided to highlight both the advantages and disadvantages of the current approach, and to motivate directions of further research. All algorithms are implemented and tested in the playmate scenario.
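The continual-planning loop the abstract describes, which interleaves planning, execution and monitoring and replans when observations invalidate the current plan, can be sketched roughly as follows. The toy goal-regression planner, the action encoding, and the `observe` callback are illustrative assumptions, not the CoSy implementation:

```python
def continual_plan(state, goal, actions, observe):
    """Continual-planning sketch: act as soon as a feasible plan exists,
    observe after each action, and replan when monitoring detects that
    the remaining plan is no longer applicable.

    state   -- dict mapping facts to truth values
    goal    -- ordered list of facts to achieve
    actions -- (name, preconditions, effects) triples
    observe -- callback returning fact updates gathered by sensing
    """
    def plan(s):
        # Toy planner: for each unmet goal fact, pick an applicable
        # action that adds it, simulating effects along the way.
        steps, s = [], dict(s)
        for fact in goal:
            if s.get(fact):
                continue
            for name, pre, eff in actions:
                if all(s.get(p) for p in pre) and fact in eff:
                    steps.append((name, pre, eff))
                    s.update({f: True for f in eff})
                    break
            else:
                return None  # no feasible plan under current knowledge
        return steps

    steps = plan(state)
    while steps is not None and not all(state.get(f) for f in goal):
        name, pre, eff = steps.pop(0)
        if not all(state.get(p) for p in pre):   # monitoring: step invalid
            steps = plan(state)                  # replan from current state
            continue
        state.update({f: True for f in eff})     # execute the action
        state.update(observe(state))             # sensing updates knowledge
        if steps and not all(state.get(p) for p in steps[0][1]):
            steps = plan(state)                  # next step broken: replan
    return all(state.get(f) for f in goal)
```

The key contrast with classical planning is visible in the loop: contingencies (here, the `observe` callback undoing a fact) are not modeled up front but handled by replanning when they occur.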

A unified framework for planning and execution-monitoring of mobile robots

2011

We present an original integration of high-level planning and execution with incoming perceptual information from vision, SLAM, topological map segmentation and dialogue. The task of the robot system implementing the integrated model is to explore unknown areas and report detected objects, such as cars or containers, to an operator by speaking aloud. The knowledge base of the planner maintains a graph-based representation of the metric map that is dynamically constructed via an unsupervised topological segmentation method and augmented with information about the type and position of the objects detected within the map. Using this knowledge, the robot infers strategies and generates parametric plans whose parameters are instantiated from the perceptual processes. Finally, a model-based approach to the execution and control of the robot system is proposed that concurrently monitors the low-level status of the system and the execution of the activities, in order to achieve the goal instructed by the operator.

An Analysis of Sensor-Based Task Planning

1995

We present a planner which can plan to perform sensor operations to allow an agent to gather the information necessary to complete planning and achieve its goals in the face of missing or uncertain environmental information.

Logical sensor/actuator : knowledge-based robotic plan execution

Journal of Experimental & Theoretical Artificial Intelligence, 1997

Complex tasks are usually described as high-level goals, leaving out the details on how to achieve them. However, to control a robot, the task must be described in terms of primitive commands for the robot. Having the robot move itself to and through an unknown, and possibly narrow, doorway is an example of such a task. We show how the transformation from high-level goals to primitive commands can be performed at execution time and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. We illustrate our approach using actual data from a real robot.
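The reconfigurable-object idea behind logical sensors/actuators might be sketched as follows; the class names and the doorway-width composition are hypothetical illustrations, not the paper's actual architecture. The point is that high-level planning talks to processed, domain-level quantities behind a uniform interface, and the wiring can change at execution time:

```python
class LogicalModule:
    """Sketch of a 'logical sensor' object: wraps input modules behind a
    uniform read() interface and applies a domain-specific transform, so
    high-level processes see processed information, not raw signals."""
    def __init__(self, name, inputs=(), transform=lambda *xs: xs):
        self.name = name
        self.inputs = list(inputs)      # upstream logical or raw modules
        self.transform = transform      # domain-specific processing step

    def read(self):
        # Pull from upstream modules, then apply this module's processing.
        return self.transform(*(m.read() for m in self.inputs))


class RawSensor(LogicalModule):
    """Leaf module wrapping a physical device's raw reading function."""
    def __init__(self, name, source):
        super().__init__(name)
        self.source = source

    def read(self):
        return self.source()


# Hypothetical doorway example: two range readings combined into one
# logical "doorway width" quantity that a planner can reason about.
left = RawSensor("left_range", lambda: 0.4)
right = RawSensor("right_range", lambda: 0.5)
width = LogicalModule("doorway_width", [left, right], lambda l, r: l + r)
```

Reconfiguration at execution time amounts to swapping a module's `transform` or `inputs`, e.g. replacing the sum with `min(l, r)` to report the tighter clearance instead.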

Improving Efficiency in Mobile Robot Task Planning Through World Abstraction

IEEE Transactions on Robotics, 2004

Task planning in mobile robotics should be performed efficiently due to the real-time requirements of robot-environment interaction. Its computational efficiency depends both on the number of operators (actions the robot can perform without planning) and on the size of the world states (descriptions of the world before and after the application of operators). Thus, in real robotic applications where both components can be large, planning may become inefficient or even infeasible. In the AI literature on planning, little attention has been paid to the efficient management of large-scale world descriptions. In real large-scale situations, conventional AI planners (in spite of the most modern improvements) may consume intractable amounts of storage and computing time due to the huge amount of information. This paper proposes a new approach to task planning called Hierarchical Task Planning through World Abstraction (HPWA) that, by arranging the world representation hierarchically, becomes a good complement to STRIPS-style planners, significantly improving their computational efficiency. Broadly speaking, our approach works by first solving the task planning problem in a highly abstracted model of the robot's environment, and then refining the solution under more detailed models in which irrelevant world elements can be ignored thanks to the results previously obtained at the abstracted levels.
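The abstract-then-refine idea can be illustrated with a minimal sketch, assuming a toy two-level world model: rooms as the abstract level and grid cells as the detailed level. The graph encoding and function names are illustrative assumptions, not HPWA itself; the point is that the abstract solution prunes irrelevant detailed-level elements before the expensive search runs:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Plain breadth-first search returning a path (list of nodes) or None."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for n in neighbors(path[-1]):
            if n not in seen:
                seen.add(n)
                frontier.append(path + [n])
    return None

def hierarchical_plan(cell_graph, room_of, room_graph, start, goal):
    """Abstract-then-refine sketch: plan first over rooms, then refine
    over cells, ignoring cells outside the rooms of the abstract plan."""
    rooms = bfs(room_of[start], room_of[goal], lambda r: room_graph[r])
    if rooms is None:
        return None
    allowed = set(rooms)  # world elements outside these rooms are irrelevant
    return bfs(start, goal,
               lambda c: [n for n in cell_graph[c] if room_of[n] in allowed])
```

The refinement search never visits cells in rooms the abstract plan excluded, which is where the efficiency gain over flat planning on the full world description comes from.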