iTASC: a tool for multi-sensor integration in robot manipulation

Constraint-based Task Specification and Estimation for Sensor-Based Robot Systems in the Presence of Geometric Uncertainty

The International Journal of Robotics Research, 2007

This paper introduces a systematic constraint-based approach to specify complex tasks of general sensor-based robot systems consisting of rigid links and joints. The approach integrates both instantaneous task specification and estimation of geometric uncertainty in a unified framework. Major components are the use of feature coordinates, defined with respect to object and feature frames, which facilitate the task specification, and the introduction of uncertainty coordinates to model geometric uncertainty. While the focus of the paper is on task specification, an existing velocity-based control scheme is reformulated in terms of these feature and uncertainty coordinates. This control scheme compensates for the effect of time-varying uncertainty coordinates. Constraint weighting results in an invariant robot behavior in case of conflicting constraints with heterogeneous units.
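The constraint-weighting idea in this abstract can be illustrated with a minimal sketch (not the iTASC implementation): conflicting constraints with heterogeneous units are reconciled in a weighted least-squares sense, where the diagonal weights make the compromise independent of the units chosen.

```python
import numpy as np

def solve_weighted(J, yd, w):
    """Least-squares joint velocities for J @ qdot ~= yd under weights w.

    w is a per-constraint weight vector; heavier weight means the
    corresponding constraint dominates when constraints conflict.
    """
    W = np.diag(np.sqrt(w))  # square-root weighting for the 2-norm
    qdot, *_ = np.linalg.lstsq(W @ J, W @ yd, rcond=None)
    return qdot

# Two conflicting 1-D constraints on a toy 1-DOF system:
J = np.array([[1.0], [1.0]])
yd = np.array([0.1, 0.3])  # the two constraints disagree
qdot = solve_weighted(J, yd, w=np.array([1.0, 3.0]))
# Minimizes (q - 0.1)^2 + 3*(q - 0.3)^2, giving q = 0.25:
# the heavier-weighted constraint pulls the compromise toward itself.
```

The weights here are illustrative; in the paper's setting they would be derived from the constraints' units and tolerances.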

Unified Constraint-Based Task Specification for Complex Sensor-Based Robot Systems

Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005

This paper presents a unified task specification formalism and a unified control scheme for the lowest control level of sensor-based robot tasks. The formalism is based on: (i) the integration of any sensor that provides (direct or indirect) distance (and time derivatives) and force information; (ii) the possibility to use multiple "Tool Centre Points", e.g. defined relative to the robot end effector, other links or the environment; (iii) the integration of optimization functions for underconstrained as well as overconstrained specifications with linear constraints; (iv) the integration of on-line estimators; and (v) compatibility with all major low-level control approaches.

Demonstration of multi-sensor integration in industrial manipulation (poster)

2006

This poster accompanies a video contribution to the IEEE International Conference on Robotics and Automation 2006. Multi-sensor integration is still a matter of research in many areas. In previous work, the authors described scalable and modular hardware and software architectures for manipulation control in the open literature. A major aim was to create a system that is open to any kind and any number of sensors while providing a unified programming interface. Manipulation primitives constitute such an interface, enabling programmers to specify sensor-guided and sensor-guarded motion commands in an intuitive way. The hardware, the software, and this programming interface are briefly introduced. The complete system has been implemented as a prototype for industrial use, i.e., all results are practical rather than simulation-only. To highlight the potential for practical use, this poster depicts a highly sophisticated but concrete application: playing Jenga. A manipulator was equipped with two CCD cameras, a distance sensor, a 6D force/torque sensor, and a 6D acceleration sensor to explore a new field of manipulation.
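To make the "manipulation primitive" idea concrete, here is a hypothetical sketch of what such an interface might look like: a motion command paired with the sensors that guide it and a stop condition that guards it. All names and fields are illustrative assumptions, not taken from the authors' system.

```python
from dataclasses import dataclass, field

@dataclass
class ManipulationPrimitive:
    """One sensor-guided, sensor-guarded motion command (illustrative)."""
    motion: dict                                 # e.g. a twist in some frame
    guiding_sensors: list = field(default_factory=list)
    stop_condition: str = "force_z > 5.0"        # guard: terminate on contact

# A guarded downward push until a contact force is sensed:
push = ManipulationPrimitive(
    motion={"frame": "tcp", "twist": [0, 0, -0.01, 0, 0, 0]},
    guiding_sensors=["force_torque"],
)
```

The appeal of such an interface is that adding a new sensor only extends the `guiding_sensors` and guard vocabulary, leaving the programming model unchanged.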

Task planning and action coordination in integrated sensor-based robots

IEEE Transactions on Systems, Man, and Cybernetics

Intelligent robots interact with the real world by employing their advanced sensory mechanisms to perceive their environment, and using their effectors and tools to change the state of their environment. Some of the important capabilities which endow "intelligence" to those robots include: (1) planning, i.e., given a goal, the ability to generate a set of task plans which will lead to achieving the goal; (2) coordination and execution of the perceptual actions, i.e., the abilities to coordinate sensors, acquire sensory data, and process, interpret and transform the sensory information; and (3) coordination and execution of the motor actions, i.e., the ability to navigate in their environments and the ability to coordinate effectors and tools and manipulate objects to accomplish assigned tasks. In this paper we introduce a System Architecture for Sensor-based Intelligent Robots (SASIR). The system architecture consists of Perception, Motor, Task Planner, Knowledge-Base, User Interface and Supervisor modules. SASIR is constructed using a frame data structure, which provides a suitable and flexible scheme for representation and manipulation of the world model, the sensor-derived information, as well as for describing the actions required for the execution of a specific task. The experimental results show the basic validity of the general architecture as well as the robust and successful performance of two working systems: (1) the Autonomous Spill Cleaning (ASC) Robotic System, and (2) ROBOSIGHT, which is capable of a range of autonomous inspection and manipulation tasks. Simulation and animation techniques were employed, in addition to the real-world testing, during the system development. The system components were successfully transported to another research laboratory involving a different type of robot, different sensors, and a different physical environment.

Developing robotic systems with multiple sensors

IEEE Transactions on Systems, Man, and Cybernetics, 1990

Intelligent robots should be able to acquire, interpret, and integrate information from a variety of sensor modalities. Research presented in the paper deals with the development of robotic systems that utilize multisensory information to perform a variety of inspection and manipulation tasks. Design of multisensor systems is a complex and difficult task requiring resolution of several subtasks, including sensory modality selection, processing and analysis methods for information acquired by individual sensors, and integration of independently derived information in an accurate and efficient manner. A general approach for the integration of vision, range, proximity, and touch sensory data to derive a better estimate of the position and orientation (pose) of an object appearing in the work space is presented. Efficient and robust methods for analyzing vision and range data to derive an interpretation of input images are discussed. Vision information analysis includes a model-based object recognition module and an image-to-world coordinate transformation module to identify the three-dimensional (3-D) coordinates of the recognized objects. The range information processing includes modules for preprocessing, segmentation, and 3-D primitive extraction. The multisensory information integration approach represents sensory information in a sensor-independent form and formulates an optimization problem to find a minimum error solution to the problem. The capabilities of the multisensor robotic system are demonstrated by performing a number of experiments using an industrial robot equipped with several different sensor types.
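The "minimum error" fusion of independently derived estimates that the abstract describes can be sketched, under simplifying assumptions, as inverse-variance weighted least squares over per-sensor pose estimates. This is a generic illustration of the idea, not the authors' formulation, and the sensor variances below are invented.

```python
import numpy as np

def fuse(estimates, variances):
    """Fuse per-sensor (x, y, z) position estimates.

    Each sensor's estimate is weighted by 1/variance, so more
    precise sensors pull the fused estimate toward themselves.
    """
    estimates = np.asarray(estimates, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    return (w[:, None] * estimates).sum(axis=0) / w.sum()

# Vision, range, and touch each report an object position (metres),
# with touch assumed far more precise than vision:
vision = [0.50, 0.20, 0.10]
ranged = [0.52, 0.18, 0.10]
touch  = [0.51, 0.20, 0.11]
pose = fuse([vision, ranged, touch], variances=[0.04, 0.01, 0.0025])
```

This scalar-per-axis weighting is the simplest case; a fuller treatment would use full covariance matrices and include orientation.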

Design of Task-Level Robot Control Systems

1992

Task-level control problems are ones that involve significant coordination of planning, perception and action. Robot systems capable of performing complex tasks are typically composed of many concurrent components. The design problem is to develop a control system that safely and reliably achieves its given tasks. We focus on two aspects of this problem: eliminating unwanted interactions among behaviors and dealing with uncertainty arising from incomplete models of the effects of actuators and sensors. Our basic approach to both problems involves combining causal (symbolic) and decision-theoretic reasoning techniques: the causal reasoning helps frame the problem, focusing on relevant features, and the decision-theoretic techniques help in making optimal choices.

Addressing pose uncertainty in manipulation planning using Task Space Regions

2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2009

We present an efficient approach to generating paths for a robotic manipulator that are collision-free and guaranteed to meet task specifications despite pose uncertainty. We first describe how to use Task Space Regions (TSRs) to specify grasping and object placement tasks for a manipulator. We then show how to modify a set of TSRs for a certain task to take into account pose uncertainty. A key advantage of this approach is that if the pose uncertainty is too great to accomplish a certain task, we can quickly reject that task without invoking a planner. If the task is not rejected, we run the IKBiRRT planner, which trades off exploring the robot's C-space with sampling from TSRs to compute a path. Finally, we show several examples of a 7-DOF WAM arm planning paths in a cluttered kitchen environment where the poses of all objects are uncertain.
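The quick-rejection idea can be sketched in a toy form: treat a TSR as an axis-aligned interval of allowed task-frame displacements, shrink it by the pose uncertainty, and reject the task if the shrunken region is empty. This is a hypothetical simplification (real TSRs bound 6-D pose displacements and uncertainty need not be a single scalar).

```python
import numpy as np

def shrink_tsr(lo, hi, eps):
    """Shrink interval bounds [lo, hi] by uncertainty eps per axis.

    Returns (feasible, lo2, hi2): feasible is False when any axis
    interval becomes empty, letting us reject the task cheaply
    before ever invoking a motion planner.
    """
    lo2 = np.asarray(lo, dtype=float) + eps
    hi2 = np.asarray(hi, dtype=float) - eps
    feasible = bool(np.all(lo2 <= hi2))
    return feasible, lo2, hi2

# A 2-axis region that is 0.10 m wide on one axis but only 0.04 m
# on the other; 0.03 m of pose uncertainty empties the narrow axis:
ok, lo2, hi2 = shrink_tsr([0.0, 0.0], [0.10, 0.04], eps=0.03)
# ok is False: 0.04 - 2 * 0.03 < 0, so this task is rejected early.
```

Only tasks that survive this cheap test would be handed to the sampling-based planner.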

Integration of task and motion planning for robotics

We describe a research project in which we explore the effectiveness of an approach for integrated symbolic and geometric planning in robotics. We aim to solve an assembly-like problem with two robot arms. The scenario we propose involves two Barrett Technology WAM robots that work cooperatively to solve a children's game. This experiment has a double purpose: setting out a practical challenge that guides our work, and acting as a means to visually validate and show the obtained results. We also cover project management aspects such as temporal planning and the economic, social and environmental analysis.

Operational Space Control: A Theoretical and Empirical Comparison

The International Journal of Robotics Research, 2008

Dexterous manipulation with a highly redundant movement system is one of the hallmarks of human motor skills. From numerous behavioral studies, there is strong evidence that humans employ compliant task space control, i.e., they focus control only on task variables. […] We address formulations of operational space controllers at the velocity, acceleration, and force levels. First, we formulate all controllers in a common notational framework, including quaternion-based orientation control, and discuss some of their theoretical properties. Second, we present experimental comparisons of these approaches on a seven-degree-of-freedom anthropomorphic robot arm with several benchmark tasks. As an aside, we also introduce a novel parameter estimation algorithm for rigid body dynamics, which ensures physical consistency, as this issue was crucial for our successful robot implementations. Our extensive empirical results demonstrate that one of the simplified acceleration-based approaches can be advantageous in terms of task performance, ease of parameter tuning, and general robustness and compliance in the face of inevitable modeling errors.
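Of the formulations compared, the velocity-level one is the simplest to sketch: a resolved-rate controller that maps task-space error through the Jacobian pseudoinverse to joint velocities. The Jacobian and gain below are toy assumptions for illustration, not values from the paper.

```python
import numpy as np

def resolved_rate_step(J, x, x_des, kp=1.0):
    """One velocity-level task space control step.

    Commands qdot = pinv(J) @ kp * (x_des - x), driving the
    task-space error toward zero; redundancy (when J is wide)
    is resolved by the minimum-norm property of the pseudoinverse.
    """
    return np.linalg.pinv(J) @ (kp * (x_des - x))

# Toy 2-DOF Jacobian and a small task-space error:
J = np.array([[1.0, 0.5],
              [0.0, 1.0]])
qdot = resolved_rate_step(J, x=np.zeros(2), x_des=np.array([0.2, 0.1]))
```

Acceleration- and force-level formulations add dynamics models on top of this kinematic core, which is where the paper's parameter-estimation concerns come in.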

A Control Architecture for Compliant Execution of Manipulation Tasks

2006

This paper deals with the problem of dependable physical interaction through manipulation in partially-known everyday human environments. We present a modular software architecture that allows the definition and compliant execution of manipulation tasks under the task frame formalism. We show the details of several software modules implemented within this architecture that enable higher levels of adaptability and robustness, as well as the incremental incorporation of more complex skills in a modular fashion. The whole system is validated by making a real robot arm with a three-finger hand perform a complex manipulation task: taking a book out of a bookshelf. Results show how the presented framework is suitable for easily defining and performing a great variety of manipulation tasks.