Constraint-based Task Specification and Estimation for Sensor-Based Robot Systems in the Presence of Geometric Uncertainty
Related papers
Unified Constraint-Based Task Specification for Complex Sensor-Based Robot Systems
Proceedings of the 2005 IEEE International Conference on Robotics and Automation, 2005
This paper presents a unified task specification formalism and a unified control scheme for the lowest control level of sensor-based robot tasks. The formalism is based on: (i) the integration of any sensor that provides (direct or indirect) distance (and time derivatives) and force information; (ii) the possibility to use multiple "Tool Centre Points", e.g. defined relative to the robot end effector, other links or the environment; (iii) the integration of optimization functions for underconstrained as well as overconstrained specifications with linear constraints; (iv) the integration of on-line estimators; and (v) compatibility with all major low-level control approaches.
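As an illustration of the kind of constraint resolution such a formalism relies on (not code from the paper), the sketch below solves a set of linear task constraints J q̇ = ẏ_d in a weighted least-squares sense, so that overconstrained specifications are traded off via constraint weights and underconstrained ones are regularized in joint space; the function name, weights and dimensions are illustrative assumptions.

    import numpy as np

    def resolve_constraints(J, yd_dot, W_task, W_joint):
        """Weighted least-squares resolution of linear task constraints.

        Minimizes ||J qdot - yd_dot||^2_{W_task} + ||qdot||^2_{W_joint},
        which covers both overconstrained (conflicting) and
        underconstrained (redundant) specifications.
        """
        A = J.T @ W_task @ J + W_joint          # normal equations
        b = J.T @ W_task @ yd_dot
        return np.linalg.solve(A, b)

    # Toy example: 3 constraints (e.g. one distance and two force-derived
    # directions) on a 4-DOF velocity-controlled robot.
    J = np.random.randn(3, 4)
    yd_dot = np.array([0.1, 0.0, -0.05])        # desired constraint rates
    qdot = resolve_constraints(J, yd_dot,
                               W_task=np.diag([10.0, 1.0, 1.0]),
                               W_joint=1e-3 * np.eye(4))
    print(qdot)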
iTASC: a tool for multi-sensor integration in robot manipulation
2008 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, 2008
iTASC (acronym for 'instantaneous task specification and control') [1] is a systematic constraint-based approach to specify complex tasks of general sensor-based robot systems. iTASC integrates both instantaneous task specification and estimation of geometric uncertainty in a unified framework. Automatic derivation of controller and estimator equations follows from a geometric task model that is obtained using a systematic task modeling procedure. The approach applies to a large variety of robot systems (mobile robots, multiple robot systems, dynamic human-robot interaction, etc.), various sensor systems, and different robot tasks.
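A minimal sketch of the control/estimation loop that such a framework automates, assuming a velocity-resolved robot, a task Jacobian J mapping joint velocities to feature velocities, and a single uncertain geometric parameter updated from a noisy measurement; the function names and the simple first-order estimator are illustrative assumptions, not the iTASC equations themselves.

    import numpy as np

    def control_step(q, chi_hat, y_desired, J, forward_task, Kp=1.0, L=0.5,
                     y_measured=None, dt=0.01):
        """One velocity-resolved control/estimation step.

        q          : joint positions
        chi_hat    : current estimate of an uncertain geometric parameter
        y_desired  : desired task-feature values
        J          : task Jacobian at (q, chi_hat), shape (m, n)
        forward_task(q, chi) -> predicted feature values
        """
        y_pred = forward_task(q, chi_hat)
        # Proportional constraint controller resolved in joint space.
        qdot = np.linalg.pinv(J) @ (Kp * (y_desired - y_pred))
        # Simple innovation-based update of the uncertain geometry.
        if y_measured is not None:
            chi_hat = chi_hat + L * np.mean(y_measured - y_pred) * dt
        return qdot, chi_hat

    # Toy 1-D example: feature = q[0] + chi, with chi an unknown offset.
    forward = lambda q, chi: np.array([q[0] + chi])
    qdot, chi = control_step(q=np.zeros(2), chi_hat=0.0,
                             y_desired=np.array([0.3]),
                             J=np.array([[1.0, 0.0]]),
                             forward_task=forward,
                             y_measured=np.array([0.05]))
    print(qdot, chi)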
A new methodology for dealing with uncertainty in robotic tasks
XIV. Int. Symp. on Comp. & Inf. Sci., Kuşadası, …, 1999
A new method that evaluates the execution of robotic operations at the task level is proposed. Traditional robot controllers use a variety of feedback loops at the motion level to solve dynamic control and motion-level planning problems, assuming that the original task plan is never modified. However, a drastic change in the robot's operating environment may also affect the sequence of tasks to be executed. In such a case, modifying the motion plan may not be sufficient to adapt to the unexpected change in the environment. In this study, a planning environment that allows the original task plan to be modified or entirely changed according to the changing environmental conditions is proposed. A heuristic approach is taken to decide whether the original plan should be modified or an entirely new plan should be generated. A simple case of pick-and-place sequencing in a blocks world is studied to demonstrate the idea.
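The decision described above (modify the current task plan or generate a new one when the environment changes) can be illustrated with a simple, hypothetical heuristic: estimate the cost of patching the invalidated plan steps versus planning from scratch, and replan when patching becomes clearly more expensive. The cost model and threshold below are assumptions for illustration only.

    def decide_repair_or_replan(remaining_steps, invalidated_steps,
                                replan_cost, repair_cost_per_step=1.0,
                                margin=1.2):
        """Return 'repair' or 'replan' after a change in the environment.

        remaining_steps    : steps left in the original task plan
        invalidated_steps  : subset whose preconditions no longer hold
        replan_cost        : estimated cost of generating a new plan
        """
        repair_cost = repair_cost_per_step * len(invalidated_steps)
        # Keep the original plan unless repairing is clearly more expensive.
        return "repair" if repair_cost <= margin * replan_cost else "replan"

    # Blocks-world style example: 2 of 6 remaining pick-and-place steps
    # are invalidated because a block was moved by an external event.
    print(decide_repair_or_replan(remaining_steps=list(range(6)),
                                  invalidated_steps=[2, 4],
                                  replan_cost=1.5))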
Contingent Task and Motion Planning under Uncertainty for Human–Robot Interactions
Applied Sciences
Manipulation planning under incomplete information is a highly challenging task for mobile manipulators. Uncertainty can be resolved by robot perception modules or by using human knowledge during execution. Human operators can also collaborate with robots in the execution of some difficult actions, or act as helpers in sharing the task knowledge. In this scope, a contingent task and motion planning approach is proposed that takes into account robot uncertainty and human–robot interactions, resulting in a tree-shaped set of geometrically feasible plans. Different sorts of geometric reasoning processes are embedded inside the planner to cope with task constraints, such as detecting occluding objects when a robot needs to grasp an object. The proposal has been evaluated in different challenging scenarios, both in simulation and in a real environment.
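A tree-shaped set of contingent plans, as described above, can be represented by nodes whose branches correspond to possible observation or human-interaction outcomes; the data structure below is a generic illustration, and its names are assumptions rather than the planner's actual types.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class ContingentNode:
        """One node of a contingent task-and-motion plan tree."""
        action: str                      # e.g. "grasp(cup)" or "ask_human(remove_obstacle)"
        geometrically_feasible: bool     # result of the embedded geometric reasoning
        branches: Dict[str, "ContingentNode"] = field(default_factory=dict)

        def add_outcome(self, observation: str, child: "ContingentNode"):
            self.branches[observation] = child

    # Tiny example: the target may be occluded; either the robot grasps it
    # directly or asks the human operator to remove the occluder.
    root = ContingentNode("observe(target)", True)
    root.add_outcome("occluded",
                     ContingentNode("ask_human(remove_occluder)", True))
    root.add_outcome("visible",
                     ContingentNode("grasp(target)", True))
    print(list(root.branches))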
Constrained motion control of robots with task constraints identification by US sensing
Proceedings of the 1992 IEEE …, 1992
The use of ultrasonic (US) range sensing in constrained motion control of robots is investigated. A US sensing system is integrated into a force/position control scheme and allows on-line identification of the motion constraints during contact with a partially unknown environment. Measurement of the relative distance of the end effector to the surface is used to obtain a fast but safe approach motion. A planar three-dimensional task space is considered in a case study; thus two translational/force components and one rotation/torque component are of interest. Experimental results on a PUMA-560 manipulator show the effectiveness of the proposed scheme.
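The "fast but safe approach" idea above (scale the approach velocity with the measured range to the surface, then switch to force control once contact is detected) can be sketched as follows; the gains, thresholds and the simple switching rule are assumptions for illustration, not the controller of the paper.

    def approach_command(distance, force, v_max=0.2, k_d=2.0,
                         contact_force=1.0, f_desired=5.0, k_f=0.002):
        """Blend distance-based approach and force control (1-D sketch).

        distance : ultrasonic range to the surface [m]
        force    : measured normal contact force [N]
        Returns a commanded approach velocity [m/s].
        """
        if force < contact_force:
            # Free space: speed proportional to the remaining distance,
            # saturated at v_max, so motion is fast far away and slow nearby.
            return min(v_max, k_d * distance)
        # In contact: simple proportional force control along the normal.
        return k_f * (f_desired - force)

    for d, f in [(0.5, 0.0), (0.05, 0.0), (0.0, 3.0)]:
        print(d, f, approach_command(d, f))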
Robot motion planning with uncertainty in control and sensing
Artificial Intelligence, 1991
In this paper, we consider the problem of planning motion strategies in the presence of uncertainty in both control and sensing for simple robots described in a two-dimensional configuration space. We consider the preimage backchaining approach to this problem, which was first proposed by Lozano-Perez et al. Although attractive, the approach raises several difficult computational issues. One of them, which is directly addressed in this paper, is preimage computation. We describe two practical methods for computing preimages, which we call backprojection from sticking edges and backprojection from goal kernel. In the last sections of the paper, we discuss non-implemented improvements of this planner and present additional results.
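A one-dimensional toy version of backprojection under control uncertainty (not the planar algorithms of the paper): given a goal interval and a bounded control error per commanded motion, the backprojection is the set of start states from which every possible outcome of that motion still lands inside the goal. The interval arithmetic below is an illustrative assumption.

    def backproject_interval(goal, step, control_error):
        """Backprojection of a goal interval under a 1-D commanded motion.

        goal          : (lo, hi) goal interval
        step          : nominal commanded displacement (positive = toward goal)
        control_error : worst-case deviation of the realized displacement
        Returns the interval of start states guaranteed to reach the goal.
        """
        lo, hi = goal
        # The realized displacement lies in [step - e, step + e]; require
        # the whole set of possible outcomes to be inside the goal.
        return (lo - (step - control_error), hi - (step + control_error))

    # Goal [0.9, 1.0], commanded step 0.2, control error 0.02:
    print(backproject_interval((0.9, 1.0), 0.2, 0.02))   # -> (0.72, 0.78)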
Planning robot motion strategies under geometric uncertainty constraints
Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'94), 1994
This paper presents an approach to plan robust motion strategies for a robot navigating through an environment composed of polygonal obstacles, subject to geometric uncertainty constraints. The contacts of the robot against the obstacles are used to reduce its position and orientation uncertainties and to guide the robot to its goal configuration. We describe a motion planner based on this approach which generates both free-space motions and compliant motions (contact-based motions). An explicit geometric model of the uncertainty is used to estimate the reachability of the goal and of the intermediate subgoals (these are generated when the final goal is not reachable from the current robot configuration). Two functions are combined to explore the valid space: (1) a contact-based potential field function is used to generate continuous motions pulling the robot towards target attractors; (2) an exploration function is used to determine possible subgoals allowing the robot to progress towards the ultimate goal when local minima of the previous function occur.
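The combination described above (an attractive/repulsive potential pulling the robot toward the current target, plus an exploration step that proposes a subgoal whenever the field gets stuck in a local minimum) can be sketched in 2-D as follows; the potential, the stuck test and the random subgoal proposal are illustrative assumptions, not the planner's actual functions.

    import numpy as np

    def gradient_step(x, target, obstacles, step=0.05, k_rep=0.01, d0=0.3):
        """One descent step of an attractive/repulsive potential field."""
        grad = x - target                                    # attractive term
        for o in obstacles:
            d = np.linalg.norm(x - o)
            if 1e-6 < d < d0:
                # repulsive term pushes away from nearby obstacles
                grad -= k_rep * (1.0 / d - 1.0 / d0) * (x - o) / d**3
        return x - step * grad

    def plan(start, goal, obstacles, max_iters=500, stuck_tol=1e-4, rng=None):
        """Potential-field descent with random subgoals at local minima."""
        rng = rng or np.random.default_rng(0)
        x, target, path = np.array(start, float), np.array(goal, float), []
        for _ in range(max_iters):
            x_new = gradient_step(x, target, obstacles)
            if np.linalg.norm(x_new - x) < stuck_tol:        # local minimum:
                target = x + rng.uniform(-0.5, 0.5, size=2)  # propose a subgoal
            elif np.linalg.norm(x_new - np.array(goal)) < 0.05:
                path.append(x_new)
                return path                                  # goal reached
            else:
                target = np.array(goal, float)               # resume toward goal
            x = x_new
            path.append(x)
        return path

    path = plan([0.0, 0.0], [1.0, 1.0], obstacles=[np.array([0.5, 0.5])])
    print(len(path), path[-1])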
Constrained motion task control of robotic manipulators
Mechanism and Machine Theory, 1994
Control of robotic manipulators during constrained motion task execution is the subject of this brief paper. Our previous work in this area addressed control of manipulators during constrained motion subject only to kinematic position constraints. This paper addresses the problem of manipulator constrained motion control for the case of arbitrary kinematic constraints acting on the system as well as dynamic parameter uncertainty. Hence, this brief paper represents a generalization of our previous results. A method is presented which permits the asymptotic regulation of both generalized forces and position of the manipulator. Regulation of these outputs is achieved in the presence of constant unmodelled disturbances which may act on the system. The control system synthesis is based on a general theory associated with the control of descriptor variable systems. The linearized manipulator dynamics is decomposed into a "slow" and a "fast" subsystem. The slow subsystem, corresponding to the manipulator states that lie within the subspace of constrained states, is stabilized to yield an asymptotically stable system. The dynamics of the fast subsystem may be ignored, as shown in the paper. Synthesized from a linearized model of the manipulator dynamics, the method is valid only in a neighborhood about the point of linearization. It is assumed in this paper that the kinematics of the manipulator and the contact environment are known. A numerical example serves to illustrate the method presented here.
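The central step (restricting the linearized dynamics to the subspace of motions compatible with the kinematic constraints, so that only the "slow" subsystem needs to be stabilized) can be illustrated with a nullspace projection; the matrices below are arbitrary placeholders, not the descriptor-variable machinery of the paper.

    import numpy as np

    # Linearized manipulator dynamics xdot = A x + B u, subject to a
    # kinematic constraint C x = 0 (C is the constraint Jacobian).
    A = np.array([[0.0, 1.0, 0.0, 0.0],
                  [0.0, -0.5, 0.2, 0.0],
                  [0.0, 0.0, 0.0, 1.0],
                  [0.1, 0.0, 0.0, -0.4]])
    B = np.array([[0.0], [1.0], [0.0], [0.5]])
    C = np.array([[1.0, 0.0, -1.0, 0.0]])    # e.g. two coordinates locked together

    # Orthonormal basis Z of the admissible subspace (the nullspace of C).
    _, _, Vt = np.linalg.svd(C)
    rank = np.linalg.matrix_rank(C)
    Z = Vt[rank:].T                          # shape (4, 3)

    # Reduced "slow" subsystem on the constraint subspace: x = Z z, and
    # constraint forces (in range(C^T)) are annihilated by Z^T, so
    # zdot = Z^T A Z z + Z^T B u.
    A_slow = Z.T @ A @ Z
    B_slow = Z.T @ B
    print("slow-subsystem A:\n", A_slow, "\nslow-subsystem B:\n", B_slow)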
Redefinition of the robot motion-control problem
IEEE Control Systems Magazine
The objective of this paper is a redefinition of the robot control problem, based on (1) realistic models for the industrial robot as a controlled plant, (2) end-effector trajectories consistent with manufacturing applications, and (3) the need for end-effector sensing to compensate for uncertainties inherent to most robotic manufacturing applications. Based on extensive analytical and experimental studies, robot dynamic models are presented that have been validated over the frequency range 0 to 50 Hz. These models exhibit a strong influence of drive system flexibility, producing lightly damped poles in the neighborhood of 8 Hz, 14 Hz, and 40 Hz, all unmodeled by the conventional rigid-body multiple-link robot dynamic approach. The models presented also quantify the significance of nonlinearities in the drive system, in addition to those well known in the linkage itself. Simulations of robot dynamics and motion controls demonstrate that existing controls coupled with effective path planning produce dynamic path errors that are acceptable for most manufacturing applications. Major benefits are projected, with examples cited, for the use of end-effector sensors for position, force, and process control. Nearly all models for robot dynamics presented in the literature are based on the assumption that the arm is a linkage of connected rigid bodies [1]. Using the Newton-Euler, Lagrange, Kane, or other approaches, the kinematics and dynamics of multiple-link robot arms are derived and reduced to the form H(q)q̈ + C(q, q̇) + G(q) + K(q)ᵀM = T, where q is a vector of robot joint angles, H is a moment-of-inertia matrix, and C collects the velocity-dependent (Coriolis and centrifugal) terms.
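The drive-flexibility effect described above (a lightly damped resonance between motor and link that the rigid-body model misses) can be reproduced with a standard two-mass model; the inertia, stiffness and damping values below are arbitrary assumptions, chosen only so that the resonance falls near the reported 8 Hz.

    import numpy as np

    # Two-mass drive model: motor inertia Jm coupled to link inertia Jl
    # through a flexible transmission with stiffness k and damping c.
    Jm, Jl = 0.01, 0.05          # kg m^2 (illustrative values)
    k, c = 21.0, 0.02            # N m / rad, N m s / rad

    # State x = [theta_m, omega_m, theta_l, omega_l]; input = motor torque.
    A = np.array([
        [0.0,      1.0,      0.0,      0.0],
        [-k / Jm, -c / Jm,   k / Jm,   c / Jm],
        [0.0,      0.0,      0.0,      1.0],
        [k / Jl,   c / Jl,  -k / Jl,  -c / Jl],
    ])

    # The flexible mode appears as a lightly damped complex pole pair,
    # absent from the rigid-body (single inertia) model.
    poles = np.linalg.eigvals(A)
    flexible = poles[np.abs(poles.imag) > 1.0]
    print("drive resonance ~ %.1f Hz" % (np.abs(flexible[0]) / (2 * np.pi)))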
Constraint-Based Task Specification and Control for Visual Servoing Application Scenarios
2012
This paper reformulates image-based visual servoing as a constraint-based robot task, in order to integrate it seamlessly with other task constraints in image space, in Cartesian space, in the joint space of the robot, or in the "image space" of any other sensor (e.g. force, distance). This approach allows us to fuse various kinds of sensor data. The integration takes place via the specification of generic "feature coordinates", defined in the different task spaces. Independent control loops are defined to control the individual feature coordinate setpoints in each of these task spaces. The outputs of the control loops are instantaneously combined into joint velocity setpoints for a velocity-controlled robot that executes the task. The paper includes experimental results for different application scenarios.
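The fusion step described above (independent proportional loops on feature coordinates defined in different spaces, instantaneously combined into a single joint-velocity setpoint) can be sketched by stacking the feature Jacobians and errors and solving a weighted least-squares problem; the Jacobians, gains and weights below are placeholders, not the paper's experimental setup.

    import numpy as np

    def combine_feature_loops(jacobians, errors, gains, weights, n_joints,
                              damping=1e-3):
        """Combine per-feature proportional loops into one joint velocity.

        jacobians : list of (m_i, n_joints) feature Jacobians
        errors    : list of (m_i,) feature errors (setpoint - measurement)
        gains     : list of proportional gains, one per feature space
        weights   : list of relative weights, one per feature space
        """
        J = np.vstack(jacobians)
        yd = np.concatenate([k * e for k, e in zip(gains, errors)])
        W = np.diag(np.concatenate([w * np.ones(len(e))
                                    for w, e in zip(weights, errors)]))
        A = J.T @ W @ J + damping * np.eye(n_joints)
        return np.linalg.solve(A, J.T @ W @ yd)

    # Example: two image-space features and one Cartesian distance constraint
    # on a 6-DOF velocity-controlled robot.
    qdot = combine_feature_loops(
        jacobians=[np.random.randn(2, 6), np.random.randn(1, 6)],
        errors=[np.array([5.0, -3.0]), np.array([0.02])],   # pixels, metres
        gains=[0.5, 2.0],
        weights=[1.0, 10.0],
        n_joints=6)
    print(qdot)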