High-level Reasoning and Low-level Learning for Grasping: A Probabilistic Logic Pipeline

High-level Reasoning and Low-level Learning for Grasping: A Probabilistic Logic Pipeline

2016

Abstract—While grasps must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. In this paper, we consider such information for robot grasping by leveraging manifolds and symbolic object parts. Specifically, we introduce a new probabilistic logic module to first semantically reason about pre-grasp configurations with respect to the intended tasks. Further, a mapping is learned from part-related visual features to good grasping points. The probabilistic logic module makes use of object-task affordances and object/task ontologies to encode rules that generalize over similar object parts and object/task categories. The use of probabilistic logic for task-dependent grasping contrasts with current approaches that usually learn direct mappings from visual perceptions to task-dependent grasping points. We show the benefits of the full probabilistic logi...

Semantic and geometric reasoning for robotic grasping: a probabilistic logic approach

Autonomous Robots

While any grasp must satisfy the grasping stability criteria, good grasps depend on the specific manipulation scenario: the object, its properties and functionalities, as well as the task and grasp constraints. We propose a probabilistic logic approach for robot grasping, which improves grasping capabilities by leveraging semantic object parts. It provides the robot with semantic reasoning skills about the most likely object part to be grasped, given the task constraints and object properties, while also dealing with the uncertainty of visual perception and grasp planning. The probabilistic logic framework is task-dependent. It semantically reasons about pre-grasp configurations with respect to the intended task and employs object-task affordances and object/task ontologies to encode rules that generalize over similar object parts and object/task categories. The use of probabilistic logic for task-dependent grasping contrasts with current approaches that usually learn direct mappings from visual perceptions to task-dependent grasping points. The logic-based module receives data from a low-level module that extracts semantic object parts, and sends information to the low-level grasp planner. These three modules define our probabilistic logic framework, which is able to perform robotic grasping in realistic kitchen-related scenarios.
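As a rough illustration of the kind of rule-based, task-dependent part selection described in this abstract, the following Python sketch scores candidate object parts for a task by combining hand-written affordance rules over a tiny object ontology with per-part detection confidence. The ontology, predicates, and probabilities are invented for the example and are not the authors' actual rule base or inference engine.

```python
# Minimal sketch (not the paper's implementation): task-dependent part scoring
# from probabilistic affordance rules plus noisy part detection.

OBJECT_ONTOLOGY = {"mug": "container", "pan": "container", "knife": "tool"}

# Illustrative rule weights: P(part affords task | object category, part label).
AFFORDANCE_RULES = {
    ("container", "handle", "pour"): 0.9,       # grasp the handle to pour
    ("container", "body", "pour"): 0.2,
    ("container", "handle", "hand_over"): 0.4,
    ("container", "body", "hand_over"): 0.8,    # leave the handle free for the receiver
    ("tool", "handle", "cut"): 0.95,
}

def part_scores(obj, parts, task, part_detection_conf):
    """Combine rule probabilities with perception confidence per part."""
    category = OBJECT_ONTOLOGY.get(obj, "unknown")
    scores = {}
    for part in parts:
        rule_p = AFFORDANCE_RULES.get((category, part, task), 0.05)
        # Weight the rule by the detector's confidence in having found the part.
        scores[part] = rule_p * part_detection_conf.get(part, 0.5)
    return scores

if __name__ == "__main__":
    conf = {"handle": 0.7, "body": 0.9}
    print(part_scores("mug", ["handle", "body"], "pour", conf))
    # The handle wins for pouring despite a lower detection confidence.
```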

Relational Affordance Learning for Task-Dependent Robot Grasping

2017

Robot grasping depends on the specific manipulation scenario: the object, its properties, task and grasp constraints. Object-task affordances facilitate semantic reasoning about pre-grasp configurations with respect to the intended tasks, favoring good grasps. We employ probabilistic rule learning to recover such object-task affordances for task-dependent grasping from realistic video data.

Towards Robust Grasps: Using the Environment Semantics for Robotic Object Affordances

2018

Artificial Intelligence is essential to achieve reliable human-robot interaction, especially when it comes to manipulation tasks. Most of the state-of-the-art literature explores robotic grasping methods by focusing on the target object or the robot's morphology, without including the environment. In human cognitive development, these physical qualities are inferred not only from the object, but also from the semantic characteristics of the surroundings. The same analogy can be used in robotic affordances to improve object grasps, where the perceived physical qualities of the objects give valuable information about the possible manipulation actions. This work proposes a framework able to reason about object affordances and grasping regions. Each calculated grasping area is the result of a sequence of concrete ranked decisions based on the inference of different, highly related attributes. The results show that the system is able to infer good grasping areas depending on the affordance, without any a priori knowledge of the shape or the grasping points.
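The "sequence of concrete ranked decisions based on inferred attributes" can be pictured as a weighted scoring of candidate grasp regions. The sketch below is only an illustration under assumed attribute names and weights; the paper's actual attribute set and decision chain differ.

```python
# Illustrative sketch: ranking candidate grasp regions by inferred attributes,
# with per-affordance weights. Attribute names and weights are assumptions.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    attributes: dict  # e.g. {"graspable_width": 0.8, "clearance": 0.6, ...}

def rank_regions(regions, affordance):
    # Hypothetical per-affordance attribute weights.
    weights = {
        "pour": {"graspable_width": 0.4, "clearance": 0.3, "away_from_opening": 0.3},
        "store": {"graspable_width": 0.5, "clearance": 0.5},
    }[affordance]
    scored = [
        (sum(w * r.attributes.get(a, 0.0) for a, w in weights.items()), r.name)
        for r in regions
    ]
    return sorted(scored, reverse=True)

regions = [
    Region("rim", {"graspable_width": 0.9, "clearance": 0.4, "away_from_opening": 0.1}),
    Region("handle", {"graspable_width": 0.7, "clearance": 0.8, "away_from_opening": 0.9}),
]
print(rank_regions(regions, "pour"))  # the handle region should rank first
```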

Reasoning about grasping

National Conference on Artificial Intelligence, 1988

The promise of robots for the future is that of intelligent, autonomous machines functioning in a variety of tasks and situations. If this promise is to be met, then it is vital that robots be capable of grasping and manipulating a wide range of objects in the execution of highly variable tasks. A current model of human grasping divides the grasp into two stages, a precontact stage and a postcontact stage. In this paper, we present a rule-based reasoning system and an object representation paradigm for a robotic system which utilizes this model to reason about grasping during the precontact stage. Sensed object features and their spatial relations are used to invoke a set of hand preshapes and reach parameters for the robot arm/hand. The system has been implemented in PROLOG and results are presented to illustrate how the system functions.
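The paper's precontact reasoning maps sensed object features to hand preshapes and reach parameters via rules implemented in PROLOG. The Python snippet below re-expresses that idea in a toy form; the feature names, thresholds, and preshape labels are illustrative assumptions, not the original rule set.

```python
# Toy re-expression of precontact rules: sensed features -> (preshape, approach).
# Feature names, thresholds, and labels are made up for illustration.

def select_preshape(features):
    """Map sensed object features to a hand preshape and reach strategy using ordered rules."""
    if features.get("has_handle") and features.get("handle_width_cm", 0) < 4:
        return "cylindrical_wrap", "approach_along_handle_axis"
    if features.get("is_flat") and features.get("thickness_cm", 99) < 1:
        return "pinch", "approach_from_above"
    if features.get("diameter_cm", 99) < 6:
        return "precision_tripod", "approach_from_above"
    return "power_grasp", "approach_along_major_axis"

print(select_preshape({"has_handle": True, "handle_width_cm": 2.5}))
print(select_preshape({"is_flat": True, "thickness_cm": 0.5}))
```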

Knowledge-based reasoning from human grasp demonstrations for robot grasp synthesis

Robotics and Autonomous Systems, 2014

Humans excel when dealing with everyday manipulation tasks, being able to learn new skills and to adapt to different complex environments. This results from lifelong learning, and also from observation of other skilled humans. To obtain similar dexterity with robotic hands, cognitive capacity is needed to deal with uncertainty. By extracting relevant multisensor information from the environment (objects), knowledge from previous grasping tasks can be generalized and applied within different contexts. Based on this strategy, we show in this paper that learning from human experiences is a way to accomplish our goal of robot grasp synthesis for unknown objects. We address an artificial system that relies on knowledge from previous human object grasping demonstrations. A learning process is adopted to quantify probabilistic distributions and uncertainty. These distributions are combined with preliminary knowledge towards inference of proper grasps given a point cloud of an unknown object. The method comprises a twofold process: object decomposition and grasp synthesis. Objects are decomposed into primitives, across which similarities between past observations and new unknown objects can be established. Grasps are associated with the defined object primitives, so that feasible object regions for grasping can be determined. The hand pose relative to the object is computed for the pre-grasp and the selected grasp. We have validated our approach on a real robotic platform, a dexterous robotic hand. Results show that segmenting the object into primitives allows the most suitable regions for grasping to be identified based on previous learning. The proposed approach provides suitable grasps, outperforming more time-consuming analytical and geometrical approaches, and contributes to autonomous grasping.
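One simple way to picture the primitive-matching step described above is to weight demonstrated grasps by how similar a new object's primitive is to primitives seen in the demonstrations. The sketch below assumes the decomposition into primitives is already available and uses a made-up size descriptor and Gaussian similarity; it is not the paper's exact formulation.

```python
# Sketch under assumptions: a demonstration library maps primitive descriptors
# to grasp types; new primitives are matched by Gaussian similarity of size.

import numpy as np

# Library learned from human demonstrations: primitive descriptor -> grasp + prior.
GRASP_LIBRARY = [
    {"descriptor": np.array([0.10, 0.03, 0.03]),  # cylinder-like dimensions (m)
     "grasp": "side_wrap", "prior": 0.7},
    {"descriptor": np.array([0.02, 0.15, 0.15]),  # flat, disc-like dimensions
     "grasp": "top_pinch", "prior": 0.3},
]

def grasp_posterior(primitive_dims, sigma=0.03):
    """Weight library grasps by Gaussian similarity of primitive size, then normalize."""
    weights = []
    for entry in GRASP_LIBRARY:
        d2 = np.sum((primitive_dims - entry["descriptor"]) ** 2)
        weights.append(entry["prior"] * np.exp(-d2 / (2 * sigma ** 2)))
    weights = np.asarray(weights)
    weights /= weights.sum()
    return {e["grasp"]: w for e, w in zip(GRASP_LIBRARY, weights)}

print(grasp_posterior(np.array([0.09, 0.035, 0.03])))  # favours side_wrap
```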

Learning objects and grasp affordances through autonomous exploration

Computer Vision …, 2009

We describe a system for autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of grasp affordances generated through biased, random exploration. Thus, the system generates object and grasping knowledge through interaction with its environment, based on a careful balance between, on the one hand, generic prior knowledge encoded in (1) the embodiment of the system, (2) a vision system extracting structurally rich information from stereo image sequences, and (3) a number of built-in behavioral modules, and, on the other hand, autonomous exploration.

Unsupervised learning of predictive parts for cross-object grasp transfer

2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2013

We present a principled solution to the problem of transferring grasps across objects. Our approach identifies, through autonomous exploration, the size and shape of object parts that consistently predict the applicability of a grasp across multiple objects. The robot can then use these parts to plan grasps onto novel objects. By contrast to most recent methods, we aim to solve the part-learning problem without the help of a human teacher. The robot collects training data autonomously by exploring different grasps on its own. The core principle of our approach is an intensive encoding of low-level sensorimotor uncertainty with probabilistic models, which allows the robot to generalize the noisy autonomously-generated grasps. Object shape, which is our main cue for predicting grasps, is encoded with surface densities, which model the spatial distribution of points that belong to an object's surface. Grasp parameters are modeled with grasp densities, which correspond to the spatial distribution of object-relative gripper poses that lead to a grasp. The size and shape of grasp-predicting parts are identified by sampling the cross-object correlation of local shape and grasp parameters. We approximate sampling and integrals via Monte Carlo methods to make our computer implementation tractable. We demonstrate the applicability of our method in simulation. A proof of concept on a real robot is also provided.
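The "grasp density" idea amounts to a kernel density estimate over object-relative gripper poses, which can then be maximized by Monte Carlo sampling. The sketch below is heavily simplified, reducing poses to 3-D positions with an isotropic Gaussian kernel (the paper works with full 6-DOF poses and orientation kernels); the data and bandwidth are made up.

```python
# Simplified sketch of a grasp density: KDE over grasp positions, maximized
# by drawing candidates and keeping the densest one. Not the paper's full model.

import numpy as np

rng = np.random.default_rng(0)
# Successful object-relative grasp positions collected by exploration (synthetic).
successful_grasps = rng.normal(loc=[0.0, 0.05, 0.12], scale=0.01, size=(50, 3))

def grasp_density(query, samples, bandwidth=0.02):
    """Isotropic Gaussian kernel density estimate over grasp positions."""
    diffs = samples - query
    k = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * bandwidth ** 2))
    norm = (2 * np.pi * bandwidth ** 2) ** 1.5
    return k.mean() / norm

def best_of_n(samples, n=200):
    """Monte Carlo maximization: perturb stored grasps and keep the densest candidate."""
    idx = rng.integers(len(samples), size=n)
    candidates = samples[idx] + rng.normal(scale=0.01, size=(n, 3))
    scores = [grasp_density(c, samples) for c in candidates]
    return candidates[int(np.argmax(scores))]

print(best_of_n(successful_grasps))  # lands near the cluster around (0, 0.05, 0.12)
```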

Probabilistic Models of Object Geometry for Grasp Planning

Robotics: Science and Systems IV, 2008

Robot manipulators generally rely on complete knowledge of object geometry in order to plan motions and compute successful grasps. However, manipulating real-world objects poses a substantial modelling challenge. New instances of known object classes may vary from learned models. Objects that are not perfectly rigid may appear in new configurations that do not match any of the known geometries. In this paper we describe an algorithm for learning generative probabilistic models of object geometry for the purposes of manipulation; these models capture both non-rigid deformations of known objects and variability of objects within a known class. Given a single image of partially occluded objects, the model can be used to recognize objects based on the visible portion of each object contour, and then estimate the complete geometry of the object to allow grasp planning. We provide two main contributions: a probabilistic model of shape geometry and a graphical model for performing correspondence between shape descriptions. We show examples of learned models from image data and demonstrate how the learned models can be used by a manipulation planner to grasp objects in cluttered visual scenes.
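One common way to realize a generative model of object contours is a PCA point-distribution model: learn a mean contour and a few modes of variation, then complete a partially occluded contour by fitting the mode coefficients to the visible points. The sketch below illustrates that general idea on synthetic contours; the paper's actual shape model and its correspondence graph are more involved.

```python
# Illustrative PCA contour model (not the paper's exact model): learn shape
# modes from training contours, then complete a partially visible contour.

import numpy as np

rng = np.random.default_rng(1)

def make_contour(scale_x, scale_y, n=64):
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return np.column_stack([scale_x * np.cos(t), scale_y * np.sin(t)])

# Training set: ellipse-like contours standing in for instances of one class.
train = np.stack([
    make_contour(1.0 + 0.2 * rng.standard_normal(),
                 0.6 + 0.1 * rng.standard_normal()).ravel()
    for _ in range(30)
])

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:3]                                  # keep three modes of shape variation

def complete_contour(partial, visible_idx):
    """Fit mode coefficients to the visible coordinates, then regenerate the full contour."""
    A = basis[:, visible_idx].T                 # (n_visible, n_modes)
    b = partial - mean[visible_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + coeffs @ basis                # full flattened (x, y) contour

full = make_contour(1.1, 0.65).ravel()
visible_idx = np.arange(80)                     # first 40 of 64 points are visible
estimate = complete_contour(full[visible_idx], visible_idx)
print(np.abs(estimate - full).max())            # small completion error
```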