Unsupervised learning of affordance relations on a humanoid robot

A system for learning basic object affordances using a self-organizing map

2008

When a cognitive system encounters particular objects, it needs to know what effect each of its possible actions will have on the state of each of those objects in order to be able to make effective decisions and achieve its goals. Moreover, it should be able to generalize effectively so that when it encounters novel objects, it is able to estimate what effect its actions will have on them based on its experiences with previously encountered similar objects. This idea is encapsulated by the term "affordance", e.g. "a ball affords being rolled to the right when pushed from the left." In this paper, we discuss the development of a cognitive vision platform that uses a robotic arm to interact with household objects in an attempt to learn some of their basic affordance properties. We outline the various sensor and effector module competencies that were needed to achieve this and describe an experiment that uses a self-organizing map to integrate these modalities in a working affordance learning system.
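
As a rough illustration of how a self-organizing map can tie these modalities together, the sketch below trains a small SOM (plain NumPy, no robot data) on joint vectors of object features, an action index, and an observed effect, and then recalls the stored effect for a novel object by matching only the known components. All feature names, dimensions, and data are invented placeholders, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative training data: each row is [object features..., action id, observed effect].
# In the paper these come from vision and effector modules; here they are random placeholders.
n_samples, n_feat = 200, 4
X_obj = rng.random((n_samples, n_feat))          # object property features
a = rng.integers(0, 2, size=(n_samples, 1))      # action index (e.g. push from left / right)
e = (X_obj[:, :1] + 0.1 * rng.standard_normal((n_samples, 1)) > 0.5).astype(float)  # effect (rolled or not)
data = np.hstack([X_obj, a, e])

# A small self-organizing map trained on the joint (features, action, effect) vectors.
rows, cols, dim = 8, 8, data.shape[1]
weights = rng.random((rows, cols, dim))
grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)

def train_som(weights, data, epochs=20, lr0=0.5, sigma0=3.0):
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # decaying neighbourhood radius
        for x in rng.permutation(data):
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
            h = np.exp(-np.sum((grid - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

weights = train_som(weights, data)

def predict_effect(weights, obj_features, action):
    """Find the best-matching unit using only the known components
    (object features + action) and read the effect stored in that unit."""
    query = np.concatenate([obj_features, [action]])
    d = np.linalg.norm(weights[..., :-1] - query, axis=2)
    bmu = np.unravel_index(np.argmin(d), d.shape)
    return weights[bmu][-1]

print(predict_effect(weights, rng.random(n_feat), action=1))
```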

Learning intermediate object affordances: Towards the development of a tool concept

4th International Conference on Development and Learning and on Epigenetic Robotics, 2014

Inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects, many works in robotics have proposed developmental approaches to allow robots to learn the effects of their own motor actions on objects, i.e., the objects' affordances. While holding an object, infants also promote its contact with other objects, resulting in object-object interactions that may afford effects not possible otherwise. Depending on the characteristics of both the held object (intermediate) and the acted object (primary), systematic outcomes may occur, leading to the emergence of a primitive concept of tool. In this paper we describe experiments with a humanoid robot exploring object-object interactions in a playground scenario and learning a probabilistic causal model of the effects of actions as functions of the characteristics of both objects. The model directly links the objects' 2D shape visual cues to the effects of actions. Because no object recognition skills are required, generalization to novel objects is possible by exploiting the correlations between the shape descriptors. We show experiments where an affordance model is learned in a simulated environment, and is then used on the real robotic platform, showing generalization abilities in effect prediction. We argue that, despite the fact that during exploration no concept of tool is given to the system, this very concept may emerge from the knowledge that intermediate objects lead to significant effects when acting on other objects.
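
The paper's model is a probabilistic network over 2D shape cues of both objects; as a loose stand-in, the sketch below uses scikit-learn's BayesianRidge to predict a continuous effect (displacement) from concatenated shape descriptors of the intermediate and primary objects plus the action, returning a predictive mean and uncertainty. Descriptor names, the synthetic data, and the choice of regressor are all assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(1)

# Illustrative stand-in: predict the primary object's displacement from 2D shape
# descriptors of BOTH objects plus the action. Descriptor names are assumptions.
n = 300
tool_shape = rng.random((n, 3))      # e.g. elongation, convexity, area of the held object
obj_shape = rng.random((n, 3))       # the same descriptors for the acted object
action = rng.integers(0, 3, (n, 1))  # e.g. pull / push / slide sideways
X = np.hstack([tool_shape, obj_shape, action])

# Synthetic "displacement" effect depending on tool elongation and object convexity.
y = 2.0 * tool_shape[:, 0] * obj_shape[:, 1] + 0.3 * action[:, 0] + 0.05 * rng.standard_normal(n)

model = BayesianRidge().fit(X, y)

# Predict the effect (with uncertainty) for a novel tool/object pair.
x_new = np.hstack([rng.random(3), rng.random(3), [1]]).reshape(1, -1)
mean, std = model.predict(x_new, return_std=True)
print(f"predicted displacement: {mean[0]:.2f} +/- {std[0]:.2f}")
```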

Using a SOFM to learn Object Affordances

2004

Learning affordances can be defined as learning action potentials, i.e., learning that an object exhibiting certain regularities offers the possibility of performing a particular action. We propose a method to endow an agent with the capability of acquiring this knowledge by relating object invariants to the potentiality of performing an action via interaction episodes with each object. We introduce a biologically inspired model to test this learning hypothesis and a set of experiments to check its validity in a Webots simulation with a Khepera robot in a simple environment. The experiments aim to show that a GWR network can cluster the agent's sensory input and, furthermore, that this neural clustering algorithm can be used as a starting point for building agents that learn the relevant functional bindings between cues in the environment and the agent's internal needs.
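
A simplified sketch of the grow-when-required (GWR) clustering idea referred to above is given below: a new node is added when an input is poorly matched by an already well-trained node; otherwise the winning node and its neighbours adapt towards the input. Edge ageing and several details of the original algorithm are omitted, and all thresholds and learning rates are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

class SimpleGWR:
    """Simplified Grow-When-Required clustering of sensory vectors."""
    def __init__(self, dim, activity_thr=0.85, habit_thr=0.1,
                 eps_b=0.2, eps_n=0.05, tau=3.0):
        self.W = rng.random((2, dim))          # start with two random nodes
        self.h = np.ones(2)                    # habituation counters (1 = fresh node)
        self.edges = {(0, 1)}
        self.a_T, self.h_T = activity_thr, habit_thr
        self.eps_b, self.eps_n, self.tau = eps_b, eps_n, tau

    def neighbours(self, i):
        return [b if a == i else a for a, b in self.edges if i in (a, b)]

    def update(self, x):
        d = np.linalg.norm(self.W - x, axis=1)
        b, s = np.argsort(d)[:2]               # best and second-best matching nodes
        self.edges.add((min(b, s), max(b, s)))
        activity = np.exp(-d[b])
        if activity < self.a_T and self.h[b] < self.h_T:
            # Input poorly represented by a well-trained node: grow a new node.
            r = len(self.W)
            self.W = np.vstack([self.W, (self.W[b] + x) / 2])
            self.h = np.append(self.h, 1.0)
            self.edges.discard((min(b, s), max(b, s)))
            self.edges.update({(min(b, r), max(b, r)), (min(s, r), max(s, r))})
        else:
            # Adapt the winner and its neighbours towards the input.
            self.W[b] += self.eps_b * self.h[b] * (x - self.W[b])
            for n in self.neighbours(b):
                self.W[n] += self.eps_n * self.h[n] * (x - self.W[n])
            self.h[b] *= np.exp(-1.0 / self.tau)   # winner habituates (h decays to 0)

# Cluster a toy two-blob "sensory" stream.
data = np.vstack([rng.normal(0.2, 0.05, (200, 2)), rng.normal(0.8, 0.05, (200, 2))])
net = SimpleGWR(dim=2)
for x in rng.permutation(data):
    net.update(x)
print("nodes grown:", len(net.W))
```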

Learning Predictive Features in Affordance-based Robotic Perception Systems

2006

This work is about the relevance of Gibson's concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations [9], we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. We investigate how the originally defined representational concept for the perception of affordances (in terms of using either optical flow or heuristically determined 3D features of perceptual entities) should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we propose a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and provides the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation of affordance learning, applying state-of-the-art visual descriptors extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
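
A minimal sketch of the general idea of selecting visual features for their predictive value is shown below: candidate descriptors are scored by mutual information with the interaction outcome, the most predictive ones are kept, and a classifier verifies that they suffice. The descriptor names and data are invented placeholders, and the scoring method is an assumption rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Invented placeholder descriptors; in the paper these would be 2D/3D visual features.
names = ["height_3d", "top_surface_area", "edge_density_2d", "hue_mean", "texture_energy"]
n = 400
X = rng.random((n, len(names)))
# Synthetic "liftable" outcome that actually depends only on the first two descriptors.
y = ((X[:, 0] < 0.6) & (X[:, 1] > 0.3)).astype(int)

# Score each descriptor by mutual information with the interaction outcome.
selector = SelectKBest(mutual_info_classif, k=2).fit(X, y)
kept = [name for name, keep in zip(names, selector.get_support()) if keep]
print("descriptors selected as predictive:", kept)

# Check that the selected cues alone predict the outcome about as well as all cues.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
print("all cues :", cross_val_score(clf, X, y, cv=5).mean())
print("selected :", cross_val_score(clf, selector.transform(X), y, cv=5).mean())
```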

Using learned affordances for robotic behavior development

2008

Abstract" Developmental robotics" proposes that, instead of trying to build a robot that shows intelligence once and for all, what one must do is to build robots that can develop. These robots should be equipped with behaviors that are simple but enough to bootstrap the system. Then, as the robot interacts with its environment, it should display increasingly complex behaviors. In this paper, we propose such a development scheme for a mobile robot.

Goal emulation and planning in perceptual space using learned affordances

2011

In this paper, we show that through self-interaction and self-observation, an anthropomorphic robot equipped with a range camera can learn object affordances and use this knowledge for planning. In the first step of learning, the robot discovers commonalities in its action-effect experiences by discovering effect categories. Once the effect categories are discovered, in the second step, affordance predictors for each behavior are obtained by learning the mapping from the object features to the effect categories.

Software Model of Autonomous Object Affordances Learning

2008

Learning to recognize affordances is an essential skill for safe autonomous operation and intelligent planning. In this thesis, we present a general learning algorithm for affordances that combines an active learning approach with decision tree induction: smart exploration with rule extraction. Our framework constructs a mental model of objects' affordances through both knowledge discovery and knowledge transfer scenarios, in both propositional and relational domains.
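
A small sketch of the combination described above, pool-based active learning with uncertainty sampling on a decision tree followed by rule extraction, is given below using scikit-learn. The object features, the oracle, and the query budget are invented placeholders, not the thesis's actual setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)

# Illustrative pool of unlabelled object descriptions (feature names are invented).
features = ["weight", "width", "has_handle"]
pool = np.column_stack([rng.random(500), rng.random(500), rng.integers(0, 2, 500)])

def oracle(x):
    # Stands in for physically trying the action: "liftable" if light and has a handle.
    return int(x[0] < 0.5 and x[2] == 1)

# Pool-based active learning with uncertainty sampling on a decision tree.
labelled_idx = list(rng.choice(len(pool), 10, replace=False))
labels = {i: oracle(pool[i]) for i in labelled_idx}
tree = DecisionTreeClassifier(max_depth=3, random_state=0)

for _ in range(30):                                   # 30 queries to the "robot"
    tree.fit(pool[labelled_idx], [labels[i] for i in labelled_idx])
    proba = tree.predict_proba(pool)
    margin = np.abs(proba[:, 0] - proba[:, -1])       # small margin = uncertain sample
    i = next(j for j in np.argsort(margin) if j not in labels)
    labels[i] = oracle(pool[i])
    labelled_idx.append(i)

# Rule extraction: the learned tree read back as human-readable rules.
tree.fit(pool[labelled_idx], [labels[i] for i in labelled_idx])
print(export_text(tree, feature_names=features))
```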

Learning Object Affordances: From Sensory-Motor Coordination to Imitation

IEEE Transactions on Robotics, 2008

Affordances encode relationships between actions, objects and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step towards understanding the properties of the world and developing social skills. We present a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects, and we demonstrate the benefits of the acquired knowledge in imitation games.
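
A minimal sketch of a discrete Bayesian network over actions, object features, and effects is shown below using the pgmpy library. Unlike the paper, the network structure here is fixed by hand rather than learned from data, and the variables, states, and interaction log are illustrative placeholders.

```python
import numpy as np
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

rng = np.random.default_rng(5)

# Illustrative interaction log: which effect followed which action on which shape.
n = 500
shape = rng.choice(["sphere", "box"], n)
action = rng.choice(["tap", "grasp"], n)
effect = np.where((shape == "sphere") & (action == "tap"), "rolls",
                  np.where(action == "grasp", "lifted", "slides"))
data = pd.DataFrame({"action": action, "shape": shape, "effect": effect})

# Hand-fixed structure: action and shape are parents of effect.
model = BayesianNetwork([("action", "effect"), ("shape", "effect")])
model.fit(data, estimator=MaximumLikelihoodEstimator)

# Prediction: what effect should tapping a spherical object produce?
infer = VariableElimination(model)
print(infer.query(["effect"], evidence={"action": "tap", "shape": "sphere"}))
```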

Unsupervised learning of basic object affordances from object properties

2009

Affordance learning has, in recent years, been generating heightened interest in both the cognitive vision and developmental robotics communities. In this paper we describe the development of a system that uses a robotic arm to interact with household objects on a table surface while observing the interactions using camera systems. Various computer vision methods are used to derive, firstly, object property features from intensity images and range data gathered before interaction and, subsequently, result features from video sequences gathered during and after interaction. We propose a novel affordance learning algorithm that automatically discretizes the result feature space in an unsupervised manner to form affordance classes that are then used as labels to train a supervised classifier in the object property feature space. This classifier may then be used to predict affordance classes, grounded in the result space, of novel objects based on object property observations.
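
A compact sketch of this two-stage pipeline is given below: result-space features are clustered without supervision to form affordance classes, and a classifier is then trained to predict those classes from pre-interaction object properties, so the affordance of a novel object can be estimated before it is touched. The specific clustering method, the fixed number of clusters, and the synthetic data are simplifications for illustration; the paper discretizes the result space automatically.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Placeholder data. Object-property features are measured before interaction;
# result features are measured from video during and after interaction.
n = 300
obj_props = rng.random((n, 5))                    # e.g. size, curvature, height, ...
rolled = (obj_props[:, 1] > 0.5).astype(float)    # rounder objects roll when pushed
result = np.column_stack([rolled * rng.uniform(0.5, 1.0, n),    # total motion
                          rolled * rng.uniform(0.5, 1.0, n)])   # final displacement

# Step 1 (unsupervised): discretize the result space into affordance classes.
affordance_class = AgglomerativeClustering(n_clusters=2).fit_predict(result)

# Step 2 (supervised): learn to predict those classes from pre-interaction properties.
clf = SVC(kernel="rbf").fit(obj_props, affordance_class)

novel_object = rng.random((1, 5))
print("predicted affordance class of novel object:", clf.predict(novel_object)[0])
```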