Software Model of Autonomous Object Affordances Learning
Related papers
What Can I Not Do? Towards an Architecture for Reasoning about and Learning Affordances
Proceedings of the International Conference on Automated Planning and Scheduling
This paper describes an architecture for an agent to learn and reason about affordances. In this architecture, Answer Set Prolog, a declarative language, is used to represent and reason with incomplete domain knowledge that includes a representation of affordances as relations defined jointly over objects and actions. Reinforcement learning and decision-tree induction based on this relational representation and observations of action outcomes are used to interactively and cumulatively (a) acquire knowledge of affordances of specific objects being operated upon by specific agents; and (b) generalize from these specific learned instances. The capabilities of this architecture are illustrated and evaluated in two simulated domains, a variant of the classic Blocks World domain, and a robot assisting humans in an office environment.
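As a hedged illustration of the decision-tree induction step described above, the sketch below fits a tree over toy relational features; the feature names and the success rule are invented for the example, and the paper's actual ASP-based relational representation is far richer:

```python
# Minimal sketch of the decision-tree induction step, assuming hypothetical
# relational features (the paper's ASP-based representation is richer).
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Each row: [object_weight, arm_strength, surface_is_brittle]; label: action succeeded?
# Toy rule: a "move" succeeds when the arm is at least as strong as the object
# is heavy and the target surface is not brittle.
X = rng.integers(0, 3, size=(200, 3))
y = ((X[:, 1] >= X[:, 0]) & (X[:, 2] == 0)).astype(int)

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# The induced tree generalizes observed action outcomes to unseen
# object/agent combinations, analogous to step (b) in the abstract.
print(export_text(tree, feature_names=["weight", "strength", "brittle"]))
print(tree.predict([[2, 0, 0]]))  # heavy object, weak arm -> predicted failure
```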
Learning intermediate object affordances: Towards the development of a tool concept
4th International Conference on Development and Learning and on Epigenetic Robotics, 2014
Inspired by the extraordinary ability of young infants to learn how to grasp and manipulate objects, many works in robotics have proposed developmental approaches that allow robots to learn the effects of their own motor actions on objects, i.e., the objects' affordances. While holding an object, infants also promote its contact with other objects, resulting in object-object interactions that may afford effects not possible otherwise. Depending on the characteristics of both the held object (intermediate) and the acted-upon object (primary), systematic outcomes may occur, leading to the emergence of a primitive concept of tool. In this paper we describe experiments with a humanoid robot exploring object-object interactions in a playground scenario and learning a probabilistic causal model of the effects of actions as functions of the characteristics of both objects. The model directly links the objects' 2D shape visual cues to the effects of actions. Because no object recognition skills are required, generalization to novel objects is possible by exploiting the correlations between the shape descriptors. We show experiments where an affordance model is learned in a simulated environment and then used on the real robotic platform, demonstrating generalization in effect prediction. We argue that, although no concept of tool is given to the system during exploration, this very concept may emerge from the knowledge that intermediate objects lead to significant effects when acting on other objects.
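A minimal sketch of the kind of discrete causal model the abstract describes, here reduced to counting observed effects for pairs of (intermediate, primary) objects; the shape labels, action, and effects are hypothetical placeholders for the paper's continuous 2D shape descriptors:

```python
# Hedged sketch of a discrete causal model P(effect | intermediate, primary),
# estimated by counting; the paper links continuous shape descriptors to
# effects, so the categories below are toy stand-ins.
from collections import Counter, defaultdict

# (intermediate shape, primary shape) -> observed effect of a pull action
trials = [
    ("hook", "ball", "moved_closer"), ("hook", "ball", "moved_closer"),
    ("hook", "box", "moved_closer"),  ("stick", "ball", "no_motion"),
    ("stick", "box", "no_motion"),    ("hook", "ball", "no_motion"),
]

counts = defaultdict(Counter)
for tool, obj, effect in trials:
    counts[(tool, obj)][effect] += 1

def p_effect(tool, obj, effect):
    c = counts[(tool, obj)]
    return c[effect] / sum(c.values()) if c else 0.0

# Predicting the effect of pulling a ball with a hook-shaped intermediate object:
print(p_effect("hook", "ball", "moved_closer"))  # 2/3
```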
Learning affordance concepts: some seminal ideas
Inspired by the pioneering work of J. J. Gibson, we provide a workable characterisation of the notion of affordance and we explore a possible architecture for an agent that is able to autonomously acquire affordance concepts.
A system for learning basic object affordances using a self-organizing map
2008
When a cognitive system encounters particular objects, it needs to know what effect each of its possible actions will have on the state of each of those objects in order to be able to make effective decisions and achieve its goals. Moreover, it should be able to generalize effectively so that when it encounters novel objects, it is able to estimate what effect its actions will have on them based on its experiences with previously encountered similar objects. This idea is encapsulated by the term "affordance", e.g. "a ball affords being rolled to the right when pushed from the left." In this paper, we discuss the development of a cognitive vision platform that uses a robotic arm to interact with household objects in an attempt to learn some of their basic affordance properties. We outline the various sensor and effector module competencies that were needed to achieve this and describe an experiment that uses a self-organizing map to integrate these modalities in a working affordance learning system.
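For readers unfamiliar with self-organizing maps, the following NumPy sketch shows the core training loop; the toy random inputs stand in for the multimodal sensor and effector features the platform actually integrates:

```python
# Minimal self-organizing map sketch in NumPy; the platform described above
# feeds it multimodal features, replaced here by random toy data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.random((500, 4))   # toy feature vectors (e.g. shape + observed motion)
grid = rng.random((8, 8, 4))  # 8x8 map of weight vectors

coords = np.dstack(np.meshgrid(np.arange(8), np.arange(8), indexing="ij"))

for t, x in enumerate(data):
    lr = 0.5 * (1 - t / len(data))                 # decaying learning rate
    sigma = 3.0 * (1 - t / len(data)) + 0.5        # decaying neighbourhood radius
    # best-matching unit: node whose weights are closest to the input
    bmu = np.unravel_index(np.argmin(((grid - x) ** 2).sum(-1)), (8, 8))
    d2 = ((coords - np.array(bmu)) ** 2).sum(-1)   # squared grid distance to BMU
    h = np.exp(-d2 / (2 * sigma ** 2))[..., None]  # neighbourhood function
    grid += lr * h * (x - grid)                    # pull neighbourhood toward input

# After training, nearby map nodes respond to similar object/effect patterns,
# which is what lets the system group novel objects with familiar ones.
```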
Towards learning basic object affordances from object properties
2008
The capacity for learning to recognize and exploit environmental affordances is an important consideration for the design of current and future developmental robotic systems. We present a system that uses a robotic arm, camera systems and self-organizing maps to learn basic affordances of objects.
Learning Predictive Features in Affordance based Robotic Perception Systems
2006
This work concerns the relevance of Gibson's concept of affordances [1] for visual perception in interactive and autonomous robotic systems. Extending existing functional views on visual feature representations [9], we identify the importance of learning in perceptual cueing for anticipating opportunities for interaction by robotic agents. We investigate how the originally defined representational concept for the perception of affordances (in terms of using either optical flow or heuristically determined 3D features of perceptual entities) should be generalized to arbitrary visual feature representations. In this context we demonstrate the learning of causal relationships between visual cues and predictable interactions, using both 3D and 2D information. In addition, we propose a new framework for cueing and recognition of affordance-like visual entities that could play an important role in future robot control architectures. We argue that affordance-like perception should enable systems to react to environmental stimuli both more efficiently and more autonomously, and provide the potential to plan on the basis of responses to more complex perceptual configurations. We verify the concept with a concrete implementation of affordance learning, applying state-of-the-art visual descriptors extracted from a simulated robot scenario, and show that these features were successfully selected for their relevance in predicting opportunities for robot interaction.
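As a rough illustration of selecting visual features by their predictive relevance, the sketch below ranks toy descriptors by mutual information with an interaction outcome; the scoring criterion and data are illustrative assumptions, not necessarily what the paper uses:

```python
# Hedged sketch: score candidate visual descriptors by how informative they
# are about an interaction outcome, then rank them. Mutual information is an
# assumed relevance measure, standing in for the paper's selection method.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(2)
X = rng.random((300, 6))                       # six candidate visual descriptors
y = (X[:, 1] + 0.1 * rng.random(300) > 0.5)    # only descriptor 1 predicts "liftable"

scores = mutual_info_classif(X, y.astype(int), random_state=0)
print(scores.round(3))           # descriptor 1 scores highest
print(np.argsort(scores)[::-1])  # relevance ranking used to cue perception
```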
Learning affordances for categorizing objects and their properties
2010
In this paper, we demonstrate that simple interactions with objects in the environment lead to a manifestation of the perceptual properties of those objects. This is achieved by deriving a condensed representation of the effects of actions (called effect prototypes in the paper) and investigating the relevance between perceptual features extracted from the objects and the actions that can be applied to them.
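One way to picture the effect-prototype idea is to cluster raw effect measurements, as in this sketch; the displacement/rotation features are hypothetical stand-ins for the paper's effect representation:

```python
# Sketch of deriving "effect prototypes" by clustering effect measurements;
# the features (displacement and rotation after a push) are toy assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Toy effects of a push: rolling objects travel far, flat ones barely move.
rolled = np.column_stack([rng.normal(0.30, 0.05, 50), rng.normal(2.0, 0.4, 50)])
stayed = np.column_stack([rng.normal(0.02, 0.01, 50), rng.normal(0.1, 0.1, 50)])
effects = np.vstack([rolled, stayed])  # [displacement (m), rotation (rad)]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(effects)
print(km.cluster_centers_)  # the two effect prototypes ("rolls" vs. "stays put")
```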
Unsupervised learning of basic object affordances from object properties
2009
Affordance learning has, in recent years, been generating heightened interest in both the cognitive vision and developmental robotics communities. In this paper we describe the development of a system that uses a robotic arm to interact with household objects on a table surface while observing the interactions using camera systems. Various computer vision methods are used to derive, first, object property features from intensity images and range data gathered before interaction and, subsequently, result features from video sequences gathered during and after interaction. We propose a novel affordance learning algorithm that automatically discretizes the result feature space in an unsupervised manner to form affordance classes, which are then used as labels to train a supervised classifier in the object property feature space. This classifier may then be used to predict affordance classes, grounded in the result space, of novel objects based on object property observations.
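The two-stage pipeline in this abstract can be sketched compactly: cluster the result features without supervision, then train a supervised classifier from object properties to the discovered classes. The specific clustering and classifier choices below (k-means, k-NN) and the synthetic data are illustrative substitutes, not the paper's methods:

```python
# Minimal version of the two-stage pipeline: discretize the result-feature
# space without supervision, then learn a supervised map from object
# properties to the discovered affordance classes. Data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
n = 200
props = rng.random((n, 3))  # pre-interaction object properties
# Toy ground truth: roundness (property 0) determines how far a push moves it.
results = np.column_stack([props[:, 0] * 2 + rng.normal(0, 0.1, n)])

# Stage 1: unsupervised discretization of result features -> affordance classes
classes = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(results)

# Stage 2: supervised classifier from object properties to affordance classes
clf = KNeighborsClassifier(n_neighbors=5).fit(props, classes)

# Predict the (result-grounded) affordance class of a novel object from
# its pre-interaction properties alone:
print(clf.predict([[0.9, 0.2, 0.5]]))
```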
Learning Object Affordances: From Sensory-Motor Coordination to Imitation
IEEE Transactions on Robotics, 2008
Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step toward understanding world properties and developing social skills. We present a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects, and we demonstrate the benefits of the acquired knowledge in imitation games.
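A hedged sketch of such an affordance network over discrete (action, shape, effect) variables, using the pgmpy library for illustration; the network structure, variable set, and interaction log here are hand-specified toys, whereas the paper learns the model from real robot data:

```python
# Toy affordance network over (action, shape, effect), assuming pgmpy's
# BayesianNetwork class is available; structure and data are hand-specified
# illustrations, not the paper's learned model.
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.inference import VariableElimination

data = pd.DataFrame(
    [("tap", "ball", "rolls"), ("tap", "ball", "rolls"), ("tap", "box", "slides"),
     ("grasp", "ball", "lifted"), ("grasp", "box", "lifted"), ("tap", "box", "slides")],
    columns=["action", "shape", "effect"],
)

model = BayesianNetwork([("action", "effect"), ("shape", "effect")])
model.fit(data)  # maximum-likelihood CPTs from the interaction log

infer = VariableElimination(model)
# Forward prediction: what does tapping a ball do?
print(infer.query(["effect"], evidence={"action": "tap", "shape": "ball"}))
# Inverse query, as in imitation: which action likely produced "rolls" on a ball?
print(infer.query(["action"], evidence={"effect": "rolls", "shape": "ball"}))
```

The same fitted network answers both directions of query, which is what makes the probabilistic formulation useful for imitation as well as prediction.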