Simulations of dynamical interactions for social learning

Learning and communication via imitation: an autonomous robot perspective

Systems, Man and …, 2001

This paper proposes a neural network architecture designed to exhibit learning and communication capabilities via imitation. Our architecture allows a "proto-imitation" behavior that exploits the "perception ambiguity" inherent in real environments. With a view toward turn-taking and gestural communication between two agents, new experiments on movement synchronization in an interaction game are presented. Synchronization is obtained as a global attractor that depends on the coupling between the agents' dynamics. We also discuss the non-supervised context of the imitation process and present new experiments in which the same architecture learns perception-action associations without any explicit reinforcement. The learning is based on the ability to detect novelty or irregularities in the communication rhythm.
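
As an illustration of synchronization emerging from coupled dynamics, the minimal sketch below simulates two agents as phase oscillators that mutually nudge each other's phase; the oscillator model, the coupling gain k, and the frequencies are illustrative assumptions, not the paper's neural architecture.

```python
import numpy as np

def simulate(w1=1.00, w2=1.15, k=0.5, dt=0.01, steps=5000):
    """Two agents as mutually coupled phase oscillators; each nudges its
    phase toward the other's, so their rhythms can lock despite the
    different natural frequencies w1 and w2."""
    p1, p2 = 0.0, np.pi / 2                       # arbitrary initial phases
    for _ in range(steps):
        d1 = w1 + k * np.sin(p2 - p1)
        d2 = w2 + k * np.sin(p1 - p2)
        p1, p2 = p1 + dt * d1, p2 + dt * d2       # simultaneous Euler update
    # wrap the final phase difference into [-pi, pi]
    return (p2 - p1 + np.pi) % (2 * np.pi) - np.pi

print("phase difference with coupling   :", round(simulate(k=0.5), 3))
print("phase difference without coupling:", round(simulate(k=0.0), 3))
```

With coupling, the phase difference settles to a small constant value, a global attractor of the joint dynamics; without coupling it simply drifts.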

An experimental comparison of imitation paradigms used in social robotics

2004

We study and contrast particular issues arising in two social learning paradigms widely used in robotics research: (i) following, or matched-dependent, behaviour and (ii) static observational learning. Experiments are carried out with physical Khepera robots whose controllers include motor schemas and new neural-network-based methods for model-agent-centred perception of angle and distance. The robots are trained to perceive the dynamic movement of a human or robot demonstrator carrying a light source. The robots learn the behaviour either through perception from a static location or while following. The results of the following and observation mechanisms, together with their implications, are compared and contrasted.
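
The quantities the robots must estimate, the demonstrator's bearing and distance in the observer's own frame, can be illustrated with a purely geometric sketch; the paper learns them with neural networks from light-source perception, so the function below is only a hypothetical reference computation.

```python
import math

def agent_centred(observer_xy, observer_heading, target_xy):
    """Bearing (radians, relative to the observer's heading) and distance to
    the demonstrator, expressed in the observing robot's own frame."""
    dx = target_xy[0] - observer_xy[0]
    dy = target_xy[1] - observer_xy[1]
    distance = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - observer_heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi   # wrap to [-pi, pi]
    return bearing, distance

# Demonstrator ahead of and slightly to the left of the follower.
print(agent_centred((0.0, 0.0), 0.0, (1.0, 0.3)))
```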

From spreading of behavior to dyadic interaction - A robot learns what to imitate

International Journal of Intelligent Systems, 2011

Imitation learning is a promising way to learn new behavior in robotic multi-agent systems and in human-robot interaction. However, imitating agents should be able to decide autonomously which behavior observed in others is worth copying. This paper presents a method for extracting meaningful chunks of information from a continuous sequence of observed actions by using a simple recurrent network (Elman net). Results show that, despite a high level of task-specific noise, Elman nets can learn, through prediction, recurring action patterns observed in another robotic agent. We conclude that this primarily robot-to-robot interaction study can be generalized to human-robot interaction, and we show how these results are used for recognizing emotional behaviors in human-robot interaction scenarios. The limitations of the proposed approach and future directions are discussed.
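
A minimal Elman-style simple recurrent network, sketched below in NumPy, shows how predicting the next observed action can capture a recurring pattern; the symbol vocabulary, network size, and training schedule are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# A repeating "action" sequence: symbols 0, 1, 2 stand for observed actions.
# Predicting what follows symbol 0 requires memory of what preceded it,
# which is exactly what the recurrent (Elman) context layer provides.
pattern = [0, 1, 0, 2]
seq = pattern * 100

V, H, lr = 3, 8, 0.1                      # vocabulary, hidden units, learning rate
Wxh = rng.normal(0, 0.1, (H, V))
Whh = rng.normal(0, 0.1, (H, H))
Why = rng.normal(0, 0.1, (V, H))
bh, by = np.zeros(H), np.zeros(V)

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

for epoch in range(31):
    h, total = np.zeros(H), 0.0
    for x_i, t_i in zip(seq[:-1], seq[1:]):
        x, h_prev = one_hot(x_i), h
        h = np.tanh(Wxh @ x + Whh @ h_prev + bh)          # Elman hidden state
        logits = Why @ h + by
        p = np.exp(logits - logits.max())
        p /= p.sum()                                      # softmax over next symbol
        total += -np.log(p[t_i])                          # cross-entropy loss
        # One-step (truncated) gradients: enough for short recurring patterns.
        dlogits = p - one_hot(t_i)
        draw = (1 - h * h) * (Why.T @ dlogits)
        Why -= lr * np.outer(dlogits, h)
        by -= lr * dlogits
        Wxh -= lr * np.outer(draw, x)
        bh -= lr * draw
        Whh -= lr * np.outer(draw, h_prev)
    if epoch % 10 == 0:
        # Mean loss should fall well below ln(3) ~ 1.10 as the pattern is learned.
        print(f"epoch {epoch:2d}  mean next-step loss {total / (len(seq) - 1):.3f}")
```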

Imitation: Learning and Communication

Proceedings of the Sixth …, 2000

This paper focuses on our work on imitation in autonomous robots. In a first part, we take into account recent studies in the field of developmental psychology and consider the two functions of imitation (learning and communication) that these studies have stressed. In a second part, we propose the idea that a proto-imitative behavior can be induced in a mobile robot via a limitation of its visual perception. We provide a robotic implementation in which a mobile robot recognizes its teacher's image. Finally, we discuss imitation in a non-supervised context and propose a simple system that learns motor associations without explicit reinforcement, on the basis of its ability to detect novelty.
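
The novelty-driven, non-supervised association described here can be sketched as a learner that updates a perception-action link only when the current percept is poorly predicted, i.e. surprising; the representation, threshold, and learning rate below are illustrative assumptions, not the paper's system.

```python
import numpy as np

class NoveltyGatedLearner:
    """Associates percepts with actions, but only updates when the current
    percept is novel, i.e. badly predicted by what has been seen so far, so
    that surprise itself plays the role of an explicit reinforcement signal."""

    def __init__(self, n_percepts, n_actions, threshold=0.5, lr=0.3):
        self.W = np.zeros((n_actions, n_percepts))   # association weights
        self.familiarity = np.zeros(n_percepts)      # crude predictive model
        self.threshold, self.lr = threshold, lr

    def step(self, percept, action):
        novelty = 1.0 - self.familiarity[percept]    # high if rarely seen
        if novelty > self.threshold:                 # learn only when surprised
            self.W[action, percept] += self.lr * novelty
        self.familiarity[percept] += self.lr * (1.0 - self.familiarity[percept])
        return novelty

    def act(self, percept):
        return int(np.argmax(self.W[:, percept]))

learner = NoveltyGatedLearner(n_percepts=4, n_actions=3)
for _ in range(5):                      # repeatedly pair percept 2 with action 1
    learner.step(percept=2, action=1)
print(learner.act(2))                   # -> 1 once the association is stored
```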

Gesture Learning by Imitation Architecture for a Social Robot

Concepts, Methodologies, Tools, and Applications

Learning by imitation allows people to teach social robots new tasks using natural and intuitive interaction channels. Vision is the main such channel. This chapter describes a learning-by-imitation architecture that uses stereo vision to perceive, recognize, learn, and imitate social gestures. This description is based on the identification of a set of generic components that can be found in any learning-by-imitation architecture. It highlights the main contribution of the proposed architecture: the use of an inner human model to help perceive, recognize, and learn human gestures. This allows different robots to share the same perceptual and knowledge modules. Experimental results show that the proposed architecture is able to meet the requirements of learning-by-imitation scenarios. It can also be integrated into complete software structures for social robots, which involve complex attention mechanisms and decision layers.
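
A hedged skeleton of the generic components mentioned (perceive, recognize, learn, imitate) built around a shared inner human model might look as follows; all class and method names are illustrative, and the matching logic is a trivial stand-in for real trajectory comparison.

```python
from dataclasses import dataclass, field

@dataclass
class InnerHumanModel:
    """Kinematic description of the demonstrator in human terms; because it is
    robot-independent, different robots can share the perceptual and knowledge
    modules built on top of it."""
    joint_names: tuple = ("shoulder", "elbow", "wrist")

@dataclass
class GestureImitationArchitecture:
    human_model: InnerHumanModel
    known_gestures: dict = field(default_factory=dict)

    def perceive(self, stereo_frames):
        # Placeholder: stereo vision would produce 3-D joint trajectories here.
        return {"trajectory": list(stereo_frames)}

    def recognize(self, observation):
        # Trivial matching: return the stored gesture sharing the most frames
        # with the observation (a real system would compare joint trajectories).
        traj = set(observation["trajectory"])
        scores = {n: len(traj & set(t)) for n, t in self.known_gestures.items()}
        return max(scores, key=scores.get) if scores else None

    def learn(self, name, observation):
        self.known_gestures[name] = observation["trajectory"]

    def imitate(self, name):
        # Map the stored human-model trajectory onto the robot's own joints.
        return self.known_gestures.get(name)

arch = GestureImitationArchitecture(InnerHumanModel())
arch.learn("wave", arch.perceive(["frame0", "frame1"]))
print(arch.recognize(arch.perceive(["frame1"])))   # -> "wave"
```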

Synchrony and perception in robotic imitation across embodiments

Social robotics opens up the possibility of individualized social intelligence in member robots of a community, and allows us to harness not only individual learning by the individual robot, but also the acquisition of new skills by observing other members of the community (robot, human, or virtual). We describe ALICE (Action Learning for Imitation via Correspondences between Embodiments), an implemented generic mechanism for solving the correspondence problem between differently embodied robots. ALICE enables a robotic agent to learn a behavioral repertoire suitable for performing a task by observing a model agent, possibly having a different type of body, different joints, a different number of degrees of freedom, etc. Previously we demonstrated that the character of imitation achieved will depend on the granularity of subgoal matching, and on the metrics used to evaluate success. In this work, we implement ALICE for simple robotic arm agents in simulation using various metrics for evaluating ...
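
The core of a correspondence library can be sketched as follows: for each observed model action, keep the imitator action whose effect scores best under a chosen metric. The toy embodiments and distance metric below are illustrative, and the metric choice is exactly the knob the abstract says shapes the character of the imitation.

```python
def build_correspondence(model_actions, imitator_actions, effect, metric):
    """For each action the model performs, keep the imitator action whose
    effect scores best under `metric`; the metric (end-state match, trajectory
    match, ...) shapes the character of the resulting imitation."""
    library = {}
    for m in model_actions:
        target = effect("model", m)
        library[m] = min(imitator_actions,
                         key=lambda a: metric(effect("imitator", a), target))
    return library

# Toy embodiments: the model's actuator moves in steps of 2, the imitator's in
# steps of 3, so the "same" behaviour must be realised by different actions.
effect = lambda agent, a: a * (2 if agent == "model" else 3)
metric = lambda achieved, target: abs(achieved - target)           # end-state distance
print(build_correspondence([1, 2, 3], [0, 1, 2], effect, metric))  # {1: 1, 2: 1, 3: 2}
```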

Learning through Observation and Imitation: An Overview of the ConSCIS Architecture

2008

Imitation in robotics is seen as a powerful means to reduce the complexity of robot programming. It allows users to instruct robots by simply showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it just as human newborns do. In order to be useful as human companions, robots must act for a purpose by achieving goals and fulfilling human expectations. But what is the goal behind the surface of the demonstrated behavior? How can any observed regularities be extracted, encoded, and reused? These questions must be addressed for the development of cognitive agents capable of being human companions in everyday life. In this paper we present ConSCIS, a framework for robot teaching through observation and imitation inspired by recent findings in cognitive science, biology, and neuroscience. In ConSCIS we regard imitation as the process of manipulating high-level symbols in order to achieve the goals and intentions hidden in the observed task. The architecture has been tested both in simulation and on an anthropomorphic robot platform.

Relational Learning by Imitation

Lecture Notes in Computer Science, 2009

Imitative learning can be considered an essential part of human development. People use instructions and demonstrations provided by other human experts to acquire knowledge. In order to make an agent capable of learning through demonstrations, we propose a relational framework for learning by imitation. Demonstrations and domain-specific knowledge are compactly represented by a logical language able to express complex relational processes. The agent interacts in a stochastic environment and incrementally receives demonstrations. It actively interacts with the human by deciding the next action to execute and requesting demonstrations from the expert based on the current learned policy. The framework has been implemented and validated with experiments in simulated agent domains.
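
The active interaction loop described, acting from the learned policy when confident and otherwise requesting a demonstration, can be sketched as below; the logical (relational) representation is abstracted away into plain states, and the confidence criterion is an illustrative assumption.

```python
import random

def active_imitation(expert_policy, states, episodes=50, confidence_min=2):
    """Act from the learned policy when enough demonstrations have been seen
    for the current state; otherwise request a demonstration from the expert."""
    demos = {}                                    # state -> expert actions seen
    requests = 0
    for _ in range(episodes):
        s = random.choice(states)
        seen = demos.get(s, [])
        if len(seen) >= confidence_min:           # confident: act autonomously
            action = max(set(seen), key=seen.count)   # (would be executed here)
        else:                                     # uncertain: ask the expert
            action = expert_policy(s)
            demos.setdefault(s, []).append(action)
            requests += 1
    return demos, requests

expert = lambda s: s % 3                          # toy expert policy
demos, asked = active_imitation(expert, states=list(range(5)))
print("demonstrations requested:", asked)
print("learned policy:", {s: max(set(a), key=a.count) for s, a in demos.items()})
```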

Towards learning by interacting

Creating Brain-Like …, 2009

Traditional robotics has treated the question of learning as a one-way process: learning algorithms, especially imitation learning approaches, are based on observing and analysing the environment before carrying out the 'learned' action. In such scenarios the learning situation is restricted to unidirectional communication.

Goals and Actions: Learning by Imitation

2003

Imitation is a powerful form of learning used in many animal societies. It encourages social interaction and cultural transfer. These advantages have inspired roboticists to pursue imitation as a route toward socially intelligent robotic systems. The paper presents a mechanism of imitation that permits a robot to acquire new behaviours by extracting goals from perceived actions. Experimental results with a simulator are also presented to demonstrate this mechanism.
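
Goal extraction from perceived actions can be sketched as recording the end effect of the demonstration and then reproducing that effect with the imitator's own action set, rather than copying the movements; the one-dimensional world and greedy planner below are illustrative assumptions, not the paper's mechanism.

```python
def extract_goal(demonstration, apply_action, start_state):
    """Interpret the demonstration by its effect: replay the observed actions
    and take the resulting state as the goal, ignoring the exact movements."""
    state = start_state
    for action in demonstration:
        state = apply_action(state, action)
    return state

def imitate_goal(goal, own_actions, apply_action, start_state, max_steps=10):
    """Greedily reproduce the goal state with the imitator's own action set."""
    state, plan = start_state, []
    for _ in range(max_steps):
        if state == goal:
            break
        step = min(own_actions, key=lambda a: abs(apply_action(state, a) - goal))
        state = apply_action(state, step)
        plan.append(step)
    return plan, state

# Toy 1-D world: actions are displacements. The demonstrator uses +1 steps,
# the imitator only has +2 and -1, yet it can still reach the same goal state.
apply_action = lambda s, a: s + a
goal = extract_goal([1, 1, 1, 1], apply_action, start_state=0)      # goal = 4
print(imitate_goal(goal, own_actions=[2, -1], apply_action=apply_action,
                   start_state=0))                                  # ([2, 2], 4)
```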