What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy

One-year-old infants use teleological representations of actions productively

Cognitive Science, 2003

Two experiments investigated whether infants represent goal-directed actions of others in a way that allows them to draw inferences to unobserved states of affairs (such as unseen goal states or occluded obstacles). We measured looking times to assess violation of infants' expectations upon perceiving either a change in the actions of computer-animated figures or in the context of such actions. The first experiment tested whether infants would attribute a goal to an action that they had not seen completed. The second experiment tested whether infants would infer from an observed action the presence of an occluded object that functions as an obstacle. The looking time patterns of 12-month-olds indicated that they were able to make both types of inferences, while 9-month-olds failed in both tasks. These results demonstrate that, by the end of the first year of life, infants use the principle of rational action not only for the interpretation and prediction of goal-directed actions, but also for making productive inferences about unseen aspects of their context. We discuss the underlying mechanisms that may be involved in the developmental change from 9 to 12 months of age in the ability to infer hypothetical (unseen) states of affairs in teleological action representations.

NEUROSCIENCES APPLIED TO ACTION INTERPRETATION: Epistemological conflicting perspectives for infant social learning

InCircolo. Rivista di Filosofia e culture, 2017

In recent decades the neurosciences have contributed so much to the philosophy of mind that the latter is now inconceivable without the former in every topic the discipline deals with. Studies of action understanding have driven great advances in developmental psychology with respect to social learning abilities grounded in imitation. All of the information infants receive is transmitted through actions, and infant imitation would be inconceivable without action interpretation. According to Meltzoff's "like-me" hypothesis, imitation is possible in human infants from birth by virtue of an identification mechanism with adults supported by a mirror-neuron (MN) based simulation system. However, if we split actions into two general categories, instrumental and communicative, an alternative account holds that infants modulate their comprehension of observed scenarios differently depending on whether they are passive observers (in the case of instrumental actions) or actively involved (in the case of communicative actions). Such recognition of action features is reflected in different degrees of motor activation, as ERP studies of infants and young adults have revealed. Neuroscientific evidence highlights the crucial role of motor-related brain areas in action interpretation, but it is compatible with both a bottom-up interpretation and a top-down interpretation in which motor activation is a product of action understanding rather than its determining causal factor. The aim of the present study is to examine these conflicting epistemological perspectives on action interpretation and their repercussions for different theories of social learning.

Six-and-a-half-month-old children positively attribute goals to human action and to humanoid-robot motion

Cognitive Development

Recent infant studies indicate that goal attribution (understanding of goal-directed action) is present very early in infancy. We examined whether 6.5-month-olds attribute goals to agents and whether infants change the interpretation of goal-directed action according to the kind of agent. We conducted three experiments using the visual habituation paradigm. In Experiment 1, we investigated whether 6.5-month-olds attribute goals to human action. In Experiment 2, we investigated whether 6.5-month-olds attribute goals to humanoid-robot motion. In Experiment 3, we tested whether infants attribute goals to a moving box. The agent used in Experiment 3 had no human-like appearance. The results of the three experiments show that infants positively attribute goals to both human action (Experiment 1) and humanoid motion (Experiment 2) but not to a moving box (Experiment 3). These results suggest that 6.5-month-olds tend to interpret certain actions in terms of goals, their reasoning about the...

Action observation and robotic agents: Learning and anthropomorphism

Neuroscience & Biobehavioral Reviews, 2011

The 'action observation network' (AON), which is thought to translate observed actions into motor codes required for their execution, is biologically tuned: it responds more to observation of human, than nonhuman, movement. This biological specificity has been taken to support the hypothesis that the AON underlies various social functions, such as theory of mind and action understanding, and that, when it is active during observation of non-human agents like humanoid robots, it is a sign of ascription of human mental states to these agents. This review will outline evidence for biological tuning in the AON, examining the features which generate it, and concluding that there is evidence for tuning to both the form and kinematic profile of observed movements, and little evidence for tuning to belief about stimulus identity. It will propose that a likely reason for biological tuning is that human actions, relative to nonbiological movements, have more often been observed while the observer was executing corresponding actions. If the associative hypothesis of the AON is correct, and the network indeed supports social functioning, sensorimotor experience with non-human agents may help us to predict, and therefore interpret, their movements.

Guest Editorial Behavior Understanding and Developmental Robotics

IEEE Transactions on Autonomous Mental Development, 2014

The scientific, technological and application challenges that arise from the mutual interaction of developmental robotics and computational human behavior understanding give rise to two different perspectives. Robots need to be capable of learning, dynamically and incrementally, how to interpret and thus understand multimodal human behavior; in this sense, behavior analysis is performed for developmental robotics. On the other hand, behavior analysis can also be performed through developmental robotics, since developmental social robots offer stimulating opportunities for improving the scientific understanding of human behavior, and in particular for a deeper analysis of its semantics and structure. The contributions to this Special Issue explore these two perspectives.

Children perseverate to a human's actions but not to a robot's actions

Developmental Science, 2000

Previous research has shown that young children commit perseverative errors after observing another person's actions. The present study examined how social observation leads children to perseverative tendencies, using a robot. In Experiment 1, preschoolers watched either a human model or a robot sorting cards according to one dimension (e.g. shape), after which they were asked to sort according to a different dimension (e.g. colour). The results showed that children's behaviours in the task were significantly influenced by the human model's actions but not by the robot's actions. Experiment 2 ruled out the possibility that children's behaviours were unaffected by the robot's actions merely because they had failed to observe those actions. We concluded that children's perseverative errors from social observation resulted, in part, from their socio-cognitive ability.

Can motionese tell infants and robots "what to imitate"?

Proceedings of the 4th International Symposium …, 2007

An open question in imitation by infants and robots is how they know "what to imitate." We suggest that parental modifications of actions, called motionese, can help infants and robots to detect the meaningful structure of those actions. Parents tend to modify their infant-directed actions, e.g., by putting longer pauses between actions and exaggerating them, modifications that are assumed to help infants understand the meaning and structure of the actions. To investigate how such modifications contribute to infants' understanding of actions, we analyzed parental actions from an infant-like viewpoint by applying a model of saliency-based visual attention. Our model of an infant-like viewpoint does not assume any a priori knowledge about the actions or the objects used in them, or any specific capability to detect a parent's face or his/her hands. Instead, it is able to detect and gaze at salient locations in a scene, i.e., locations that stand out from their surroundings because of primitive visual features. The model thus demonstrates which low-level aspects of parental actions are highlighted in the action sequences and could attract the attention of young infants and robots. Our quantitative analysis revealed that motionese can help them (1) to receive immediate social feedback on the actions, (2) to detect the initial and goal states of the actions, and (3) to look at the static features of the objects used in the actions. We discuss these results with respect to the issue of "what to imitate."
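The abstract describes a bottom-up, saliency-based attention model that relies only on primitive visual features, with no prior knowledge of objects, faces, or hands. The sketch below illustrates one way such a model can be assembled in Python, in the spirit of Itti-and-Koch-style centre-surround saliency; it is not the authors' implementation, and the feature choices, scales, and function names (feature_maps, center_surround, most_salient_location) are illustrative assumptions.

```python
# Minimal sketch of a bottom-up saliency model: primitive feature maps
# (intensity and colour opponency), centre-surround differences across
# scales, and a single "gaze" location at the peak of the combined map.
import numpy as np
from scipy.ndimage import gaussian_filter

def feature_maps(frame):
    """frame: H x W x 3 RGB array in [0, 1]. Returns primitive feature maps."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    intensity = (r + g + b) / 3.0
    rg = r - g                      # red-green opponency
    by = b - (r + g) / 2.0          # blue-yellow opponency
    return [intensity, rg, by]

def center_surround(fmap, center_sigmas=(1, 2), surround_sigmas=(4, 8)):
    """Centre-surround differences: fine (centre) minus coarse (surround) scales."""
    maps = []
    for c in center_sigmas:
        for s in surround_sigmas:
            maps.append(np.abs(gaussian_filter(fmap, c) - gaussian_filter(fmap, s)))
    return maps

def saliency(frame):
    """Sum normalised centre-surround maps into one saliency map."""
    combined = np.zeros(frame.shape[:2])
    for fmap in feature_maps(frame):
        for cs in center_surround(fmap):
            rng = cs.max() - cs.min()
            if rng > 1e-8:
                combined += (cs - cs.min()) / rng   # simple per-map normalisation
    return combined

def most_salient_location(frame):
    """Return (row, col) of the saliency peak, i.e. the model's gaze target."""
    sal = saliency(frame)
    return np.unravel_index(np.argmax(sal), sal.shape)

if __name__ == "__main__":
    # Toy frame: a bright patch on a dark background attracts "attention".
    frame = np.zeros((120, 160, 3))
    frame[50:70, 90:110, :] = 1.0
    print(most_salient_location(frame))  # expected near (60, 100)
```

Applied frame by frame to a video of a parental demonstration, the peak locations over time would indicate which low-level aspects of the action sequence (pauses, exaggerated motions, object features) capture purely bottom-up attention, which is the kind of question the paper's quantitative analysis addresses.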

Is early differentiation of human behavior a precursor to the 1-year-old's understanding of intentional action? Comment on Legerstee, Barna, and DiAdamo (2000)

Developmental Psychology, 2001

In a recent issue of Developmental Psychology, M. Legerstee, J. Barna, and C. DiAdamo (2000) reported a study showing that 6-month-olds expect people to talk to persons rather than to inanimate objects and to manipulate inanimates rather than persons. They interpreted this ability as a "precursor" to later understanding of intentionality. The present article takes issue with the authors' 2 different levels of interpretation that contradict each other and raise problems in their own right. It is suggested that M. Legerstee et al.'s finding is most parsimoniously explained by associative learning and may not constitute a precursor to later understanding of intentionality in any well-defined sense of the term. The present article argues for the importance of differentiating between associative and inferential processes and reviews evidence that the understanding of goal-directed action around 9 months of age involves principle-based inferences.

Legerstee, Barna, and DiAdamo (2000) reported a well-designed habituation study with proper controls and clear-cut results indicating that 6-month-old infants expect people to talk to persons rather than to inanimate objects and to physically manipulate inanimate objects rather than persons. In the habituation phase, infants observed a human model either talk to or manipulate something behind an occluder. In the test phase, the occluder was removed and the infants were presented with either a human person or an inanimate object behind the occluder. In the talking condition, infants dishabituated more when they saw the inanimate object, whereas in the manipulation condition, they looked longer when the revealed object was a person. I find the results of Legerstee et al.'s study convincing and uncontroversial and believe that it certainly contributes to our understanding of how early in life infants develop differential expectations about the stimulus conditions in which different types of human actions typically take place. The different levels of interpretation, however, that the authors proposed for their result raise a number of problematic issues that I discuss in this article.

The general claim that Legerstee et al. (2000) made is that their study contributes significantly to our understanding of how infants come to comprehend intentional actions of others at the end of the first year of life (e.g., Tomasello, 1999). In arguing for this claim, they developed two rather different interpretations for their results that (a) contradict each other and (b) raise a number of problematic questions in their own right. I refer to the two accounts as the "stronger" and the "weaker" interpretations and discuss each in turn.

Learning From Their Own Actions: The Unique Effect of Producing Actions on Infants' Action Understanding

Child Development, 2014

Prior research suggests that infants' action production affects their action understanding, but little is known about which aspects of motor experience give rise to these effects. In Study 1, the relative contributions of self-produced (n = 30) and observational (n = 30) action experience to 3-month-old infants' action understanding were assessed using a visual habituation paradigm. In Study 2, generalization of training to a new context was examined (n = 30). Results revealed a unique effect of active over observational experience. Further, findings suggest that benefits of trained actions do not generalize broadly, at least following brief training.