How can multimodal cues from child-directed interaction reduce learning complexity in robots?

Can motionese tell infants and robots "what to imitate"?

Proceedings of the 4th International Symposium …, 2007

An open question in imitating actions by infants and robots is how they know "what to imitate." We suggest that parental modifications of their actions, called motionese, can help infants and robots detect the meaningful structure of the actions. Parents tend to modify their infant-directed actions, e.g., putting longer pauses between actions and exaggerating the actions, which is assumed to help infants understand the meaning and structure of the actions. To investigate how such modifications contribute to infants' understanding of the actions, we analyzed parental actions from an infant-like viewpoint by applying a model of saliency-based visual attention. Our model of an infant-like viewpoint does not presuppose any a priori knowledge about the actions or the objects used in them, or any specific capability to detect a parent's face or hands. Instead, it is able to detect and gaze at salient locations in a scene, i.e., locations that stand out from their surroundings because of primitive visual features. The model thus demonstrates which low-level aspects of parental actions are highlighted in the action sequences and could attract the attention of young infants and robots. Our quantitative analysis revealed that motionese can help them (1) to receive immediate social feedback on the actions, (2) to detect the initial and goal states of the actions, and (3) to look at the static features of the objects used in the actions. We discuss these results in relation to the issue of "what to imitate."
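A minimal sketch of the kind of bottom-up, saliency-based attention model the abstract describes, not the authors' implementation: intensity contrast is computed by center-surround (difference-of-Gaussians) filtering at a few scales, and the most salient pixel is taken as the model's gaze target. The scale pairs, frame size, and function names are assumptions for illustration; a full Itti-Koch-style model would also include color and orientation channels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(frame_gray: np.ndarray) -> np.ndarray:
    """Crude center-surround saliency from primitive intensity features."""
    sal = np.zeros_like(frame_gray, dtype=float)
    for center, surround in [(1, 4), (2, 8), (4, 16)]:   # assumed scale pairs
        c = gaussian_filter(frame_gray.astype(float), center)
        s = gaussian_filter(frame_gray.astype(float), surround)
        sal += np.abs(c - s)                              # contrast against the surround
    sal -= sal.min()
    return sal / (sal.max() + 1e-9)                       # normalize to [0, 1]

def gaze_target(frame_gray: np.ndarray):
    """Return the (row, col) of the most salient pixel, i.e. where the model looks."""
    return np.unravel_index(np.argmax(saliency_map(frame_gray)), frame_gray.shape)

if __name__ == "__main__":
    frame = np.random.rand(120, 160)   # stand-in for a grayscale camera frame
    print("model gazes at", gaze_target(frame))
```

No knowledge of faces, hands, or objects enters the computation, which is the point the abstract makes: whatever structure the model recovers must come from how the parent modifies the action itself.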

Toward designing a robot that learns actions from parental demonstrations

2016

How to teach actions to a robot, as well as how a robot learns actions, is an important issue in designing robot learning systems. Inspired by human parent-infant interaction, we hypothesize that a robot equipped with infant-like abilities can take advantage of proper parental teaching. Parents are known to significantly alter their infant-directed actions compared to adult-directed ones, e.g., making more pauses between movements, which is assumed to aid infants' understanding of the actions. As a first step, we analyzed parental actions using a primal attention model. The model, based on visual saliency, can detect likely important locations in a scene without employing any knowledge about the actions or the environment. Our statistical analysis revealed that the model was able to extract meaningful structures of the actions, e.g., the initial and final states of the actions and the significant state changes in them, which were highlighted by parental action modifications. …
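A hedged sketch of how the pauses mentioned above could expose action structure to a learner with no task knowledge: frames whose overall motion energy stays low for a while are treated as candidate boundaries (initial state, final state, significant state changes). The threshold, window length, and synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def motion_energy(frames: np.ndarray) -> np.ndarray:
    """Mean absolute frame difference per time step (frames: T x H x W)."""
    return np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))

def pause_segments(energy: np.ndarray, thresh: float = 0.01, min_len: int = 5):
    """Return (start, end) index pairs where motion stays below thresh for min_len frames."""
    quiet = energy < thresh
    segments, start = [], None
    for t, q in enumerate(quiet):
        if q and start is None:
            start = t
        elif not q and start is not None:
            if t - start >= min_len:
                segments.append((start, t))
            start = None
    if start is not None and len(quiet) - start >= min_len:
        segments.append((start, len(quiet)))
    return segments

if __name__ == "__main__":
    frames = np.random.rand(100, 60, 80)   # stand-in for a demonstration video
    frames[30:40] = frames[30]             # an artificial pause inserted by the "parent"
    print(pause_segments(motion_energy(frames)))   # expected: one segment around frame 30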

People modify their tutoring behavior in robot-directed interaction for action learning

… and Learning, 2009 …, 2009

In developmental research, tutoring behavior has been identified as scaffolding infants' learning processes. It has been defined in terms of child-directed speech (Motherese), child-directed motion (Motionese), and contingency. In the field of developmental robotics, research often assumes that in human-robot interaction (HRI), robots are treated similarly to infants, because their immature cognitive capabilities benefit from this behavior. However, to our knowledge, it has barely been studied whether this is true and how exactly humans alter their behavior towards a robotic interaction partner. In this paper, we present results concerning the acceptance of a robotic agent in a social learning scenario, obtained via comparison to adults and 8-11-month-old infants under equal conditions. These results constitute an important empirical basis for making use of tutoring behavior in social robotics. In our study, we performed a detailed multimodal analysis of HRI in a tutoring situation using the example of a robot simulation equipped with a bottom-up saliency-based attention model. Our results reveal significant differences in hand movement velocity, motion pauses, range of motion, and eye gaze, suggesting, for example, that adults decrease their hand movement velocity in Adult-Child Interaction (ACI) as opposed to Adult-Adult Interaction (AAI), and that this decrease is even larger in Adult-Robot Interaction (ARI). We also found important differences between ACI and ARI in how the behavior is modified over time as the interaction unfolds. These findings indicate the necessity of integrating top-down feedback structures into a bottom-up system for robots to be fully accepted as interaction partners.
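For concreteness, a minimal sketch of the kind of kinematic measures compared across AAI, ACI, and ARI, assuming tracked 2-D hand positions per frame as input; the pause threshold, frame rate, and exact definitions are illustrative, not the paper's.

```python
import numpy as np

def tutoring_measures(hand_xy: np.ndarray, fps: float = 30.0,
                      pause_thresh: float = 5.0) -> dict:
    """hand_xy: (T, 2) pixel positions; pause_thresh in pixels/second (illustrative)."""
    vel = np.linalg.norm(np.diff(hand_xy, axis=0), axis=1) * fps   # pixels per second
    return {
        "mean_velocity": float(vel.mean()),
        "pause_fraction": float((vel < pause_thresh).mean()),       # share of time nearly still
        "range_of_motion": float(np.ptp(hand_xy[:, 0]) + np.ptp(hand_xy[:, 1])),
    }

if __name__ == "__main__":
    t = np.linspace(0, 2 * np.pi, 300)
    demo = np.stack([100 + 50 * np.cos(t), 100 + 50 * np.sin(t)], axis=1)  # circular gesture
    print(tutoring_measures(demo))
```

Comparing such measures per condition (AAI vs. ACI vs. ARI) is what allows statements like "velocity decreases most in ARI" to be tested statistically.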

Understanding a child's play for robot interaction by sequencing play primitives using Hidden Markov Models

2010 IEEE International Conference on Robotics and Automation, 2010

In this paper, we discuss a methodology to build a system for a robot playmate that extracts and sequences low-level play primitives during a robot-child interaction scenario. The motivation is to provide a robot with basic knowledge of how to manipulate toys in an equivalent manner to a human, as a first step toward engaging children in cooperative play. Our approach involves the extraction of play primitives based on observation of motion gradient vectors computed from the image sequence. Hidden Markov Models (HMMs) are then used to recognize 14 different play primitives during play. Experimental results from a data set of 100 play scenarios including child subjects demonstrate 86.88% accuracy in recognizing and sequencing the play primitives.

I. INTRODUCTION. Why do children need playmates? Interactive play in childhood is closely linked to children's social, physical, and cognitive development [1]. However, due to many social factors, children are often left alone, spending hours watching television, playing video games, and using computers, which threatens to undermine the process of play, with grim implications for the intellectual and emotional health of children [2, 3]. Simple toys, such as those depicted in Fig. 1, can accelerate a child's imagination as they build their own scenes, knock them down, and start over. Along with toys, playmates are an important source for building collaboration and cooperation, controlling impulses, reducing aggression, and achieving better overall emotional and social adjustment [4, 5]. Children with developmental delays can benefit from a robotic toy, which can yield better attention [6, 7]. Robots have also shown potential in assisting physically challenged children [8] and in engaging children in imitation-based play [9]. Although robots have been shown to be of use in these various child-robot interaction scenarios, in these venues they are positioned more as tools than as partners or playmates. Long-term interaction and the effectiveness of robot use in interactive play have therefore not reached their full potential. Playing has been shown to have a lasting effect due to the dynamic nature of interacting with the world [10]. With respect to playing with others, a shared interest arises between playmates to make the play continuously entertaining, thus engaging the mind, and creating …
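A hedged sketch of the per-class HMM recognition scheme described above: one Gaussian HMM is trained per play primitive on sequences of motion features (random stand-ins here for the paper's motion gradient vectors), and a new sequence is labeled with the primitive whose model scores it highest. Feature dimensionality, state counts, and the two primitive names are assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_primitive_models(data_by_label: dict) -> dict:
    """data_by_label: primitive label -> list of (T_i, D) feature sequences."""
    models = {}
    for label, seqs in data_by_label.items():
        X = np.vstack(seqs)                      # stacked observations
        lengths = [len(s) for s in seqs]         # sequence boundaries for the HMM
        m = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models: dict, seq: np.ndarray) -> str:
    """Pick the primitive whose HMM gives the sequence the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = {"stack": [rng.normal(0, 1, (40, 6)) for _ in range(5)],
            "roll":  [rng.normal(3, 1, (40, 6)) for _ in range(5)]}
    models = train_primitive_models(data)
    print(classify(models, rng.normal(3, 1, (40, 6))))   # expected: "roll"
```

Sequencing during a play session then amounts to segmenting the video and running this classifier on each segment in order.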

A Framework for Robot Learning During Child-Robot Interaction with Human Engagement as Reward Signal

2018

Using robots as therapeutic or educational tools for children with autism requires robots to be able to adapt their behavior specifically for each child with whom they interact. In particular, some children may like to be looked in the eyes by the robot while others may not. Some may like a robot with extroverted behavior while others may prefer more introverted behavior. Here we present an algorithm to adapt the robot's action expressivity parameters (mutual gaze duration, hand movement expressivity) in an online manner during the interaction. The reward signal used for learning is based on an estimation of the child's mutual engagement with the robot, measured through non-verbal cues such as the child's gaze and distance from the robot. We first present a pilot joint attention task in which children with autism interact with a robot whose level of expressivity is predetermined to progressively increase, and show results suggesting the need for online adaptation of expressivity. We then present the proposed learning algorithm and some promising simulations on the same task. Altogether, these results suggest a way to enable robot learning based on non-verbal cues and to cope with the high degree of non-stationarity that can occur during interaction with children.
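A minimal sketch, not the authors' algorithm, of online adaptation with engagement as the reward: an epsilon-greedy bandit over a small grid of (gaze duration, movement expressivity) settings, updated after each interaction episode. The parameter grid, reward range, and the stand-in engagement estimator are assumptions.

```python
import random

ARMS = [(gaze, expr) for gaze in (1.0, 2.0, 4.0)     # mutual-gaze duration (s), assumed grid
                     for expr in (0.2, 0.5, 0.8)]    # hand-movement expressivity, assumed grid
counts = {a: 0 for a in ARMS}
values = {a: 0.0 for a in ARMS}                      # running mean engagement per setting

def choose(epsilon: float = 0.1):
    """Explore with probability epsilon, otherwise exploit the best-known setting."""
    if random.random() < epsilon:
        return random.choice(ARMS)
    return max(ARMS, key=lambda a: values[a])

def update(arm, engagement: float):
    """Incremental mean update from the observed engagement reward in [0, 1]."""
    counts[arm] += 1
    values[arm] += (engagement - values[arm]) / counts[arm]

def fake_engagement(arm) -> float:
    """Stand-in for an engagement estimate derived from gaze and child-robot distance."""
    gaze, expr = arm
    return max(0.0, min(1.0, 0.5 + 0.2 * expr - 0.05 * abs(gaze - 2.0) + random.gauss(0, 0.05)))

for episode in range(200):
    arm = choose()
    update(arm, fake_engagement(arm))

print("preferred setting:", max(ARMS, key=lambda a: values[a]))
```

A bandit formulation like this handles the per-child variability noted in the abstract; coping with non-stationarity would additionally require discounting old rewards or re-exploring over time.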

What should a robot learn from an infant? Mechanisms of action interpretation and observational learning in infancy

Connection Science, 2003

The paper provides a summary of our recent research on preverbal infants (using violation-of-expectation and observational learning paradigms) demonstrating that one-year-olds interpret and draw systematic inferences about others' goal-directed actions, and can rely on such inferences when imitating others' actions or emulating their goals. To account for these findings, it is proposed that one-year-olds apply a non-mentalistic action interpretational system, the 'teleological stance', which represents actions by relating relevant aspects of reality (action, goal-state, and situational constraints) through the principle of rational action, which assumes that actions function to realize goal-states by the most efficient means available in the actor's situation. The relevance of these research findings and the proposed theoretical model for realizing the goal of epigenetic robotics of building a 'socially relevant' humanoid robot is discussed.

Can we talk to robots? Ten-month-old infants expected interactive humanoid robots to be talked to by persons

Cognition, 2005

As technology advances, many human-like robots are being developed. Although these humanoid robots should be classified as objects, they share many properties with human beings. This raises the question of how infants classify them. Based on the looking-time paradigm used by [Legerstee, M. (2000). Precursors to the development of intention at 6 months: understanding people and their actions. Developmental Psychology, 36(5), 627-634], we investigated whether 10-month-old infants expected people to talk to a humanoid robot. In a familiarization period, each infant observed an actor and either an interactive robot behaving like a human, a non-interactive robot remaining stationary, or a non-interactive robot behaving like a human. In subsequent test trials, the infants were shown another actor talking to the robot and to the actor. We found that infants who had previously observed the interactive robot showed no difference in looking time between the two types of test events. Infants in the other conditions, however, looked longer at the test event in which the second experimenter talked to the robot than at the one in which the second experimenter talked to the person. These results suggest that infants interpret the interactive robot as a communicative agent and the non-interactive robot as an object. Our findings imply that infants categorize interactive humanoid robots as a kind of human being.

Infant-like Social Interactions between a Robot and a Human Caregiver

Adaptive Behavior, 2000

This paper presents an autonomous robot designed to interact socially with human "parents". A human infant's emotions and drives play an important role in generating meaningful interactions with the caretaker, regulating these interactions to maintain an environment suitable for the learning process, and assisting the caretaker in satisfying the infant's drives. For our purposes, the ability to regulate how intensely the caretaker engages the robot is vital to successful learning in a social context.
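A minimal sketch, inspired by but not reproducing the architecture the paper describes, of how a single "social stimulation" drive could regulate interaction intensity: the drive drifts toward under-stimulation, is pushed up by the caregiver's engagement, and triggers expressive feedback that asks the caregiver to raise or lower the intensity so the interaction stays in a learnable range. The ranges, gains, and expression labels are assumptions.

```python
def update_drive(drive: float, stimulus: float, decay: float = 0.05) -> float:
    """Drive in [-1, 1]: negative = under-stimulated, positive = over-stimulated."""
    return max(-1.0, min(1.0, drive + stimulus - decay))

def expressive_response(drive: float) -> str:
    """Expression meant to nudge the caregiver toward the homeostatic range."""
    if drive < -0.3:
        return "show interest / call for attention"
    if drive > 0.3:
        return "look away / show fatigue"
    return "engage normally"

drive = 0.0
for stimulus in [0.0, 0.0, 0.4, 0.4, 0.4, 0.1, 0.0]:   # caregiver intensity over time
    drive = update_drive(drive, stimulus)
    print(f"drive={drive:+.2f} -> {expressive_response(drive)}")
```

The point of such regulation, as the abstract notes, is not the expressions themselves but keeping the stimulation level inside a range where learning from the caregiver remains possible.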