A Robot that Uses Arousal to Detect Learning Challenges and Seek Help
Related papers
Frontiers in neurorobotics, 2014
In the context of our work in developmental robotics on robot-human caregiver interactions, in this paper we investigate how a "baby" robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a "caregiver" to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that independently assessed (a) the differences between two "idealized" robot profiles, a "needy" and an "independent" robot, in terms of their use of a caregiver as a means to regulate the "stress" (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot's behaviors of two caregiving profiles varying in their…
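As a rough sketch of the adaptation idea outlined above, the code below shows one way an arousal-regulated robot might shift its help-seeking threshold toward "neediness" or "independence" depending on how responsive the caregiver has been. Everything here (the class name `AdaptiveHelpSeeker`, the constants, the moving-average update) is a hypothetical illustration under assumed dynamics, not the architecture reported in the paper.

```python
# Illustrative sketch (not the authors' code): a robot that raises or lowers
# its arousal threshold for soliciting help based on how responsive the
# caregiver has been in past interactions.

class AdaptiveHelpSeeker:
    def __init__(self, threshold=0.6, adapt_rate=0.05):
        self.threshold = threshold      # arousal level that triggers a call for help
        self.adapt_rate = adapt_rate    # how quickly the threshold adapts
        self.responsiveness = 0.5       # running estimate of caregiver responsiveness

    def update_responsiveness(self, caregiver_responded: bool):
        # Exponential moving average of whether past calls were answered.
        target = 1.0 if caregiver_responded else 0.0
        self.responsiveness += 0.2 * (target - self.responsiveness)

    def adapt_threshold(self):
        # A less responsive caregiver pushes the robot toward independence
        # (higher threshold, fewer calls); a responsive one toward neediness.
        self.threshold += self.adapt_rate * (0.5 - self.responsiveness)
        self.threshold = min(max(self.threshold, 0.2), 0.9)

    def should_seek_help(self, arousal: float) -> bool:
        return arousal > self.threshold
```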
Eliciting caregiving behavior in dyadic human-robot attachment-like interactions
ACM Transactions on Interactive Intelligent Systems, 2012
Based on research in developmental robotics and on psychology findings from attachment theory in young infants, we designed an arousal-based model controlling the behaviour of a Sony AIBO robot during the exploration of a children's play mat. When the robot experiences too many new perceptions, the increase in arousal triggers calls for attention from its human caregiver. The caregiver can choose either to calm the robot down by providing it with comfort, or to leave the robot to cope with the situation on its own. When the arousal of the robot has decreased, the robot moves on to further explore the play mat. We present here the results of two experiments using this arousal-driven control architecture. In the first setting, we show that such a robotic architecture allows the human caregiver to greatly influence the learning outcomes of the exploration episode, with some similarities to a primary caregiver during early childhood. In a second experiment, we tested how human adults behaved in a similar setup with two different robots: one needy, often demanding attention, and one more independent, requesting far less care or assistance. Our results show that human adults recognise each robot profile for what it was designed to be and behave as would be expected, caring more for the needy robot than for the other. Additionally, the subjects exhibited a preference for, and more positive affect whilst interacting with and rating, the robot we designed as needy. This experiment leads us to conclude that our architecture and setup succeeded in eliciting positive and caregiving behaviour from adults of different age groups and technological backgrounds. Finally, the consistency and reactivity of the robot during this dyadic interaction appeared crucial for the enjoyment and engagement of the human partner.
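The control cycle this abstract describes (novel perceptions raise arousal, high arousal triggers a call for attention, comfort from the caregiver lowers arousal, and exploration then resumes) can be sketched as a simple loop. The Python below is only an illustrative approximation under assumed thresholds and a toy novelty measure; the names and constants are not taken from the AIBO implementation.

```python
import random

AROUSAL_LIMIT = 0.8     # assumed threshold for calling the caregiver
COMFORT_RELIEF = 0.5    # assumed drop in arousal when comforted
DECAY = 0.05            # assumed passive arousal decay per step

def novelty_of(perception, memory):
    # Placeholder novelty measure: 1.0 for unseen perceptions, 0.0 otherwise.
    return 0.0 if perception in memory else 1.0

def exploration_episode(perceptions, caregiver_comforts):
    arousal, memory = 0.0, set()
    for step, p in enumerate(perceptions):
        arousal = max(0.0, arousal + 0.3 * novelty_of(p, memory) - DECAY)
        memory.add(p)
        if arousal > AROUSAL_LIMIT:
            print(f"step {step}: arousal {arousal:.2f} -> calling for attention")
            if caregiver_comforts():
                arousal -= COMFORT_RELIEF   # comfort calms the robot down
            # otherwise the robot must cope on its own and keeps exploring
    return memory

# Toy run: letters stand for perceptions; the caregiver answers about 70% of calls.
explored = exploration_episode("ABACADAEAF",
                               caregiver_comforts=lambda: random.random() < 0.7)
```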
Infant-like Social Interactions between a Robot and a Human Caregiver
Adaptive Behavior, 2000
This paper presents an autonomous robot designed to interact socially with human "parents". A human infant's emotions and drives play an important role in generating meaningful interactions with the caretaker, regulating these interactions to maintain an environment suitable for the learning process, and assisting the caretaker in satisfying the infant's drives. For our purposes, the ability to regulate how intensely the caretaker engages the robot is vital to successful learning in a social context.
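A minimal sketch of the kind of regulation described above, assuming a single homeostatic drive: when the caretaker's stimulation pushes the drive outside a comfort band, the robot produces an expressive cue that asks the caretaker to ease off or to engage more. The class and thresholds are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: a homeostatic drive stays within a comfort band; when the
# caretaker engages too intensely (or too little), the robot's expression asks
# the caretaker to back off (or to engage more).

class Drive:
    def __init__(self, low=0.3, high=0.7):
        self.level, self.low, self.high = 0.5, low, high

    def update(self, stimulation: float):
        # Stimulation from the caretaker pushes the drive up; it decays otherwise.
        self.level = min(1.0, max(0.0, self.level + 0.1 * stimulation - 0.03))

    def expressive_cue(self) -> str:
        if self.level > self.high:
            return "withdraw / look away (interaction too intense)"
        if self.level < self.low:
            return "attention-seeking display (interaction too weak)"
        return "engaged, content expression"

drive = Drive()
for stimulation in [1.0, 1.0, 1.0, 1.0, 0.0]:
    drive.update(stimulation)
print(drive.expressive_cue())
```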
Robot uses emotions to detect and learn the unknown
Biologically Inspired Cognitive Architectures, 2013
Humans can perceive and learn new information from novel kinds of experiences previously unknown to them, which can be very challenging for an artificial system. Here, a cognitive architecture is presented that uses its emotional intelligence to learn new concepts from previously unknown kinds of experiences. The underlying principle is that emotional appraisals of experience, expressed internally as several MoNADs, help the architecture to detect conceptual novelty and facilitate the generation and learning of new concepts. With the goal of measuring the effects of emotional cognition on learning, the architecture was implemented in a robot and studied in a number of paradigms involving variable color settings. The key findings are the following. Initially, the dynamic state of the model's neural network does not converge to an attractor when it receives an unknown kind of input. On the other hand, it quickly converges to an attractor in response to a familiar input. With time, the system develops the ability to learn previously unknown categories and concepts as new MoNADs. It is proposed that the model simulates a subliminal response of the human brain to an unknown situation. The findings have broad implications for future emotional artificial intelligence.
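One way to read the convergence criterion described in this abstract is as a fixed-point test on a small recurrent network: familiar inputs settle quickly into a stored attractor, while unknown inputs either fail to settle or end up far from every stored pattern. The sketch below uses a Hopfield-style network as a stand-in and is not the MoNAD implementation; the thresholds and the novelty rule are assumptions.

```python
import numpy as np

# Illustrative novelty test (not the paper's MoNAD code): treat failure to settle
# quickly into one of the stored attractors as a signal of conceptual novelty.

rng = np.random.default_rng(0)

def store_patterns(patterns):
    # Hebbian weights for the set of already-known binary (+/-1) patterns.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / len(patterns)

def settle(W, x, max_steps=50):
    # Iterate the dynamics until a fixed point or the step budget runs out.
    for step in range(max_steps):
        x_next = np.sign(W @ x)
        x_next[x_next == 0] = 1
        if np.array_equal(x_next, x):
            return x, step
        x = x_next
    return x, max_steps

def is_novel(W, known, x, step_budget=5, overlap_thresh=0.9):
    final, steps = settle(W, x.astype(float).copy())
    best_overlap = max(abs(final @ p) / len(p) for p in known)
    # Novel if the state did not settle quickly into a known attractor.
    return steps > step_budget or best_overlap < overlap_thresh

known = [rng.choice([-1, 1], size=64) for _ in range(3)]
W = store_patterns(known)
print(is_novel(W, known, known[0]))                      # expected: False (familiar)
print(is_novel(W, known, rng.choice([-1, 1], size=64)))  # typically: True (unknown)
```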
Learning to Interact with the Caretaker: A Developmental Approach
2007
To build autonomous robots able to live and interact with humans in a dynamic and uncertain real-world environment, the design of architectures permitting robots to develop attachment bonds to humans and to use them to build their own model of the world is a promising avenue, not only to improve human-robot interaction and adaptation to the environment, but also as a way to develop further cognitive and emotional capabilities. In this paper we present a neural architecture that enables a robot to develop an attachment bond with a person or an object, and to discover the correct sensorimotor associations to maintain a desired affective state of well-being using a minimal amount of prior knowledge about the possible interactions with this object.
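A toy rendering of the learning problem stated above, assuming a tabular learner in place of the paper's neural architecture: the robot starts with no model and estimates, from experience alone, which sensorimotor action best restores an internal well-being variable toward its set point. All names and the toy environment are hypothetical.

```python
import random

# Hypothetical sketch: learn which action best restores "well-being" toward its
# set point, with no prior model of the interactions available to the robot.

ACTIONS = ["approach_caregiver", "withdraw", "babble", "explore"]
SET_POINT = 0.8

def learn_associations(environment, episodes=200, lr=0.1, epsilon=0.2):
    value = {a: 0.0 for a in ACTIONS}   # expected improvement in well-being per action
    wellbeing = 0.5
    for _ in range(episodes):
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(value, key=value.get))
        new_wellbeing = environment(a, wellbeing)
        # Reward is the reduction in distance to the set point.
        reward = abs(SET_POINT - wellbeing) - abs(SET_POINT - new_wellbeing)
        value[a] += lr * (reward - value[a])
        wellbeing = new_wellbeing
    return value

# Toy environment: only proximity to the attachment figure reliably raises well-being.
def toy_env(action, wb):
    delta = {"approach_caregiver": 0.15, "withdraw": -0.05,
             "babble": 0.02, "explore": -0.02}[action]
    return min(1.0, max(0.0, wb + delta + random.uniform(-0.02, 0.02)))

print(learn_associations(toy_env))
```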
Emotions and Learning in a Developing Robot
Emotions, Qualia, and Consciousness, 2001
The role of emotion has been underestimated in the field of robotics. We claim that emotions have a twofold aspect relevant to the building of a purposeful active robot: a cognitive aspect and a phenomenological one. We need to understand both of these aspects. The first can be split into at least two relevant points. First, emotions could be the basis for binding internal values to different external situations, following the somatic marker theory. Second, emotions could play a crucial role during development, both for taking difficult decisions whose effects are not immediately verifiable and for the creation of more complex behavioral functions. Thus emotions can be seen, from a cognitive point of view, as a reinforcement stimulus, and in this respect they can be modeled in an artificial being. In this sense, emotions can be seen as a medium for linking rewards and values to external situations. Besides, we would like to accept the division between feelings and emotions. Emotions are, in James' words, the bodily theatre in which they are enacted, and feelings are the mental perception of them. We could say that feelings are the qualia of the external (even if bodily) events we call emotions. We are using this model of emotions in the development of our project, Babibot. We stress the importance of emotions during development as endogenous teaching devices.
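The "emotion as reinforcement stimulus" reading above can be illustrated with a somatic-marker-like shortcut: situations acquire a valence tag from experienced outcomes, and that tag later biases choices before any slow deliberation. The sketch is a hedged illustration; the class name and learning rule are not from the chapter.

```python
from collections import defaultdict

# Illustrative sketch: situations acquire a learned emotional tag (valence) from
# experienced outcomes, and the tag later biases which option is chosen.

class SomaticMarkers:
    def __init__(self, lr=0.3):
        self.valence = defaultdict(float)   # situation -> learned emotional tag
        self.lr = lr

    def experience(self, situation: str, outcome_reward: float):
        # The tag is a running average of rewards met in this situation.
        self.valence[situation] += self.lr * (outcome_reward - self.valence[situation])

    def bias(self, options):
        # Prefer the option whose situation carries the most positive tag.
        return max(options, key=lambda s: self.valence[s])

markers = SomaticMarkers()
markers.experience("hot_surface", -1.0)
markers.experience("caregiver_smiles", +1.0)
print(markers.bias(["hot_surface", "caregiver_smiles"]))   # -> "caregiver_smiles"
```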
Rapid, Autonomous Learning of Visual Object Categories in Robots and Infants
John Watson hypothesized that human infants learn to visually identify caregivers based on social contingency. In an effort to test the computational feasibility of this hypothesis, we implemented a robotic system that learned to locate humans within a visual field without any supervised teacher, using only social contingency information to detect the presence of people.
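A hypothetical sketch of how social contingency alone could single out people in the visual field: the robot emits a cue at random times and scores each image region by how much more often motion follows its own cue than baseline, with no labelled examples of "person". The scoring rule and the toy data below are assumptions, not the system described in the abstract.

```python
import numpy as np

# Illustrative contingency-based person detection: regions whose motion is most
# contingent on the robot's own cues are treated as likely people.

rng = np.random.default_rng(1)

def contingency_scores(robot_acted, region_motion):
    # robot_acted: (T,) 0/1 vector of the robot's own cues (e.g., vocalizations)
    # region_motion: (T, R) 0/1 matrix of motion detected in each of R regions
    p_given_act = region_motion[robot_acted == 1].mean(axis=0)
    p_given_idle = region_motion[robot_acted == 0].mean(axis=0)
    return p_given_act - p_given_idle       # high score = socially contingent region

# Toy data: region 2 (a person) responds to the robot's cues; others move randomly.
T, R = 500, 5
acted = rng.integers(0, 2, size=T)
motion = rng.random((T, R)) < 0.2
motion[:, 2] |= (acted == 1) & (rng.random(T) < 0.7)
print(np.round(contingency_scores(acted, motion.astype(int)), 2))
```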
Perception and human interaction for developmental learning of objects and affordances
In this paper we describe a cognitive architecture for humanoids interacting with objects and caregivers in a developmental robotics scenario. The architecture is foundational to the MACSi project: it is designed to support experiments to make a humanoid robot gradually enlarge its repertoire of known objects and skills combining autonomous learning, social guidance and intrinsic motivation. This complex learning process requires the capability to learn affordances first. Here, we present the general framework for achieving these goals, focusing on the elementary action, perception and interaction modules. Preliminary experiments performed on the humanoid robot iCub are also discussed.
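As a minimal illustration of affordance learning in the sense used above (not the MACSi/iCub implementation), the sketch below stores observed (object features, action) to effect outcomes and predicts the most likely effect for a new query, returning "unknown" for untried combinations, which could then become targets for intrinsically motivated exploration. All names are invented for the example.

```python
from collections import defaultdict

# Illustrative affordance table: learn which effect an action tends to produce
# on objects with given features, so the robot can predict before acting.

class AffordanceTable:
    def __init__(self):
        # (object features, action) -> counts of observed effects
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, object_features: tuple, action: str, effect: str):
        self.counts[(object_features, action)][effect] += 1

    def predict(self, object_features: tuple, action: str):
        effects = self.counts.get((object_features, action))
        if not effects:
            return "unknown"        # never tried: a candidate for exploration
        return max(effects, key=effects.get)

table = AffordanceTable()
table.observe(("round", "small"), "push", "rolls")
table.observe(("flat", "large"), "push", "slides")
print(table.predict(("round", "small"), "push"))     # -> "rolls"
print(table.predict(("flat", "large"), "grasp"))     # -> "unknown"
```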
Toward Teaching a Robot 'Infant' Using Emotive Communication Acts
1998
This paper presents ongoing work towards building an autonomous robot that learns in a social context. The mode of social interaction is that of a caretaker-infant pair where a human acts as the caretaker for the robot. By placing our robot, Kismet, in an environment with a human caretaker who actively assists and guides Kismet's learning, this work explores robot learning in a similar environment to that of a developing infant. In doing so, this approach attempts to take advantage of this special sort of environment and the social interactions it affords in facilitating and constraining learning. This paper proposes an approach where emotive channels of communication are employed during social robot-human interactions to shape and guide what the robot learns.
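A small sketch of the idea of emotive communication acts as a teaching signal, under the assumption that the caretaker's affective feedback can be classified into approval, prohibition, or neutral and mapped to a scalar reward that reshapes the robot's behavioural preferences. The mapping and function names are invented for illustration, not taken from the Kismet work.

```python
# Hypothetical sketch: affective feedback from the caretaker acts as a scalar
# reward that nudges the robot's preference for the behaviour it just produced.

FEEDBACK_REWARD = {"approval": 1.0, "prohibition": -1.0, "neutral": 0.0}

def shape_preferences(preferences, behaviour, feedback, lr=0.25):
    """Nudge the preference for a behaviour in the direction of the caretaker's affect."""
    reward = FEEDBACK_REWARD[feedback]
    preferences[behaviour] = preferences.get(behaviour, 0.0) + lr * reward
    return preferences

prefs = {}
for behaviour, feedback in [("grab_toy", "approval"), ("bang_table", "prohibition"),
                            ("grab_toy", "approval"), ("babble", "neutral")]:
    prefs = shape_preferences(prefs, behaviour, feedback)
print(prefs)   # grab_toy accumulates positive preference; bang_table negative
```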