Concept Acquisition through Linguistic Human-Robot Interaction

Human-Robot Interaction in Concept Acquisition: a computational model

… and Learning (ICDL), 2009

This paper presents a discussion and simulation results which support the case for interaction during the acquisition of conceptual knowledge. Taking a developmental perspective, we first review a number of relevant insights on word-meaning acquisition in young children and focus specifically on concept learning supported by linguistic input. We present a computational model implementing a number of acquisition strategies which enable a learning agent to actively steer the learning process. This is contrasted with a one-way learning method, in which the learner does not actively influence the learning experience. We present results demonstrating how dyadic interaction between a teacher and a learner may result in better acquisition of concepts.
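The contrast between a learner that actively steers the interaction and one-way teaching can be illustrated with a minimal simulation. Everything below (the three color concepts, the noise model, the teacher's sampling policy) is a hypothetical sketch, not the paper's actual model: the active learner simply asks the teacher about the concept it has seen least.

```python
import random

random.seed(0)

# Hypothetical toy setup: three color concepts, each a 1-D prototype.
CONCEPTS = {"red": 0.9, "green": 0.5, "blue": 0.1}

def sample(concept):
    """Teacher shows a noisy exemplar of the named concept."""
    return CONCEPTS[concept] + random.gauss(0, 0.05)

class Learner:
    def __init__(self):
        self.examples = {c: [] for c in CONCEPTS}

    def observe(self, concept, value):
        self.examples[concept].append(value)

    def prototype(self, concept):
        xs = self.examples[concept]
        return sum(xs) / len(xs) if xs else 0.5

    def classify(self, value):
        return min(CONCEPTS, key=lambda c: abs(value - self.prototype(c)))

def teach(active, n_rounds=30):
    learner = Learner()
    for _ in range(n_rounds):
        if active:
            # Active strategy: ask about the concept with the fewest examples.
            concept = min(CONCEPTS, key=lambda c: len(learner.examples[c]))
        else:
            # One-way teaching: the teacher picks on its own, possibly unbalanced.
            concept = random.choice(["red", "red", "red", "green", "blue"])
        learner.observe(concept, sample(concept))
    # Evaluate against the clean prototypes.
    return sum(learner.classify(v) == c for c, v in CONCEPTS.items()) / len(CONCEPTS)

print("passive accuracy:", teach(active=False))
print("active accuracy:", teach(active=True))
```

The active learner guarantees balanced evidence for every concept, which is one concrete way "steering the learning process" can pay off; the passive learner is at the mercy of the teacher's sampling.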

Robot learning of lexical semantics from sensorimotor interaction and the unrestricted speech of human tutors

2010

This paper describes an HRI case study which demonstrates how a humanoid robot can use simple heuristics to acquire and use vocabulary while being shown a series of shapes by a human, and how the human's interaction style changes as the robot learns and expresses its learning through speech. The case study is based on findings on how adults use child-directed speech when socially interacting with infants. The results indicate that humans are generally willing to engage with a robot in a manner similar to their engagement with a human infant, with similar styles of interaction that vary as the shared understanding between them becomes more apparent. The case study also demonstrates that a rudimentary form of shared intentional reference can sufficiently bias the learning procedure. As a result, the robot associates human-taught lexical items for a series of presented shapes with its own sensorimotor experience, and is able to utter these words, acquired from the particular tutor, appropriately in an interactive, embodied context exhibiting apparent reference and discrimination.

Revisiting Human-Robot Teaching and Learning Through the Lens of Human Concept Learning

2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI)

When interacting with a robot, humans form conceptual models (of varying quality) which capture how the robot behaves. These conceptual models form just from watching or interacting with the robot, with or without conscious thought. Some methods select and present robot behaviors to improve human conceptual model formation; nonetheless, these methods and HRI more broadly have not yet consulted cognitive theories of human concept learning. These validated theories offer concrete design guidance to support humans in developing conceptual models more quickly, accurately, and flexibly. Specifically, Analogical Transfer Theory and the Variation Theory of Learning have been successfully deployed in other fields, and offer new insights for the HRI community about the selection and presentation of robot behaviors. Using these theories, we review and contextualize 35 prior works in human-robot teaching and learning, and we assess how these works incorporate or omit the design implications of these theories. From this review, we identify new opportunities for algorithms and interfaces to help humans more easily learn conceptual models of robot behaviors, which in turn can help humans become more effective robot teachers and collaborators.

Acquiring Vocabulary through Human Robot Interaction: A Learning Architecture for Grounding Words with Multiple Meanings

2010 AAAI Fall Symposium Series, 2010

This paper presents a robust methodology for grounding vocabulary in robots. A social language grounding experiment is designed in which a human instructor teaches a robotic agent the names of the objects present in a visually shared environment. Any system for grounding vocabulary has to incorporate the properties of gradual evolution and lifelong learning. The learning model of the robot is adopted from ongoing work on developing systems that conform to these properties. Significant modifications have been introduced to the adopted model, especially to handle words with multiple meanings. A novel classification strategy has been developed to improve the performance of each classifier for each learned category. A set of six new nearest-neighbor-based classifiers has also been integrated into the agent architecture. A series of experiments was conducted to test the performance of the new model on vocabulary acquisition. The robot was shown to be robust at acquiring vocabulary and has the potential to learn a far greater number of words (with either single or multiple meanings).
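A minimal sketch of the instance-based, nearest-neighbor grounding idea described above (the 2-D feature space and the example words are hypothetical; the paper's actual six classifiers are not reproduced here): a word with multiple meanings simply accumulates instances in different regions of feature space, and no single prototype is forced.

```python
from math import dist  # Euclidean distance, Python 3.8+

class GroundedLexicon:
    """Instance-based (nearest-neighbor) store of word-percept pairs."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, word)

    def teach(self, word, features):
        self.instances.append((tuple(features), word))

    def name(self, features):
        """Return the word of the nearest stored instance."""
        f = tuple(features)
        return min(self.instances, key=lambda inst: dist(inst[0], f))[1]

lex = GroundedLexicon()
# Hypothetical 2-D visual features (say, hue and elongation).
lex.teach("ball",  (0.9, 0.1))
lex.teach("stick", (0.2, 0.9))
lex.teach("bat",   (0.25, 0.95))  # "bat" as an elongated object
lex.teach("bat",   (0.6, 0.3))    # "bat" as a quite different percept: second meaning

print(lex.name((0.62, 0.28)))  # → bat (matches the second meaning's region)
print(lex.name((0.88, 0.12)))  # → ball
```

Because classification goes through the nearest individual instance rather than a per-word average, the two meanings of "bat" never get blended into a meaningless midpoint.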

The ITALK Project: A Developmental Robotics Approach to the Study of Individual, Social, and Linguistic Learning

Topics in Cognitive Science, 2014

This article presents results from a multidisciplinary research project on the integration and transfer of language knowledge into robots as an empirical paradigm for the study of language development in both humans and humanoid robots. Within the framework of human linguistic and cognitive development, we focus on how three central types of learning interact and co-develop: individual learning about one's own embodiment and the environment, social learning (learning from others), and learning of linguistic capability. Our primary concern is how these capabilities can scaffold each other's development in a continuous feedback cycle as their interactions yield increasingly sophisticated competencies in the agent's capacity to interact with others and manipulate its world. Experimental results are summarized in relation to milestones in human linguistic and cognitive development and show that the mutual scaffolding of social learning, individual learning, and linguistic capability …

Interactive Language Learning by Robots: The Transition from Babbling to Word Forms

PLoS ONE, 2012

The advent of humanoid robots has enabled a new approach to investigating the acquisition of language, and we report on the development of robots able to acquire rudimentary linguistic skills. Our work focuses on early stages analogous to some characteristics of a human child of about 6 to 14 months, the transition from babbling to first word forms. We investigate one mechanism among many that may contribute to this process, a key factor being the sensitivity of learners to the statistical distribution of linguistic elements. As well as being necessary for learning word meanings, the acquisition of anchor word forms facilitates the segmentation of an acoustic stream through other mechanisms. In our experiments some salient one-syllable word forms are learnt by a humanoid robot in real-time interactions with naive participants. Words emerge from random syllabic babble through a learning process based on a dialogue between the robot and the human participant, whose speech is perceived by the robot as a stream of phonemes. Numerous ways of representing the speech as syllabic segments are possible. Furthermore, the pronunciation of many words in spontaneous speech is variable. However, in line with research elsewhere, we observe that salient content words are more likely than function words to have consistent canonical representations; thus their relative frequency increases, as does their influence on the learner. Variable pronunciation may contribute to early word form acquisition. The importance of contingent interaction in real-time between teacher and learner is reflected by a reinforcement process, with variable success. The examination of individual cases may be more informative than group results. Nevertheless, word forms are usually produced by the robot after a few minutes of dialogue, employing a simple, real-time, frequency dependent mechanism. 
This work shows the potential of human-robot interaction systems in studies of the dynamics of early language acquisition.
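The "simple, real-time, frequency dependent mechanism" described above can be caricatured in a few lines. The threshold, the syllable inventory, and the toy stream are invented for illustration; the idea is just to count syllable n-grams in the perceived stream and keep the frequent ones as word-form candidates.

```python
from collections import Counter

def salient_word_forms(syllable_stream, threshold=3):
    """Count 1- and 2-syllable sequences in the perceived stream and
    return those frequent enough to be word-form candidates.
    A crude, offline stand-in for the paper's real-time mechanism."""
    counts = Counter()
    for n in (1, 2):
        for i in range(len(syllable_stream) - n + 1):
            counts["".join(syllable_stream[i:i + n])] += 1
    return [form for form, c in counts.most_common() if c >= threshold]

# A tutor repeatedly naming a shape ("circle", as syllables) amid other talk.
stream = ["cir", "cle", "look", "cir", "cle", "yes", "cir", "cle", "good"]
print(salient_word_forms(stream))  # "circle" and its syllables pass the threshold
```

Because salient content words recur in consistent canonical form while the surrounding talk varies, their counts cross the threshold first, which mirrors the paper's observation that relative frequency drives early word-form acquisition.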

Acquisition of Abstract Words for Cognitive Robots

Jurnal Teknologi, 2015

Word learning and comprehension is a crucial issue because of its applications and problematic nature. The problem does not belong only to cognitive robotics; it is also significant in neuroscience and cognitive science. Several issues, such as the symbol grounding problem, sensorimotor processing within the grounded cognition framework, and methods for representing conceptual knowledge, have to be addressed and solved for the acquisition of abstract words in cognitive robots. This paper explains these concepts and elucidates how they are linked to the problem. First, the symbol grounding problem is discussed; then an overview of grounded cognition is given, along with details of methods and ideas that suggest how abstract word representation could use the sensorimotor system. Finally, the computational methods used for the representation of conceptual knowledge are discussed. Two cognitive robotics models, based on a neural network and a semantic network respectively, that ground abstract words are presented and compared via simulation experiments to identify the pros and cons of the computational methods for this problem. The aim of this paper is to explore the building blocks of a cognitive robotic model, at theoretical and experimental levels, for the grounding of abstract words.

Embodied Language Acquisition: A Proof of Concept

2009

For robots to interact with humans at the language level, it becomes fundamental that robots and humans share a common language. In this paper, a social language grounding paradigm is adopted to teach a robotic arm basic vocabulary about objects in its environment. A human user, acting as an instructor, teaches the names of the objects present in their shared field of view. The robotic agent grounds these words by associating them to visual category descriptions. A component-based object representation is presented. An instance-based approach is used for category representation. An instance is described by its components and the geometric relations between them. Each component is a color blob or an aggregation of neighboring color blobs. The categorization strategy is based on graph matching. The learning/grounding capacity of the robot is assessed over a series of semi-automated experiments and the results are reported.
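A toy rendering of the component-based representation and graph-matching categorization described above. The component labels, relation names, and overlap-based scoring rule are all hypothetical simplifications of real graph matching: each instance is a set of color-blob components plus labeled geometric relations, and a scene is assigned to the category whose stored instance it overlaps most.

```python
# An instance is (components, relations): components are color labels,
# relations are (component, relation, component) triples.

def graph_score(instance, scene):
    """Crude graph-similarity score: overlap of components and relations."""
    comps_i, rels_i = instance
    comps_s, rels_s = scene
    comp_overlap = len(set(comps_i) & set(comps_s))
    rel_overlap = len(set(rels_i) & set(rels_s))
    return comp_overlap + 2 * rel_overlap  # shared structure weighted higher

def categorize(scene, categories):
    """Return the category whose best-matching instance resembles the scene most."""
    return max(categories,
               key=lambda c: max(graph_score(inst, scene) for inst in categories[c]))

categories = {
    # "cup": a red blob with another red blob (the handle) to its right
    "cup": [(("red", "red"), (("red", "right-of", "red"),))],
    # "light": three blobs stacked vertically
    "light": [(("red", "amber", "green"),
               (("red", "above", "amber"), ("amber", "above", "green")))],
}

scene = (("red", "red"), (("red", "right-of", "red"),))
print(categorize(scene, categories))  # → cup
```

The point of scoring relations, not just components, is that two categories can share the same blobs (both contain red) yet differ in how those blobs are arranged, which is exactly what a component-plus-relations representation captures.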

Automatic language acquisition by an autonomous robot

2003

There is no such thing as a disembodied mind. We posit that cognitive development can only occur through interaction with the physical world. To this end, we are developing a robotic platform for the purpose of studying cognition. We suggest that the central component of cognition is a memory which is primarily associative, one where learning occurs as the correlation of events from diverse inputs. We also believe that human-like cognition requires a well-integrated sensorimotor system to provide these diverse inputs. As implemented in our robot, this system includes binaural hearing, stereo vision, tactile sense, and basic proprioceptive control. On top of these abilities, we are implementing and studying various models of processing, learning and decision making. Our goal is to produce a robot that will learn to carry out simple tasks in response to natural language requests. The robot's understanding of language will be learned concurrently with its other cognitive abilities. We have already developed a robust system and conducted a number of experiments on the way to this goal, some details of which appear in this paper. This is a progress report on what we believe will be a long-term project with significant implications.
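The associative memory described above (learning as the correlation of events from diverse inputs) might be caricatured as a cross-modal co-occurrence store. The channels, events, and Hebbian-style update below are illustrative assumptions, not the authors' implementation.

```python
from collections import defaultdict

class AssociativeMemory:
    """Toy cross-modal associative memory: events that co-occur across
    input channels (audition, vision, touch, ...) strengthen a link."""

    def __init__(self):
        self.strength = defaultdict(float)

    def experience(self, events):
        # Hebbian-style update: every pair of co-occurring events is linked.
        for a in events:
            for b in events:
                if a != b:
                    self.strength[(a, b)] += 1.0

    def recall(self, cue, channel):
        """Strongest association from `cue` into the named channel."""
        candidates = {b: s for (a, b), s in self.strength.items()
                      if a == cue and b[0] == channel}
        return max(candidates, key=candidates.get) if candidates else None

mem = AssociativeMemory()
# Events are tagged (channel, value): hearing "ball" while seeing a sphere.
mem.experience([("audio", "ball"), ("vision", "sphere")])
mem.experience([("audio", "ball"), ("vision", "sphere"), ("touch", "smooth")])
mem.experience([("audio", "cup"), ("vision", "cylinder")])

print(mem.recall(("audio", "ball"), "vision"))  # → ('vision', 'sphere')
```

Nothing here is language-specific: the word "ball" is just one more event, which is how language understanding can be "learned concurrently with its other cognitive abilities" in this style of architecture.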

Linguistic Primitives: A New Model for Language Development in Robotics

Often in robotics, natural language processing is used simply to improve human-machine interaction. However, language is not only a powerful communication tool: it is deeply linked to the inner organization of the mind, and it guides its development. The aim of this paper is to take a first step towards a model of language which can be integrated with the diverse abilities of the robot, thus driving its cognitive development and eventually speeding up its learning capacity. To this end we propose and implement the Language Primitives Model (LPM) to imitate babbling, a phase of the learning process that characterizes babies a few months old. LPM is based on the same principles as the Motor Primitives model. The results obtained compare positively with experimental data and observations of children, confirming the interest of the new model.
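One way to picture primitive-based babbling with reinforcement, in the spirit of the abstract above. The syllable inventory, weights, and caregiver policy below are invented for illustration and are not the LPM itself: consonant and vowel primitives combine into syllables, and combinations the caregiver echoes get strengthened.

```python
import random

random.seed(1)

# Hypothetical primitive inventories, loosely analogous to motor primitives.
CONSONANTS = ["b", "d", "m", "g"]
VOWELS = ["a", "i", "u"]

class Babbler:
    def __init__(self):
        # One weight per consonant-vowel combination; babbling starts uniform.
        self.weights = {c + v: 1.0 for c in CONSONANTS for v in VOWELS}

    def babble(self):
        syllables = list(self.weights)
        w = [self.weights[s] for s in syllables]
        return random.choices(syllables, weights=w, k=1)[0]

    def reinforce(self, syllable, amount=2.0):
        # Caregiver echoes the syllable: strengthen that primitive combination.
        self.weights[syllable] += amount

baby = Babbler()
for _ in range(200):
    s = baby.babble()
    if s in ("ma", "da"):  # caregiver responds only to word-like forms
        baby.reinforce(s)

print(max(baby.weights, key=baby.weights.get))
```

The drift from uniform random babble toward a few caregiver-reinforced forms is the qualitative transition the model targets: word-like forms emerge from primitive combinations without ever being programmed in.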