Leolani: A Reference Machine with a Theory of Mind for Social Communication

A communicative robot to learn about us and the world

2019

We describe a model for a robot that learns about the world and her companions through natural language communication. The model supports open-domain learning, where the robot has a drive to learn about new concepts, new friends, and new properties of friends and concept instances. The robot tries to fill gaps, resolve uncertainties and resolve conflicts. The absorbed knowledge consists of everything people tell her, the situations and objects she perceives and whatever she finds on the web. The results of her interactions and perceptions are kept in an RDF triple store to enable reasoning over her knowledge and experiences. The robot uses a theory of mind to keep track of who said what, when and where. Accumulating knowledge results in complex states to which the robot needs to respond. In this paper, we look into two specific aspects of such complex knowledge states: 1) reflecting on the status of the knowledge acquired through a new notion of thoughts and 2) defining the conte...
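The storage design described in the abstract can be sketched in miniature: each assertion is kept together with its provenance (who said it, when and where), which is what a reified RDF statement or a named graph would hold in an actual triple store, and disagreements between sources surface as conflicts to resolve in dialogue. The sketch below is a plain-Python stand-in, not the Leolani implementation; all class, field and example names are illustrative.

```python
from dataclasses import dataclass

# Each claim is a triple plus provenance, mirroring a reified RDF
# statement: (subject, predicate, object) + who/when/where.
@dataclass(frozen=True)
class Claim:
    subject: str
    predicate: str
    obj: str
    source: str   # who said it
    time: str     # when
    place: str    # where

class KnowledgeStore:
    def __init__(self):
        self.claims = []

    def add(self, claim):
        self.claims.append(claim)

    def conflicts(self, subject, predicate):
        """Map each asserted value to the sources claiming it; a result
        with more than one value is a conflict the robot could try to
        resolve through communication."""
        values = {}
        for c in self.claims:
            if c.subject == subject and c.predicate == predicate:
                values.setdefault(c.obj, []).append(c.source)
        return values if len(values) > 1 else {}

store = KnowledgeStore()
store.add(Claim("piek", "likes", "coffee", "selene", "2019-03-01", "lab"))
store.add(Claim("piek", "likes", "tea", "lenka", "2019-03-02", "office"))
print(store.conflicts("piek", "likes"))
# two sources disagree about what piek likes
```

In a real triple store the same effect would come from a SPARQL query over reified statements or named graphs rather than a Python loop.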

Robot Learning Theory of Mind through Self-Observation: Exploiting the Intentions-Beliefs Synergy

arXiv (Cornell University), 2022

In complex environments, where the human sensory system reaches its limits, our behaviour is strongly driven by our beliefs about the state of the world around us. Accessing others' beliefs, intentions, or mental states in general could thus allow for more effective social interactions in natural contexts. Yet these variables are not directly observable. Theory of Mind (ToM), the ability to attribute beliefs, intentions, or mental states in general to other agents, is a crucial feature of human social interaction and has become of interest to the robotics community. Recently, new models that are able to learn ToM have been introduced. In this paper, we show the synergy between learning to predict low-level mental states, such as intentions and goals, and attributing high-level ones, such as beliefs. Assuming that beliefs can be learnt by observing one's own decision and belief-estimation processes in partially observable environments, and using a simple feedforward deep learning model, we show that when learning to predict others' intentions and actions, faster and more accurate predictions can be acquired if belief attribution is learnt simultaneously with action and intention prediction. We show that the learning performance improves even when observing agents with a different decision process, and is higher when observing belief-driven chunks of behaviour. We propose that our architectural approach can be relevant for the design of future adaptive social robots that should be able to autonomously understand and assist human partners in novel natural environments and tasks.
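As a rough illustration of the architectural idea (not the authors' actual model), a shared-trunk feedforward network can serve both prediction tasks: one head outputs low-level action/intention probabilities, the other high-level belief attributions, so during training the gradients from both tasks shape the same hidden features. All layer sizes below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Illustrative dimensions, not taken from the paper.
OBS_DIM, HID_DIM = 12, 16
N_ACTIONS, N_BELIEFS = 4, 3

# Shared trunk: both heads read the same hidden features, which is
# where the intentions-beliefs synergy can arise during learning.
W_trunk = rng.normal(0, 0.1, (OBS_DIM, HID_DIM))
W_action = rng.normal(0, 0.1, (HID_DIM, N_ACTIONS))   # low-level head
W_belief = rng.normal(0, 0.1, (HID_DIM, N_BELIEFS))   # high-level head

def predict(obs):
    """Return (action probabilities, belief probabilities) per step."""
    h = relu(obs @ W_trunk)
    return softmax(h @ W_action), softmax(h @ W_belief)

obs = rng.normal(size=(5, OBS_DIM))     # 5 observed behaviour steps
p_action, p_belief = predict(obs)
```

Training both heads jointly (e.g. with a summed cross-entropy loss) is what the paper's synergy claim is about; the sketch only shows the forward pass of such a multi-task architecture.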

Theory of Mind for a Humanoid Robot

Autonomous Robots, 2002

If we are to build human-like robots that can interact naturally with people, our robots must know not only about the properties of objects but also the properties of animate agents in the world. One of the fundamental social skills for humans is the attribution of beliefs, goals, and desires to other people. This set of skills has often been called a "theory of mind." This paper presents the theories of Leslie and Baron-Cohen [2] on the development of theory of mind in human children and discusses the potential application of both of these theories to building robots with similar capabilities. Initial implementation details and basic skills (such as finding faces and eyes and distinguishing animate from inanimate stimuli) are introduced. I further speculate on the usefulness of a robotic implementation in evaluating and comparing these two models.

Leolani: A Robot That Communicates and Learns about the Shared World

2019

People and robots make mistakes and should therefore recognize and communicate about their “imperfectness” when they collaborate. In previous work [3, 2], we described a female robot model, Leolani (L), that supports open-domain learning through natural language communication, having a drive to learn new information and build social relationships. The absorbed knowledge consists of everything people tell her and the situations and objects she perceives. For this demo, we focus on the symbolic representation of the resulting knowledge. We describe how L can query and reason over her knowledge and experiences as well as access the Semantic Web. As such, we envision L to become a semantic agent with which people can naturally interact.

A Cognitive Architecture Incorporating Theory of Mind in Social Robots towards Their Personal Assistance at Home

2016

Recent studies show that robots are still far from being long-term companions in our daily lives. With an interdisciplinary approach, this position paper is structured around coping with this problem and suggests guidelines on how to develop a cognitive architecture for social robots that assures their long-term personal assistance at home. Following the guidelines, we offer a conceptual cognitive architecture enabling assistant robots to autonomously create cognitive representations of cared-for individuals. Our proposed architecture places a Theory of Mind approach within a metacognitive process, first to empathize and learn with humans, then to guide the robot's high-level decision-making accordingly. These decisions evaluate, regulate and control the robot's cognitive processes towards understanding, validating and caring for the humans it interacts with, and serving them in a personalized way. Hence, robots deploying this architecture will be trustworthy, flexible and generic with respect to human types and needs; in the end, they will establish a secure attachment with the humans they interact with. Finally, we present a use case for our novel cognitive architecture to better visualize our conceptual work.

Reading a robot's mind: a model of utterance understanding based on the theory of mind mechanism

Advanced Robotics, 2000

The purpose of this paper is to construct a methodology for smooth communication between humans and robots. Here, the focus is on a mindreading mechanism, which is indispensable in human-human communication. We propose a model of utterance understanding based on this mechanism. Concretely speaking, we apply the model of a mindreading system (Baron-Cohen 1996) to a model of human-robot communication. Moreover, we implement a robot interface system that applies our proposed model. Psychological experiments were carried out to explore the validity of the following hypothesis: by reading a robot's mind, a human can estimate the robot's intention with ease and, moreover, can even understand the robot's unclear utterances made by synthesized speech sounds. The results of the experiments statistically supported our hypothesis.

Sharing Experiences to Help a Robot Present Its Mind and Sociability

International Journal of Social Robotics

Many social robots have emerged in public places to serve people. For these services, the robots are assumed to be able to present internal aspects (i.e., mind, sociability) to engage and interact with people over the long term. In this paper, we propose a novel dialogue structure called experience-based dialogue to help a robot present these aspects and maintain good interaction over the long term. This dialogue structure contains a piece of knowledge and a story about how the robot gained this knowledge, which are used to compose the robot's experience-related utterances. These utterances share experiences of interacting with previous users, not just the current one, and help the robot present its internal aspects. We conducted an experiment to test the effects of our proposed dialogue structure, measuring them with published subjective scales. The results showed that experience-based dialogue can help a robot obtain better evaluations in terms of perceived intelligence, sociability, mind, anthropomorphism, animacy, likability, level of acceptance, and positive user reaction.

Toward social cognition in robotics: extracting and internalizing meaning from perception

One of the long-term objectives of artificial cognition is that robots will increasingly be capable of interacting with their human counterparts in open-ended tasks that can change over time. To achieve this end, the robot should be able to acquire and internalize new knowledge from human-robot interaction, on-line. This implies that the robot should attend to and perceive the available cues, both verbal and nonverbal, that contain information about the inner qualities of its human counterparts. Social cognition focuses on the perceiver's ability to build cognitive representations of actors (emotions, intentions, etc.) and their contexts. These representations should provide meaning to the sensed inputs and mediate the behavioural responses of the robot within this social scenario. This paper describes how the abilities for building such cognitive representations are currently being endowed in the cognitive software architecture RoboCog. It also presents a first set of complete experiments, involving different user profiles. These experiments show the promising possibilities of the proposal and reveal the main future improvements to be addressed.

Transferring Adaptive Theory of Mind to Social Robots: Insights from Developmental Psychology to Robotics

Lecture Notes in Computer Science, 2019

Despite the recent advancements in the social robotics field, important limitations restrain its progress and delay the application of robots in everyday scenarios. In the present paper, we propose to develop computational models inspired by our knowledge of human infants' social adaptive abilities. We believe this may provide solutions at an architectural level to overcome the limits of current systems. Specifically, we present the functional advantages that adaptive Theory of Mind (ToM) systems would support in robotics (i.e., mentalizing for belief understanding, proactivity and preparation, active perception and learning) and contextualize them in practical applications. We review current computational models, mainly based on the simulation and teleological theories, and robotic implementations, to identify the limitations of ToM functions in current robotic architectures and suggest a possible future developmental pathway. Finally, we propose future studies to create innovative computational models integrating the properties of the simulation and teleological approaches for an improved adaptive ToM ability in robots, with the aim of enhancing human-robot interactions and permitting the application of robots in unexplored environments, such as disaster and construction sites. To achieve this goal, we suggest directing future research towards the modern cross-talk between the fields of robotics and developmental psychology.