How? Why? What? Where? When? Who? Grounding Ontology in the Actions of a Situated Social Agent
Related papers
A communicative robot to learn about us and the world
2019
We describe a model for a robot that learns about the world and her companions through natural language communication. The model supports open-domain learning, where the robot has a drive to learn about new concepts, new friends, and new properties of friends and concept instances. The robot tries to fill gaps, resolve uncertainties and resolve conflicts. The absorbed knowledge consists of everything people tell her, the situations and objects she perceives and whatever she finds on the web. The results of her interactions and perceptions are kept in an RDF triple store to enable reasoning over her knowledge and experiences. The robot uses a theory of mind to keep track of who said what, when and where. Accumulating knowledge results in complex states to which the robot needs to respond. In this paper, we look into two specific aspects of such complex knowledge states: 1) reflecting on the status of the knowledge acquired through a new notion of thoughts and 2) defining the conte...
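A minimal sketch of how such provenance-aware storage might look in Python with rdflib, keeping each utterance in its own named graph so the robot can track who said what, when and where. The namespace, speaker, and triples are invented for illustration and do not reflect the authors' actual schema.

```python
from rdflib import Dataset, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

# Hypothetical namespace and names, not the paper's actual schema.
EX = Namespace("http://example.org/robot/")

ds = Dataset()

# One named graph per utterance: the claim itself lives in that graph...
claim = ds.graph(URIRef(EX["utterance-42"]))
claim.add((EX.Selene, RDF.type, EX.Person))
claim.add((EX.Selene, EX.owns, EX.cat1))

# ...while provenance (who said it, when, where) goes in the default graph.
ds.add((EX["utterance-42"], EX.saidBy, EX.speaker1))
ds.add((EX["utterance-42"], EX.saidAt,
        Literal("2019-05-01T10:00:00", datatype=XSD.dateTime)))
ds.add((EX["utterance-42"], EX.saidIn, EX.lab))

# "Who told you that Selene owns a cat?" -> find the graphs holding the triple.
for graph in ds.contexts((EX.Selene, EX.owns, EX.cat1)):
    print(graph.identifier)
```

Separating claims from provenance this way is one common pattern for the kind of theory-of-mind bookkeeping the abstract describes; reification or RDF-star would be alternatives.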
2020
Social robots and artificial agents should be able to interact with the user in the most natural way possible. This work describes the basic principles of a conversation system designed for social robots and artificial agents, which relies on knowledge encoded in the form of an Ontology. Given the knowledge-driven approach, the possibility of expanding the Ontology at run time, during verbal interaction with the users, is of the utmost importance: this paper also deals with the implementation of a system for the run-time expansion of the knowledge base, thanks to a crowdsourcing approach.
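As a rough illustration of run-time ontology expansion (not the paper's actual crowdsourcing pipeline), the sketch below adds a concept mentioned during conversation as a new RDFS class without restarting the knowledge base; the namespace and class names are invented.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/onto/")  # hypothetical ontology namespace

g = Graph()
g.add((EX.Animal, RDF.type, RDFS.Class))

def learn_concept(label: str, parent) -> None:
    """Attach a concept heard in conversation as a new subclass, at run time."""
    concept = EX[label.capitalize()]
    g.add((concept, RDF.type, RDFS.Class))
    g.add((concept, RDFS.subClassOf, parent))
    g.add((concept, RDFS.label, Literal(label)))

# User (or a crowdsourced answer) states: "a capybara is an animal".
# The ontology grows immediately, with no redeploy of the knowledge base.
learn_concept("capybara", EX.Animal)
print(g.serialize(format="turtle"))
```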
Toward social cognition in robotics: extracting and internalizing meaning from perception
One of the long-term objectives of artificial cognition is that robots will increasingly be capable of interacting with their human counterparts in open-ended tasks that can change over time. To achieve this end, the robot should be able to acquire and internalize new knowledge from human-robot interaction, online. This implies that the robot should attend to and perceive the available cues, both verbal and nonverbal, that contain information about the inner qualities of the human counterparts. Social cognition focuses on the perceiver's ability to build cognitive representations of actors (emotions, intentions, ...) and their contexts. These representations should provide meaning to the sensed inputs and mediate the behavioural responses of the robot within this social scenario. This paper describes how the abilities for building such cognitive representations are currently being endowed in the cognitive software architecture RoboCog. It also presents a first set of complete experiments, involving different user profiles. These experiments show the promising possibilities of the proposal, and reveal the main future improvements to be addressed.
Toward robots as embodied knowledge media
2006
We describe attempts to have robots behave as embodied knowledge media that will permit knowledge to be communicated through embodied interactions in the real world. The key issue here is to give robots the ability to associate interactions with information content while interacting with a communication partner. Toward this end, we present two contributions in this paper. The first concerns the formation and maintenance of joint intention, which is needed to sustain the communication of knowledge between humans and robots.
Towards self-explaining social robots. Verbal explanation strategies for a needs-based architecture
2019
Stange S, Buschmeier H, Hassan T, Ritter C, Kopp S. Towards self-explaining social robots. Verbal explanation strategies for a needs-based architecture. Presented at the AAMAS 2019 Workshop on Cognitive Architectures for HRI: Embodied Models of Situated Natural Language Interactions (MM-Cog), Montréal, Canada.
In order to establish long-term relationships with users, social companion robots and their behaviors need to be comprehensible. Purely reactive behavior such as answering questions or following commands can be readily interpreted by users. However, the robot's proactive behaviors, included in order to increase liveliness and improve the user experience, often raise a need for explanation. In this paper, we provide a concept to produce accessible “why-explanations” for the goal-directed behavior an autonomous, lively robot might produce. To this end, we present an architecture that provides reasons for behaviors in terms of comprehensible needs and strategies of the robot, a...
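The paper's needs-based architecture is not reproduced here, but a toy sketch conveys the core idea of verbalizing a behavior's reason as a need plus a strategy; the table entries and wording below are hypothetical.

```python
# Hypothetical behavior -> (need, strategy) table; the actual needs-based
# architecture in the paper is far richer than this toy mapping.
EXPLANATIONS = {
    "approach_user": ("social contact", "seeking interaction when feeling lonely"),
    "play_music":    ("stimulation", "entertaining myself when bored"),
}

def explain(behavior: str) -> str:
    """Verbalize a 'why-explanation' in terms of a need and a strategy."""
    need, strategy = EXPLANATIONS.get(behavior, ("an unknown need", "no strategy"))
    return (f"I chose to {behavior.replace('_', ' ')} because my need for "
            f"{need} was unmet, and {strategy} helps satisfy it.")

print(explain("approach_user"))
```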
Leolani: A Robot That Communicates and Learns about the Shared World
2019
People and robots make mistakes and should therefore recognize and communicate about their “imperfectness” when they collaborate. In previous work [3, 2], we described a female robot model, Leolani (L), that supports open-domain learning through natural language communication, having a drive to learn new information and build social relationships. The absorbed knowledge consists of everything people tell her and the situations and objects she perceives. For this demo, we focus on the symbolic representation of the resulting knowledge. We describe how L can query and reason over her knowledge and experiences as well as access the Semantic Web. As such, we envision L to become a semantic agent with which people could naturally interact.
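A hedged sketch of the two kinds of access the abstract mentions: a SPARQL query over the robot's own triples with rdflib, and a lookup against a public Semantic Web endpoint (DBpedia) via SPARQLWrapper. The local graph contents and namespace are invented, and the remote query needs network access.

```python
from rdflib import Graph, Namespace
from SPARQLWrapper import SPARQLWrapper, JSON

EX = Namespace("http://example.org/robot/")  # hypothetical namespace

g = Graph()
g.add((EX.Selene, EX.owns, EX.cat1))

# Query the robot's own experiences with SPARQL...
for row in g.query(
    "SELECT ?who WHERE { ?who <http://example.org/robot/owns> ?thing }"
):
    print("Known owner:", row.who)

# ...and consult the Semantic Web for general world knowledge.
sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Cat> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
print(result["results"]["bindings"][0]["abstract"]["value"][:80], "...")
```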
When the robot puts itself in your shoes. Managing and exploiting human and robot beliefs.
We have designed and implemented new spatiotemporal reasoning skills for a cognitive robot, which explicitly reasons about human beliefs on object positions. These skills enable the robot to build symbolic models reflecting each agent's perspective on the world. Using these models, the robot has a better understanding of what humans say and do, and is able to reason about what a human should know to achieve a given goal. These new capabilities are also demonstrated experimentally.
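A minimal sketch, assuming a simple per-agent dictionary of object positions, of the perspective-taking the abstract describes: each agent gets its own symbolic model, and diverging entries flag false beliefs. Names and structure are illustrative, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class AgentModel:
    """One agent's symbolic perspective on where objects are (illustrative)."""
    name: str
    object_positions: dict = field(default_factory=dict)

robot = AgentModel("robot", {"mug": "kitchen_table"})
human = AgentModel("human", {"mug": "kitchen_table"})

# The mug is moved while the human is out of the room: only the models of
# agents that perceived the event get updated.
robot.object_positions["mug"] = "dishwasher"

def diverging_beliefs(a: AgentModel, b: AgentModel) -> list:
    """Objects whose position the two agents disagree on (false beliefs)."""
    return [obj for obj, pos in a.object_positions.items()
            if b.object_positions.get(obj) != pos]

# The robot can predict the human will search the wrong place and can
# proactively correct that belief.
print(diverging_beliefs(robot, human))  # ['mug']
```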
Grounding the Interaction: Anchoring Situated Discourse in Everyday Human-Robot Interaction
International Journal of Social Robotics, 2011
This paper presents how extraction, representation and use of symbolic knowledge from real-world perception and human-robot verbal and non-verbal interaction can actually enable a grounded and shared model of the world that is suitable for later high-level tasks such as dialogue understanding. We show how the anchoring process itself relies on the situated nature of human-robot interactions. We present an integrated approach, including a specialized symbolic knowledge representation system based on Description Logics, and case studies on several robotic platforms that demonstrate these cognitive capabilities.
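To make the anchoring idea concrete, here is a small sketch using rdflib with the owlrl reasoner: a percept from the vision pipeline is asserted as an individual of a symbolic class, and entailments are materialized so higher-level dialogue queries succeed. The namespace and axioms are invented and stand in for the paper's Description Logics knowledge base.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS
from owlrl import DeductiveClosure, OWLRL_Semantics

EX = Namespace("http://example.org/kb/")  # invented namespace and axioms

g = Graph()
# TBox: a small class hierarchy in the spirit of a Description Logics KB.
g.add((EX.Tableware, RDFS.subClassOf, EX.Artifact))
g.add((EX.Cup, RDFS.subClassOf, EX.Tableware))

# ABox: anchor a percept from the vision pipeline to a symbolic individual.
g.add((EX.obj_01, RDF.type, EX.Cup))
g.add((EX.obj_01, EX.isOn, EX.table1))

# Materialize entailments so a dialogue query like "which artifacts do you
# see?" succeeds although obj_01 was only asserted to be a Cup.
DeductiveClosure(OWLRL_Semantics).expand(g)
print((EX.obj_01, RDF.type, EX.Artifact) in g)  # True
```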