Towards an Integrated Robot with Multiple Cognitive Functions
Related papers
ORO, a knowledge management module for cognitive architectures in robotics
2010
This paper presents an embedded cognitive kernel, along with a common-sense ontology, designed for robotics. We believe that a direct and explicit integration of cognition is a compulsory step to enable human-robot interaction in semantic-rich human environments such as our homes. The OpenRobots Ontology (ORO) kernel makes it possible to turn previously acquired symbols into concepts linked to each other. This in turn enables reasoning and the implementation of other advanced cognitive functions such as event handling, categorization, memory management and reasoning on parallel cognitive models. We validate this framework on several cognitive scenarios implemented on three different robotic architectures.
ORO, a knowledge management platform for cognitive architectures in robotics
2010
Abstract—This paper presents an embeddable knowledge processing framework, along with a common-sense ontology, designed for robotics. We believe that a direct and explicit integration of cognition is a compulsory step to enable human-robot interaction in semantic-rich human environments such as our homes. The OpenRobots Ontology (ORO) kernel makes it possible to turn previously acquired symbols into concepts linked to each other.
Layered Cognitive Architectures: Where Cognitive Science Meets Robotics
2000
Although overlooked in recent years as tools for research, cognitive software architectures are designed to bring computational models of reasoning to bear on real-world physical systems. This paper makes a case for using the executives in these architectures as research tools to explore the connection between cognitive science and intelligent robotics.
Artificial Cognition for Human-robot Interaction
INTERNATIONAL JOURNAL ON INTEGRATED EDUCATION, 2018
Human-robot interaction raises distinctive challenges for artificial intelligence. It draws on many domains of AI and, above all, calls for their integration: modelling humans and human cognition, acquiring and representing knowledge, using this knowledge at the human level, carrying out decision-making processes, and turning these decisions into physical actions that are legible to and coordinated with humans. A wide range of AI techniques is involved, from task planning to theory-of-mind building, from visual processing to symbolic reasoning, and from reactive control to action recognition and learning. The focus here is specifically on human-robot interaction, where multi-modal and situated communication can support human-robot collaborative task achievement. The present study deals with the process of using artificial intelligence (AI) for human-robot interaction.
Experiences with CiceRobot, a museum guide cognitive robot
2005
The paper describes CiceRobot, a robot based on a cognitive architecture for robot vision and action. The aim of the architecture is to integrate visual perception and action with knowledge representation, in order to let the robot generate a deep inner understanding of its environment. The principled integration of perception, action and symbolic knowledge is based on the introduction of an intermediate representation built on Gärdenfors' conceptual spaces.
8th Workshop on Behavior Adaptation and Learning for Assistive Robotics, 2024
One way to improve Human-Robot Interaction (HRI) and increase trust, acceptance and mutual understanding is to make the behavior of a social robot more comprehensible and understandable for humans. This is particularly important if humans and machines are to work together as partners. To do this, both must share the same basic understanding of the task and the current situation. We created a model within a cognitive architecture connected to the robot. The cognitive model processed relevant conversational data during a dialog with a human to create a mental model of the situation. The robot's dialog contributions were generated with a Large Language Model (LLM) from OpenAI using suitable prompts. An ACT-R model evaluated the data received by the robot according to predefined criteria (in our example application, hierarchical relationships were established and remembered) and provided feedback to the LLM via the application for prompt augmentation, with the purpose of adapting or fine-tuning the request. Initial tests indicated that this approach may have advantages for dialogic tasks and can compensate for weaknesses in terms of a deeper understanding or "blind spots" on the part of the LLM.
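The feedback loop described in this abstract (a cognitive model evaluates conversational data and augments the next LLM prompt) can be illustrated with a minimal, purely hypothetical sketch; the function names, the "reports to" relation format, and the prompt wording are illustrative assumptions, not the authors' implementation:

```python
# Hypothetical sketch of the cognitive-model-to-LLM feedback loop:
# a stand-in for the ACT-R model extracts and remembers hierarchical
# relationships from the dialog, and the application feeds this situation
# model back into the prompt. Illustrative only, not the paper's code.

def evaluate_dialog(utterances):
    """Stand-in for the cognitive model: remember hierarchical
    relations of the assumed form "X reports to Y"."""
    relations = {}
    for u in utterances:
        if " reports to " in u:
            subordinate, superior = u.split(" reports to ", 1)
            relations[subordinate.strip()] = superior.strip()
    return relations

def augment_prompt(base_prompt, relations):
    """Prompt augmentation: append the remembered hierarchy as facts."""
    if not relations:
        return base_prompt
    facts = "; ".join(f"{s} reports to {m}" for s, m in relations.items())
    return f"{base_prompt}\nKnown hierarchy so far: {facts}."

# Example dialog history and the augmented prompt sent to the LLM.
history = ["Alice reports to Bob", "Carol reports to Bob"]
prompt = augment_prompt("Continue the dialog with the user.",
                        evaluate_dialog(history))
print(prompt)
```

The point of the sketch is the division of labor: the LLM only generates text, while the cognitive model supplies the persistent situation knowledge that the LLM would otherwise lose or hallucinate, delivered through prompt augmentation.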