Constructing human-robot interaction with a standard cognitive architecture

What am I? - Complementing a robot's task-solving capabilities with a mental model using a cognitive architecture

8th Workshop on Behavior Adaptation and Learning for Assistive Robotics, 2024

One way to improve Human-Robot Interaction (HRI) and increase trust, acceptance, and mutual understanding is to make the behavior of a social robot more comprehensible and understandable for humans. This is particularly important if humans and machines are to work together as partners. To do so, both must share the same basic understanding of the task and the current situation. We created a model within a cognitive architecture connected to the robot. The cognitive model processed relevant conversational data during a dialog with a human to create a mental model of the situation. The robot's dialog contributions were generated with a Large Language Model (LLM) from OpenAI using suitable prompts. An ACT-R model evaluated the data received by the robot according to predefined criteria (in our example application, hierarchical relationships were established and remembered) and provided feedback to the LLM via the application for prompt augmentation, with the purpose of adapting or fine-tuning the request. Initial tests indicated that this approach may have advantages for dialogic tasks and can compensate for weaknesses in terms of a deeper understanding, or "blind spots", on the part of the LLM.
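The feedback loop described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the ACT-R model and the LLM are replaced by stand-ins, and all class and function names are hypothetical.

```python
# Illustrative sketch of the prompt-augmentation loop (all names hypothetical):
# a stand-in for the ACT-R model remembers hierarchical relations extracted
# from the dialog and feeds hints back into the next LLM request.

class HierarchyModel:
    """Stand-in for the cognitive model: remembers 'X reports to Y' relations."""

    def __init__(self):
        self.superior = {}  # person -> their superior

    def observe(self, person, superior):
        # Called when a hierarchical relation is extracted from the dialog.
        self.superior[person] = superior

    def feedback(self, person):
        """Return a prompt-augmentation hint, or None if nothing is known."""
        boss = self.superior.get(person)
        if boss is None:
            return None
        return f"Remember: {person} reports to {boss}."


def augmented_prompt(base_prompt, hint):
    """Prepend the cognitive model's feedback to the next LLM request."""
    return base_prompt if hint is None else f"{hint}\n{base_prompt}"


model = HierarchyModel()
model.observe("Alice", "Dana")  # fact extracted from an earlier utterance
prompt = augmented_prompt("Who should approve Alice's request?",
                          model.feedback("Alice"))
```

In the paper's setup, `prompt` would then be sent to the OpenAI API in place of the raw user request; here that call is omitted so the sketch stays self-contained.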

How to use a cognitive architecture for a dynamic person model with a social robot in human collaboration

Workshop Robots for Humans 2024, 2024

The use of cognitive architectures is promising for achieving more human-like reactions and behavior in social robots. For example, ACT-R can be used to create a dynamic cognitive person model of the robot's human cooperation partner. This work describes a proof of concept for a direct and easy-to-implement integration of ACT-R with the humanoid social robot Pepper. An exemplary setup of the system, consisting of the cognitive architecture and the robot application, and the type of connection between ACT-R and the robot are explained. Furthermore, we outline how the cognitive person model of the human cooperation partner in ACT-R is updated with dynamic data from the real world, using emotion recognition by the robot as an example.
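The update path sketched in this abstract, robot percepts flowing into a dynamic person model, can be illustrated with a small stand-in. All names here are hypothetical; ACT-R itself and the Pepper API are deliberately left out so the sketch runs on its own.

```python
# Hypothetical sketch: the robot's emotion recognizer pushes timestamped
# percepts into a person model of the human partner. In the described system
# this state would be mirrored into ACT-R chunks; here it is a plain object.

import time


class PersonModel:
    """Minimal dynamic model of the human partner, keyed by attribute."""

    def __init__(self):
        self.state = {}  # attribute -> (value, timestamp)

    def update(self, attribute, value, timestamp=None):
        # Overwrite with the most recent observation.
        self.state[attribute] = (value, timestamp or time.time())

    def current(self, attribute):
        entry = self.state.get(attribute)
        return entry[0] if entry else None


person = PersonModel()
person.update("emotion", "happy")  # e.g. output of the robot's face analysis
```

Keeping only the most recent value per attribute is the simplest possible policy; a fuller model might decay old observations or keep a history for the cognitive architecture to reason over.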

A Cognitive and Affective Architecture for Social Human-Robot Interaction

Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction Extended Abstracts - HRI'15 Extended Abstracts, 2015

Robots are finding new applications in our daily life, where they interact ever more closely with their human users. Despite a long history of research, existing cognitive architectures are too generic and hence not tailored enough to meet the specific needs of social HRI. In particular, interaction-oriented architectures must handle emotions, language, social norms, etc. In this paper, we present an overview of a Cognitive and Affective Interaction-Oriented Architecture for social human-robot interactions, called CAIO. This architecture is in the line of BDI (Belief, Desire, Intention) architectures, which stem from Bratman's philosophy of action. CAIO integrates complex emotions and planning techniques. It aims to contribute to cognitive architectures for HRI by enabling the robot to reason about the mental states (including emotions) of its interlocutors, and to act physically, emotionally, and verbally.

Social Human-Robot Interaction: A New Cognitive and Affective Interaction-Oriented Architecture

Lecture Notes in Computer Science, 2016

In this paper, we present CAIO, a Cognitive and Affective Interaction-Oriented architecture for social human-robot interactions (HRI), allowing robots to reason about mental states (including emotions) and to act physically, emotionally, and verbally. We also present a short scenario and an implementation on a Nao robot. In its related-work discussion, the paper notes that cognitive architectures have long been a subject of research, that good reviews exist (see for example [30, 11]), and that they mostly fall into three categories, biologically inspired, philosophically inspired, and Artificial Intelligence architectures, illustrated with some of the major and well-known architectures.

Toward a Cognitive Approach to Human-Robot Dialogue

A theory of language sufficient for building conversationally adequate human-robot dialogue systems must account for the communicative act as a whole, from the inferential mechanism of intersubjective joint attention-sharing up through the conceptualization processes that respond to those inferences. However, practitioners of AI have in the past tended to adopt linguistic theories that either emphasize or tacitly assume the modularity of linguistic mental processes, that is, their isolation from the pressures and influences of other cognitive processes. These assumptions have precluded satisfactory modeling of human language use. An adequate theory of language will account naturally and holistically (without ad hoc computational machinery) for discourse structure, referential flexibility, lexical non-compositionality, deixis, pragmatic effects, gesture, and intonation. This paper argues that certain theories in the field of cognitive linguistics already exhibit these desiderata, and briefly describes work to implement one.

Social robot architecture: A framework for explicit social interaction

2005

This paper details the Social Robot Architecture, a framework for explicit human-robot and robot-robot social interaction. The core mechanisms for realizing a robust robot control architecture, involving a synthesis of reactive, deliberative, and social reasoning mechanisms, are presented and discussed. In addition, the Virtual Robotic Workbench, which demonstrates the coherent integration of both physical and virtual social robots in the human social space, is briefly described.

A cognitive robotics approach to comprehending human language and behaviors

Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction - HRI '07, 2007

The ADAPT project is a collaboration of researchers in linguistics, robotics and artificial intelligence at three universities. We are building a complete robotic cognitive architecture for a mobile robot designed to interact with humans in a range of environments, and which uses natural language and models human behavior. This paper concentrates on the HRI aspects of ADAPT, and especially on how ADAPT models and interacts with humans.

Toward a Context-Aware Human–Robot Interaction Framework Based on Cognitive Development

IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2019

The purpose of this paper was to understand how an agent's performance is affected when interaction workflows are incorporated into its information model and decision-making process. Our expectation was that this incorporation could reduce errors and faults in the agent's operation, improving its interaction performance. We based this expectation on the existing challenges in designing and implementing artificial social agents, where an approach based on predefined user scenarios and action scripts is insufficient to account for uncertainty in perception or unclear expectations from the user. Therefore, we developed a framework that captures the expected behavior of the agent in descriptive scenarios, translates these into the agent's information model, and uses the resulting representation in probabilistic planning and decision making to control the interaction. Our results indicated an improvement in terms of specificity while maintaining precision and recall, suggesting that the hypothesis proposed in our approach is plausible. We believe the presented framework will contribute to the field of cognitive robotics, e.g., by improving the usability of artificial social companions, thus overcoming the limitations imposed by approaches that use predefined static models of an agent's behavior, which result in unnatural interaction.
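The probabilistic decision step this abstract alludes to, choosing an interaction action despite uncertain perception, can be illustrated by a one-step expected-utility rule. This is a generic sketch under assumed numbers, not the paper's planner; the states, actions, and utilities below are invented for illustration.

```python
# Hedged sketch: pick the interaction action with the highest expected
# utility over a belief distribution on the user's (unobserved) state.

def best_action(belief, utility):
    """belief: {state: probability}; utility: {(action, state): value}."""
    actions = {a for (a, _) in utility}
    def expected(action):
        return sum(p * utility.get((action, s), 0.0)
                   for s, p in belief.items())
    return max(actions, key=expected)


# Illustrative numbers: perception suggests the user is probably confused.
belief = {"user_confused": 0.7, "user_ok": 0.3}
utility = {("clarify", "user_confused"): 1.0, ("clarify", "user_ok"): -0.2,
           ("proceed", "user_confused"): -1.0, ("proceed", "user_ok"): 1.0}
# E[clarify] = 0.7*1.0 + 0.3*(-0.2) = 0.64;  E[proceed] = -0.4
```

A scenario-driven framework like the one described would generate the belief and utility structures from its information model rather than hard-coding them, and would plan over sequences of such decisions rather than a single step.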

Artificial cognition for social human–robot interaction: An implementation

Artificial Intelligence, 2017

Human-Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency, yet socially acceptable control strategies; natural and multi-modal communication, which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human-robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human-robot interaction. Supported by experimental results, we eventually show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human-robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.

Some essential skills and their combination in an architecture for a cognitive and interactive robot

ArXiv, 2016

The topic of joint action has been deeply studied in the context of Human-Human interaction in order to understand how humans cooperate. Creating autonomous robots that collaborate with humans is a complex problem, and it is relevant to apply what has been learned from Human-Human interaction. The question is which skills to implement and how to integrate them in order to build a cognitive architecture that allows a robot to collaborate efficiently and naturally with humans. In this paper, we first list a set of skills that we consider essential for joint action, then analyze the problem from the robot's point of view and discuss how these skills can be instantiated in human-robot scenarios. Finally, we open the discussion on how to integrate such skills into a cognitive architecture for human-robot collaborative problem solving and task achievement.