From Communicative Strategies to Cognitive Modelling
Related papers
Toward a Cognitive Approach to Human-Robot Dialogue
A theory of language sufficient for building conversationally adequate human-robot dialogue systems must account for the communicative act as a whole, from the inferential mechanism of intersubjective joint attention-sharing up through the conceptualization processes that respond to those inferences. However, practitioners of AI have in the past tended to adopt linguistic theories that either emphasize or tacitly assume the modularity of linguistic mental processes, that is, their isolation from the pressures and influences of other cognitive processes. These assumptions have precluded satisfactory modeling of human language use. An adequate theory of language will account naturally and holistically (without ad hoc computational machinery) for discourse structure, referential flexibility, lexical non-compositionality, deixis, pragmatic effects, gesture, and intonation. This paper argues that certain theories in the field of cognitive linguistics already exhibit these desiderata, and briefly describes work to implement one.
Constructing human-robot interaction with standard cognitive architecture
2019
This paper discusses how to extend cognitive models with an explicit interaction model. The work is based on the Standard Model of Cognitive Architecture, which is extended with an explicit model for (spoken) interactions following the Constructive Dialogue Modelling (CDM) approach. The goal is to study how to integrate a cognitively appropriate framework into an architecture that allows smooth communication in human-robot interactions, and the starting point is to model the construction of a shared understanding of the dialogue context and the partner's intentions. Implementation of conversational interaction is considered important in the context of social robotics, which aims to understand and respond to the user's needs and affective state. The paper describes the integration of the architectures but not experimental work towards this goal.
A cognitive robotics approach to comprehending human language and behaviors
Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction (HRI '07), 2007
The ADAPT project is a collaboration of researchers in linguistics, robotics and artificial intelligence at three universities. We are building a complete robotic cognitive architecture for a mobile robot designed to interact with humans in a range of environments, one that uses natural language and models human behavior. This paper concentrates on the HRI aspects of ADAPT, and especially on how ADAPT models and interacts with humans.
8th Workshop on Behavior Adaptation and Learning for Assistive Robotics, 2024
One way to improve Human-Robot Interaction (HRI) and increase trust, acceptance, and mutual understanding is to make the behavior of a social robot more comprehensible and understandable for humans. This is particularly important if humans and machines are to work together as partners. To be able to do this, both must have the same basic understanding of the task and the current situation. We created a model within a cognitive architecture connected to the robot. The cognitive model processed relevant conversational data during a dialog with a human to create a mental model of the situation. The dialog parts of the robot were generated with a Large Language Model (LLM) from OpenAI using suitable prompts. An ACT-R model evaluated the data received by the robot according to predefined criteria (in our example application, hierarchical relationships were established and remembered) and provided feedback to the LLM via the application for prompt augmentation, with the purpose of adapting or fine-tuning the request. Initial tests indicated that this approach may have advantages for dialogic tasks and can compensate for weaknesses in terms of a deeper understanding or "blind spots" on the part of the LLM.
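The feedback loop this abstract describes (a cognitive model evaluates the dialog, then feeds its situation model back into the LLM prompt) can be sketched roughly as below. This is a minimal illustration under stated assumptions: `call_llm`, `SituationModel`, and the toy relation extraction are hypothetical stand-ins, not the authors' actual ACT-R or OpenAI code.

```python
# Minimal sketch of cognitive-model-in-the-loop prompt augmentation.
# All names here (call_llm, SituationModel) are hypothetical stand-ins.

def call_llm(prompt: str) -> str:
    """Stand-in for a request to a Large Language Model via some API."""
    raise NotImplementedError("replace with a real LLM call")

class SituationModel:
    """Toy stand-in for the ACT-R model: remembers hierarchical relations
    mentioned in the dialog and summarizes them for prompt augmentation."""
    def __init__(self):
        self.parent_of: dict[str, str] = {}  # child -> parent relations

    def observe(self, utterance: str) -> None:
        # Extremely simplified extraction: utterances like "A is part of B".
        words = utterance.split()
        if "part" in words and "of" in words:
            child, parent = words[0], words[-1]
            self.parent_of[child] = parent

    def feedback(self) -> str:
        # Feedback string used to augment the next prompt.
        facts = "; ".join(f"{c} is part of {p}"
                          for c, p in self.parent_of.items())
        return f"Known hierarchy so far: {facts or 'none'}."

model = SituationModel()
for user_turn in ["wheel is part of car", "what does the car consist of"]:
    model.observe(user_turn)
    augmented = f"{model.feedback()}\nUser: {user_turn}\nRobot:"
    # reply = call_llm(augmented)  # LLM now answers with the model's context
```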
Natural Language For Human Robot Interaction
2015
Natural Language Understanding (NLU) was one of the main original goals of artificial intelligence and cognitive science. This has proven to be extremely challenging and was nearly abandoned for decades. We describe an implemented system that supports full NLU for tasks of moderate complexity. The natural language interface is based on Embodied Construction Grammar and simulation semantics. The system described here supports human dialog with an agent controlling a simulated robot, but is flexible with respect to both input language and output task.
Human-robot interaction through spoken language dialogue
2000
The development of robots that are able to accept instructions, via a friendly interface, in terms of concepts that are familiar to a human user remains a challenge. It is argued that designing and building such intelligent robots can be seen as the problem of integrating four main dimensions: human-robot communication, sensory-motor skills and perception, decision-making capabilities, and learning.
Emergent verbal behaviour in human-robot interaction
2nd International Conference on Cognitive Infocommunications (CogInfoCom), 2011
The paper describes emergent verbal behaviour that arises when speech components are added to a robotics simulator. In the existing simulator the robot performs its activities silently. When speech synthesis is added, the first level of emergent verbal behaviour is that the robot produces spoken monologues giving a stream of simple explanations of its movements. When speech recognition is added, human-robot interaction can be initiated by the human, using voice commands to direct the robot's movements. In addition, cooperative verbal behaviour emerges when the robot modifies its own verbal behaviour in response to being asked by the human to talk less or more. The robotics framework supports different behavioural paradigms, including finite state machines, reinforcement learning and fuzzy decisions. By combining finite state machines with the speech interface, spoken dialogue systems based on state transitions can be implemented. These dialogue systems exemplify emergent verbal behaviour that is robot-initiated: the robot asks appropriate questions in order to achieve the dialogue goal. The paper mentions current work on using Wikipedia as a knowledge base for open-domain dialogues, and suggests promising ideas for topic-tracking and robot-initiated conversational topics.
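A state-transition dialogue system of the kind this abstract builds on its finite-state-machine paradigm can be sketched minimally as below. The states, prompts, and keyword matching are illustrative assumptions, not the simulator's actual implementation.

```python
# Minimal sketch of a spoken dialogue system based on state transitions.
# States, prompts, and keywords are illustrative, not the paper's code.

DIALOGUE = {
    "ask_destination": ("Where should I go?",
                        {"kitchen": "confirm", "hall": "confirm"}),
    "confirm":         ("Shall I start moving?",
                        {"yes": "done", "no": "ask_destination"}),
}

def run_dialogue(hear, say):
    """hear() returns recognised speech; say(text) drives speech synthesis."""
    state = "ask_destination"
    while state != "done":
        prompt, transitions = DIALOGUE[state]
        say(prompt)                    # robot-initiated question
        reply = hear().lower()
        # Stay in the same state until a recognised keyword is heard.
        state = next((nxt for kw, nxt in transitions.items()
                      if kw in reply), state)
    say("Starting to move.")

# Console stand-ins for the recogniser and synthesiser:
# run_dialogue(hear=input, say=print)
```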
Communication in Human-Robot Interaction
Current Robotics Reports
Purpose of Review: To present the multi-faceted aspects of communication between robots and humans (HRI), showing that it is not limited to language-based interaction but includes all aspects that are relevant to communication among physical beings, exploiting all the available sensor channels. Recent Findings: For specific purposes, machine learning algorithms can be exploited when data sets and appropriate algorithms are available. Summary: Together with linguistic aspects, physical aspects play an important role in HRI and make the difference with respect to the more limited field of human-computer interaction (HCI). A review of the recent literature on the exploitation of different interaction channels is presented. The interpretation of signals and the production of appropriate communication actions require consideration of psychological, sociological, and practical aspects, which may affect performance. Communication is just one of the functionalities of an interactive robot and, like all the others, will need to be benchmarked to support the possibility for social robots to reach a real market.
Mental imagery for a conversational robot
IEEE Transactions on Systems, Man, and Cybernetics, Part B, Vol. 34(3), pp. 1374-1383, 2004
To build robots that engage in fluid face-to-face spoken conversations with people, robots must have ways to connect what they say to what they see. A critical aspect of how language connects to vision is that language encodes points of view. The meaning of my left and your left differs due to an implied shift of visual perspective. The connection of language to vision also relies on object permanence. We can talk about things that are not in view. For a robot to participate in situated spoken dialog, it must have the capacity to imagine shifts of perspective, and it must maintain object permanence. We present a set of representations and procedures that enable a robotic manipulator to maintain a "mental model" of its physical environment by coupling active vision to physical simulation. Within this model, "imagined" views can be generated from arbitrary perspectives, providing the basis for situated language comprehension and production. An initial application of mental imagery for spatial language understanding for an interactive robot is described.
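The perspective-shift idea the abstract describes (the same object is on "my left" but on "your right") reduces to expressing a world-frame position in a viewer's egocentric frame. The sketch below shows the computation under stated assumptions: the coordinates, the `side_of` helper, and the convention that the viewer faces +x with +y as its left are all illustrative, not the paper's implementation.

```python
# Sketch of a perspective shift: the same object is "left" or "right"
# depending on whose viewpoint is adopted. Frames and coordinates are
# illustrative assumptions, not taken from the paper.

import numpy as np

def to_viewer_frame(point_world, viewer_pos, viewer_heading):
    """Express a world-frame 2D point in a viewer's egocentric frame.
    viewer_heading is the direction the viewer faces, in radians."""
    c, s = np.cos(-viewer_heading), np.sin(-viewer_heading)
    rot = np.array([[c, -s], [s, c]])   # rotate world into viewer frame
    return rot @ (np.asarray(point_world) - np.asarray(viewer_pos))

def side_of(point_world, viewer_pos, viewer_heading):
    x, y = to_viewer_frame(point_world, viewer_pos, viewer_heading)
    return "left" if y > 0 else "right"  # +y = viewer's left (x forward)

cup = (1.0, 0.5)                              # object in the workspace
robot, human = (0.0, 0.0), (2.0, 0.0)         # facing each other along x
print("robot's", side_of(cup, robot, 0.0))    # -> robot's left
print("human's", side_of(cup, human, np.pi))  # -> human's right
```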