The study of sequential and hierarchical organisation of behaviour via artificial mechanisms of action selection

A robotic architecture for action selection and behavioral organization inspired by human cognition

Abstract—Robotic agents that interact with humans and perform complex, everyday tasks in natural environments will require a system to autonomously organize their behavior. Current systems for robotic behavioral organization typically abstract from the low-level sensory-motor embodiment of the robot, leading to a gap between the level at which a sequence of actions is planned and the levels of perception and motor control. This gap is a major bottleneck for the autonomy of systems in complex, dynamic environments.

Simulating Intelligent Behavior Requires a Complex Approach

2006

This paper will discuss three arguments for a multilevel heterogeneous approach to hard problems in Artificial Intelligence (AI). By a heterogeneous approach, we mean the use of multiple methodologies (symbolic, sub-symbolic, subsumption) to solve AI problems. First, if one accepts the postulate that cognitive psychological principles can be beneficial to AI, then one must look at the heterogeneous nature of the human cognitive system. The brain is not homogeneous; it is a collection of different cellular organizations performing different functions. Second, there are several examples from the cognitive systems literature showing that hybrid approaches provide effective solutions to complex problems; in some cases they have outperformed any single approach. Finally, cognition is so complex, so full of subtle nuance and interwoven interdependencies, that a multilevel heterogeneous approach is the only one likely to prove successful in the long term. In other words, the complexity of perceiving and understanding the environment in a human manner necessitates a multilevel approach.
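To make the idea of a multilevel heterogeneous controller concrete, here is a minimal illustrative sketch (not taken from the paper; all class and method names are hypothetical) in which a reactive, sub-symbolic layer handles time-critical stimuli and a symbolic planner handles everything else:

```python
# Illustrative sketch of a heterogeneous controller: a reactive (sub-symbolic)
# layer preempts a symbolic planner for time-critical stimuli.
# All names are hypothetical and do not come from the paper.

class ReactiveLayer:
    """Fast, hard-wired stimulus-response rules (subsumption-style)."""
    def propose(self, percept):
        if percept.get("obstacle_distance", float("inf")) < 0.3:
            return "stop_and_turn"          # reflex: avoid collision immediately
        return None                         # nothing urgent to do

class SymbolicPlanner:
    """Slow, deliberative layer that reasons over goals when no reflex fires."""
    def propose(self, percept, goal):
        # Placeholder for a real planner (e.g. STRIPS-style search).
        return f"plan_step_towards({goal})"

class HeterogeneousController:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.planner = SymbolicPlanner()

    def select_action(self, percept, goal):
        # The lower layer wins whenever it has something urgent to say.
        reflex = self.reactive.propose(percept)
        return reflex if reflex is not None else self.planner.propose(percept, goal)

controller = HeterogeneousController()
print(controller.select_action({"obstacle_distance": 0.1}, "charging_station"))  # stop_and_turn
print(controller.select_action({"obstacle_distance": 2.0}, "charging_station"))  # plan_step_towards(...)
```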

This is a slightly revised version of a paper that will appear in Behavior Based AI, Cognitive Processes, and Emergent Behaviors in Autonomous Agents.

Behavior based AI has questioned the need for modeling intelligent agency using generalized cognitive modules for perception and behavior generation. Behavior based AI has demonstrated successful interactions in unpredictable environments in the mobile robot domain. This has created a gulf between "traditional" approaches to modeling intelligent agency and behavior based approaches. We present an architecture for intelligent autonomous agents which we call GLAIR (Grounded Layered Architecture with Integrated Reasoning). GLAIR is a general multi-level architecture for autonomous cognitive agents with integrated sensory and motor capabilities. GLAIR offers an "unconscious" layer for modeling tasks that exhibit a close affinity between sensing and acting, i.e., behavior based AI modules, and a "conscious" layer for modeling tasks that exhibit delays between sensing and acting. GLAIR provides learning mechanisms that allow autonomous agents to learn emergent behaviors and add them to their repertoire of behaviors. In this paper we will describe the principles of GLAIR and systems we have developed that demonstrate how GLAIR based agents acquire and exhibit a repertoire of behaviors at different cognitive levels.
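The following sketch illustrates one way the conscious/unconscious split and the learning mechanism described above could be realized; it is a toy rendering under assumed names, not the actual GLAIR implementation. Responses produced by deliberation are cached so that the same situation is later handled by a fast sense-act loop:

```python
# Illustrative sketch (hypothetical names, not the actual GLAIR implementation):
# a "conscious" deliberative layer handles novel situations, and successful
# responses are cached into the "unconscious" layer so that the same situation
# is later handled by a tight sense-act loop without deliberation.

class GlairLikeAgent:
    def __init__(self):
        self.unconscious = {}            # situation -> learned behavior (fast path)

    def deliberate(self, situation):
        # Stand-in for conscious reasoning (slow, may span several sensing cycles).
        return f"reasoned_response_to({situation})"

    def act(self, situation):
        if situation in self.unconscious:
            return self.unconscious[situation]        # reflex-like, no reasoning
        behavior = self.deliberate(situation)
        self.unconscious[situation] = behavior        # added to the behavioral repertoire
        return behavior

agent = GlairLikeAgent()
agent.act("blocked_corridor")   # first encounter: handled by deliberation
agent.act("blocked_corridor")   # second encounter: cached, handled "unconsciously"
```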

AFRANCI: multi-layer architecture for cognitive agents

2012

The development of autonomous and intelligent agents, capable of accomplishing goals and surviving in complex, dynamic and unpredictable environments, is a highly complex task. In this context, intelligence is regarded as an adaptive and fast behaviour that agents use to survive in their environments.

Cognitive architectures and autonomy: Commentary and Response

Journal of Artificial General Intelligence

This paper provides a very useful and promising analysis and comparison of current architectures of autonomous intelligent systems acting in real time and in specific contexts, with all their constraints. The chosen issue of Cognitive Architectures and Autonomy is a real challenge for current AI projects and future research. I appreciate and endorse not only that challenge but also many specific choices and claims; in particular: (i) that "autonomy" is a key concept for general intelligent systems; (ii) that "a core issue in cognitive architecture is the integration of cognitive processes ...."; (iii) the analysis of features and capabilities missing in current architectures; (iv) that an appropriate benchmark is still lacking; (v) the stress on "real time", on learning, and on "resource management" (though goals and motivation would deserve a more central role); and especially (vi) the nice characterization of "some key features that a robot's mind must possess" (attention, expectation, models, ...).

An Agent Architecture Potentially Capable of Robust Autonomy

2001

Robust autonomy on the part of software agents requires, at least in part, the ability to deal intelligently with novel and unexpected situations. According to global workspace theory, dealing with such situations is one of the primary functions of consciousness in humans. Below we briefly describe two software agents that implement this psychological theory, and discuss their resulting potential for robust autonomy.
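As a rough illustration of the global-workspace mechanism referred to above, here is a minimal cycle in which specialist processes compete for access and the winning content is broadcast back to all of them. The names are hypothetical; this is not the architecture of the agents described in the paper:

```python
# Minimal illustrative sketch of a global-workspace-style cycle: specialist
# processes post candidate contents, the most salient one wins the competition,
# and the winner is broadcast back to every specialist.

import random

class Specialist:
    def __init__(self, name):
        self.name = name
        self.received = []

    def propose(self, percept):
        # Each specialist offers some content together with a salience score.
        return {"source": self.name, "content": f"{self.name}:{percept}",
                "salience": random.random()}

    def receive_broadcast(self, content):
        self.received.append(content)     # specialists adapt to the broadcast

def workspace_cycle(specialists, percept):
    proposals = [s.propose(percept) for s in specialists]
    winner = max(proposals, key=lambda p: p["salience"])   # competition for access
    for s in specialists:
        s.receive_broadcast(winner["content"])              # global broadcast
    return winner

specialists = [Specialist("vision"), Specialist("memory"), Specialist("planning")]
print(workspace_cycle(specialists, "unexpected_obstacle"))
```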

Natural Intelligence for Autonomous Agents

2000

The paper presents a general architecture for behaviour based control systems for autonomous agents. A number of architectural principles are proposed which make it possible to combine reactive control with learning and problem solving in a coherent way. In particular, I investigate the interaction between reinforcement learning, internal world models and dynamic action selection as well as a number
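One common way to combine the three ingredients mentioned above (reinforcement learning, an internal world model, and dynamic action selection) is a Dyna-Q-style loop, sketched below. This is an illustrative assumption, not the paper's actual algorithm; all names and constants are hypothetical:

```python
# Illustrative Dyna-Q-style sketch: real experience updates both the value table
# and the internal world model, and the model then generates extra simulated
# updates; action selection is epsilon-greedy over the learned values.

import random
from collections import defaultdict

ACTIONS = ["forward", "left", "right"]
ALPHA, GAMMA, EPSILON, PLANNING_STEPS = 0.1, 0.95, 0.1, 5

Q = defaultdict(float)            # (state, action) -> value estimate
model = {}                        # (state, action) -> (reward, next_state)

def select_action(state):
    # Dynamic action selection: mostly greedy, occasionally exploratory.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, action, reward, next_state):
    q_update(state, action, reward, next_state)       # learn from real experience
    model[(state, action)] = (reward, next_state)     # update internal world model
    for _ in range(PLANNING_STEPS):                   # learn from simulated experience
        s, a = random.choice(list(model))
        r, s2 = model[(s, a)]
        q_update(s, a, r, s2)

step("hallway", "forward", -0.1, "hallway_end")       # example experience tuple
print(select_action("hallway"))
```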

Cross-paradigm analysis of autonomous agent architecture

JETAI, 2000

Autonomous agent architectures are design methodologies: collections of knowledge and strategies which are applied to the problem of creating situated intelligence. In this article, we attempt to integrate this knowledge across several architectural traditions. We pay particular attention to features which have tended to be selected under the pressure of extensive use in real-world systems. We conclude that the following strategies provide significant assistance in the design of autonomous intelligent control:

Computational and Robotic Models of the Hierarchical Organization of Behavior: An Overview

Springer eBooks, 2013

The hierarchical organisation of behaviour is a fundamental means through which robots and organisms can acquire and produce sophisticated and flexible behaviours that allow them to solve multiple tasks in multiple conditions. Recently, research on this topic has been receiving increasing attention. On the one hand, machine learning and robotics are recognising the fundamental importance of the hierarchical organisation of behaviour for building robots that scale up to solve complex tasks, possibly in a cumulative fashion. On the other hand, research in psychology and neuroscience is finding increasing evidence that modularity and hierarchy are pivotal organisation principles of behaviour and of the brain. This book reviews the state of the art in computational and robotic models of the hierarchical organisation of behaviour. Each contribution reviews the main works of the authors on this subject, the open challenges and promising research directions. Together, the contributions give a good coverage of the most important models, findings, and challenges of the field. This introductory chapter presents the general aims and scope of the book and briefly summarises the contents of each chapter.
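The core principle of hierarchical organisation of behaviour can be illustrated with a toy sketch in which high-level behaviours are composed from reusable lower-level sub-behaviours that bottom out in primitive actions. The names below are hypothetical and are not drawn from any of the book's models:

```python
# Illustrative sketch of hierarchical behaviour composition: a composite
# behaviour executes its sub-behaviours in order, and sub-behaviours can be
# reused across different high-level tasks.

class Primitive:
    def __init__(self, name):
        self.name = name
    def run(self):
        return [self.name]                  # a primitive emits one motor command

class Composite:
    def __init__(self, name, children):
        self.name = name
        self.children = children            # sub-behaviours (primitive or composite)
    def run(self):
        commands = []
        for child in self.children:         # execute sub-behaviours in order
            commands.extend(child.run())
        return commands

grasp = Composite("grasp", [Primitive("open_gripper"), Primitive("reach"), Primitive("close_gripper")])
fetch = Composite("fetch", [Primitive("navigate_to_object"), grasp, Primitive("navigate_to_user")])
print(fetch.run())   # the "grasp" sub-behaviour could equally be reused by a "stack" or "handover" task
```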