A role for consciousness in action selection

Consciousness, Action Selection, Meaning and Phenomenic Anticipation

2012

Phenomenal states are generally considered the ultimate sources of intrinsic motivation for autonomous biological agents. In this article, we will address the issue of the necessity of exploiting these states for the design and implementation of robust goal-directed artificial systems. We will provide an analysis of consciousness in terms of a precise definition of how an agent "understands" the informational flows entering it and its own action possibilities.

On the Necessity of Consciousness for Sophisticated Human Action

Frontiers in Psychology, 2018

In this essay, we aim to counter and qualify the epiphenomenalist challenge proposed in this special issue on the grounds of empirical and theoretical arguments. The current body of scientific knowledge strongly indicates that conscious thought is a necessary condition for many human behaviors, and therefore, consciousness qualifies as a cause of those behaviors. We review illustrative experimental evidence for the causal power of conscious thought while also acknowledging its natural limitations. We argue that it is implausible that the metabolic costs inherent to conscious processes would have evolved in humans without any adaptive benefits. Moreover, we discuss the relevance of conscious thought to the issue of freedom. Many accounts hold conscious thought as necessary and conducive to naturalistic conceptions of personal freedom. Apart from these theories, we show that the conscious perception of freedom and the belief in free will provide sources of interesting findings, beneficial behavioral effects, and new avenues for research. We close by proposing our own challenge via outlining the gaps that have yet to be filled to establish hard evidence of an epiphenomenal model of consciousness. To be sure, we appreciate the epiphenomenalist challenge as it promotes critical thinking and inspires rigorous research. However, we see no merit in downplaying the causal significance of consciousness a priori. Instead, we believe it more worthwhile to focus on the complex interplay between conscious and other causal processes.

A Role for Action Selection in Consciousness: An Investigation of a Second-Order Darwinian Mind

CEUR Workshop Proceedings, 2016

We investigate a small-footprint cognitive architecture comprising two reactive planner instances. The first interacts with the world via sensor and behaviour interfaces. The second monitors the first and dynamically adjusts its plan in accordance with a predefined objective function. We show that this configuration produces a Darwinian mind that is nevertheless aware of its own operation and performance, and able to maintain performance as the environment changes. We identify this architecture as a second-order Darwinian mind and discuss the philosophical implications for the study of consciousness. We use the Instinct Robot World agent-based modelling environment, which in turn uses the Instinct Planner for cognition.
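The two-planner configuration described above can be sketched in miniature: a first-order planner that maps sensed input to behaviour, and a second-order monitor that adjusts the first planner's parameters against a predefined objective. Everything concrete here (the light-seeking behaviour, the threshold parameter, the sliding-window objective) is an illustrative assumption, not the Instinct Planner's actual interface:

```python
import random

class ReactivePlanner:
    """First-order planner: maps sensed state to behaviour via a simple rule.

    The rule threshold is a tunable plan parameter the monitor may adjust.
    """
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def act(self, light_level):
        # Approach the light when it is bright enough, otherwise explore.
        return "approach" if light_level >= self.threshold else "explore"

class SecondOrderMonitor:
    """Second-order planner: observes the first and adjusts its plan
    to track a predefined objective (here, a target approach rate)."""
    def __init__(self, planner, target_approach_rate=0.5, step=0.05):
        self.planner = planner
        self.target = target_approach_rate
        self.step = step
        self.history = []

    def observe(self, action):
        self.history.append(1 if action == "approach" else 0)
        self.history = self.history[-20:]  # sliding window of recent behaviour

    def adjust(self):
        if not self.history:
            return
        rate = sum(self.history) / len(self.history)
        # Lower the threshold if approaching too rarely, raise it if too often.
        if rate < self.target:
            self.planner.threshold = max(0.0, self.planner.threshold - self.step)
        elif rate > self.target:
            self.planner.threshold = min(1.0, self.planner.threshold + self.step)

random.seed(0)
planner = ReactivePlanner()
monitor = SecondOrderMonitor(planner)
for _ in range(200):
    light = random.random()          # environment sample
    action = planner.act(light)      # first-order behaviour
    monitor.observe(action)          # second-order monitoring
    monitor.adjust()                 # dynamic plan adjustment
```

The key structural point is the strict layering: the monitor never touches the world, only the first planner's parameters, which is what makes the arrangement "second-order".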

Towards autonomous artificial agents with an active self: Modeling sense of control in situated action

Cognitive Systems Research, 2022

In this paper we present a computational modeling account of an active self in artificial agents. In particular, we focus on how an agent can be equipped with a sense of control, how that sense arises in autonomous situated action, and how it in turn influences action control. We argue that this requires laying out an embodied cognitive model that combines bottom-up processes (sensorimotor learning and fine-grained adaptation of control) with top-down processes (cognitive processes for strategy selection and decision-making). We present such a conceptual computational architecture based on principles of predictive processing and free energy minimization. Using this general model, we describe how a sense of control can form across the levels of a control hierarchy and how this can support action control in an unpredictable environment. We present an implementation of this model as well as initial evaluations in a simulated task scenario, in which an autonomous agent has to cope with both predictable and unpredictable situations and experiences a corresponding sense of control. We explore different model parameter settings that lead to different ways of combining low-level and high-level action control. The results show the importance of appropriately weighting information in situations where the need for low-level versus high-level action control varies, and they demonstrate how the sense of control can facilitate this.
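One simple way to realise the error-dependent weighting the abstract describes is to map recent low-level prediction error to a blending weight between the two control levels: small errors keep control at the sensorimotor level, large errors shift it to the strategic level. The logistic mapping and all parameter names below are illustrative assumptions, not the paper's actual free-energy formulation:

```python
import math

def precision_weight(prediction_errors, sensitivity=4.0):
    """Map recent low-level prediction error to a weight in [0, 1].

    High error -> low sense of control at the sensorimotor level ->
    shift weight toward high-level (strategic) control, and vice versa.
    The logistic mapping is one simple choice; the paper's formulation
    under free energy minimization is richer.
    """
    mean_err = sum(prediction_errors) / len(prediction_errors)
    return 1.0 / (1.0 + math.exp(-sensitivity * (mean_err - 0.5)))

def blended_command(low_cmd, high_cmd, errors):
    """Blend the two controllers' outputs by the error-derived weight."""
    w_high = precision_weight(errors)
    return (1.0 - w_high) * low_cmd + w_high * high_cmd

# Predictable situation: small errors -> rely on low-level control.
cmd_predictable = blended_command(1.0, -1.0, [0.1, 0.05, 0.1])
# Unpredictable situation: large errors -> defer to high-level control.
cmd_unpredictable = blended_command(1.0, -1.0, [0.9, 0.95, 0.8])
```

Under this sketch, the same pair of controller outputs yields opposite commands depending on how predictable the recent sensory stream has been, which is the "appropriate weighting of information" the evaluations point to.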

The Prospects for Creating Conscious Machines

While there have been discussions of computers reaching human levels of intelligence ([1], [2]), we should not over-simplify the issue of building intelligent or conscious machines. We should also not conflate intelligence and consciousness. Many computational problems may involve trade-offs of intelligence and consciousness. The largest current computer is the 212,992-processor IBM BlueGene (74 terabytes of memory and 596 teraflops peak speed). It could probably simulate 10^13 synapses [3], while humans have about 10^15. There are many unknowns, however, related to wiring diagrams, software, hardware, algorithms, learning, sensory input, and motor-control output. A machine that combines intelligence and consciousness cannot just be an isolated computer. It will need to be a complex system of systems and be capable of learning and understanding real-world situations. The key, however, is emergent behavior development through a variety of algorithmic techniques including genetic algorithms, machine learning, cognitive architectures, and connectionist methods. Humans will not be capable of completely specifying and programming the entire system; learning and emergent behavior [4] will be a stringent requirement for development.
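The quoted figures make the scale gap explicit; a short calculation, using only the numbers already given in the text, shows the simulation falls short by roughly two orders of magnitude (the doubling count is a derived illustration, not a claim from the text):

```python
import math

# Figures quoted in the text above.
simulated_synapses = 10**13   # estimated BlueGene simulation capacity
human_synapses = 10**15       # approximate human synapse count

shortfall = human_synapses / simulated_synapses   # factor of 100
doublings = math.log2(shortfall)                  # ~6.6 capacity doublings
```

A factor of 100 corresponds to roughly seven doublings of capacity, which is why the passage stresses that raw hardware scale is only one of the many unknowns.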

Attention and Consciousness in Intentional Action: Steps Toward Rich Artificial Agency

If artificial agents are to be created such that they occupy space in our social and cultural milieu, then we should expect them to be targets of folk psychological explanation. That is to say, their behavior ought to be explicable in terms of beliefs, desires, obligations, and especially intentions. Herein, we focus on the concept of intentional action, and especially its relationship to consciousness. After outlining some lessons learned from philosophy and psychology that give insight into the structure of intentional action, we find that attention plays a critical role in agency, and indeed, in the production of intentional action. We argue that the insights offered by the literature on agency and intentional action motivate a particular kind of computational cognitive architecture, one that hasn't been well-explicated or computationally fleshed out among the community of AI researchers and computational cognitive scientists who work on cognitive systems. To give a sense of what such a system might look like, we present the ARCADIA attention-driven cognitive system as first steps toward an architecture to support the type of agency that rich human-machine interaction will undoubtedly demand.

A Model of Consciousness and Attention Aimed at Speedy Autonomous Adaptation

Procedia Computer Science, 2014

A model is proposed that can explain the functions of consciousness and attention in short-term adaptation. Associative temporal memory and top-down attention are adopted in the model, in which consciousness and attention are defined as clearly different functions: consciousness is defined mainly as a process of learning as a whole system, and attention as a function that quickly selects resources with priority, based on the information for learning. These functions work by complementing each other via associative temporal memory. As a result, consciousness and attention are explained as fundamental functions that allow an autonomous adaptive system to adapt to an environment efficiently and speedily. Even though the model proposed here has a configuration similar to Global Workspace Theory (GWT), including the Global Neural Workspace (GNW), it differs greatly from GWT in its basic purposes and functions.
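The abstract's notion of attention as a fast, priority-based selection of resources can be caricatured as a capacity-limited top-k selection over scored stimuli. The stimulus names and scores below are hypothetical, and the associative temporal memory that would supply the priorities is not modelled:

```python
import heapq

def allocate_attention(stimuli, capacity=2):
    """Select the highest-priority stimuli up to a fixed capacity.

    'stimuli' maps a stimulus name to a priority score. In the model
    these priorities would come from associative temporal memory and
    the information gathered for learning; here they are given directly.
    """
    return heapq.nlargest(capacity, stimuli, key=stimuli.get)

# Hypothetical scored stimuli for a situated agent.
selected = allocate_attention(
    {"obstacle": 0.9, "wall_colour": 0.2, "target_light": 0.7, "noise": 0.1}
)
```

The fixed `capacity` stands in for the limited resources the model's attention function is said to distribute quickly, while consciousness-as-whole-system-learning would operate on a slower timescale and is not represented here.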

Attention as selection-for-action: a scheme for active perception

1999 Third European Workshop on Advanced Mobile Robots (Eurobot'99). Proceedings (Cat. No.99EX355), 1999

In the second example, we also report two learning experiments in which a robot learns, through reinforcement learning, to pick out the correct focus of attention for a task.
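The robot experiments themselves are not specified in this excerpt, so the following is only a generic reinforcement-learning sketch of learning which focus of attention pays off for a task; the candidate foci, reward probabilities, and epsilon-greedy update rule are all assumptions, not the paper's method:

```python
import random

def learn_attention_focus(foci, reward_fn, episodes=500, epsilon=0.1, alpha=0.2):
    """Epsilon-greedy value learning over candidate attention foci.

    'reward_fn' stands in for task feedback: attending to the right
    part of the scene makes the task succeed more often.
    """
    values = {f: 0.0 for f in foci}
    for _ in range(episodes):
        if random.random() < epsilon:
            focus = random.choice(foci)          # explore
        else:
            focus = max(values, key=values.get)  # exploit best-known focus
        r = reward_fn(focus)
        values[focus] += alpha * (r - values[focus])  # incremental update
    return values

random.seed(1)
# Hypothetical task: attending to the doorway is most often rewarded.
success_prob = {"doorway": 0.8, "floor": 0.3, "ceiling": 0.1}
values = learn_attention_focus(
    ["doorway", "floor", "ceiling"],
    lambda f: 1.0 if random.random() < success_prob[f] else 0.0,
)
best = max(values, key=values.get)
```

After training, the learned values rank the foci by how useful attending to each one was for the (hypothetical) task, which is the selection-for-action idea in miniature.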