The Technology of Thinking/Doing: The Autonomous Agent Acts
Related papers
Artificial intelligence and the future
Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2003
We consider some of the ideas influencing current artificial-intelligence research and outline an alternative conceptual framework that gives priority to social relationships as a key component and constructor of intelligent behaviour. The framework starts from Weizenbaum's observation that intelligence manifests itself only relative to specific social and cultural contexts. This is in contrast to a prevailing view, which sees intelligence as an abstract capability of the individual mind based on a mechanism for rational thought. The new approach is not based on the conventional idea that the mind is a rational processor of symbolic information, nor does it require the idea that thought is a kind of abstract problem solving with a semantics that is independent of its embodiment. Instead, priority is given to affective and social responses that serve to engage the whole agent in the life of the communities in which it participates. Intelligence is seen not as the deployment of capabilities for problem solving, but as constructed by the continual, ever-changing and unfinished engagement with the social group within the environment. The construction of the identity of the intelligent agent involves the appropriation or 'taking up' of positions within the conversations and narratives in which it participates. Thus, the new approach argues that the intelligent agent is shaped by the meaning ascribed to experience, by its situation in the social matrix, and by practices of self and of relationship into which intelligent life is recruited. This has implications for the technology of the future, as, for example, classic artificial intelligence models such as goal-directed problem solving are seen as special cases of narrative practices instead of as ontological foundations.
Action and Agency in Artificial Intelligence: A Philosophical Critique
Justin Nnaemeka Onyeukaziri, 2023
The objective of this work is to explore the notions of "action" and "agency" in artificial intelligence (AI). It employs a metaphysical notion of action and agency as an epistemological tool in the critique of the notion of "action" and "agency" in artificial intelligence. Hence, both a metaphysical and a cognitive analysis are employed in the investigation of the quiddity and nature of action and agency per se, and of how they are, by extension, employed in the language and science of artificial intelligence. The advent of the science of artificial intelligence and cognitive science, and the technological applications of artificial intelligence in the production of agents such as driverless cars and expert systems, have raised the question of moral, ethical and/or legal responsibility in AI agents. This has re-emphasized the importance of the philosophical discourse on the notions of action and agency, which in contemporary intellectual discourse are now perceived to be phenomena within the epistemic competence of the natural sciences. This paper argues that AI systems do not and cannot possess free agency and autonomy, and thus cannot be morally and ethically responsible. Hence, it recommends a socio-political response to the question of responsibility in AI. It is then the duty of individual nations, or of the global community, to define and enact policies on who shoulders the responsibility for actions executed by AIs.
General Artificial Intelligence in Self-developing Reflective-Active Environments
Lecture Notes in Networks and Systems, 2022
Abstract. Purpose: to explain and identify the features of General Artificial Intelligence (AGI) from the standpoint of the stages of scientific rationality (classical, non-classical, post-non-classical), and to show its difference from traditional Artificial Intelligence (AI). The latter is currently implemented mainly on digital computers and implements the functions of recognition, forecasting and preparation of answers to simple questions. New conditions force us to consider AGI from the standpoint of functionalism, as a man-machine system purposefully functioning in a self-developing poly-subject (reflexive-active) environment.
Design/Methodology/Approach: the AI development paradigm should consider aspects of its immersion in the social and humanitarian environment and the innovative atmosphere. The new paradigm of AI development should reflect the unformalized cognitive dynamics of AI models and support the self-development of AI systems under pressure from the external environment. The methodology of creating AGI is based on the ideas of the subject-oriented and ontological approach, functionalism, the phenomenology of subjective reality, convergent cognitive architectures, as well as the methods of creating a self-developing poly-subject (reflexive-active) environment. AGI becomes a hybrid, purposefully integrating the capabilities of a machine and a person.
Findings: the general properties and the value-semantic and intentional-volitional operational structures of the phenomenon of subjective reality do not lend themselves to direct formalized and algorithmic representation in discrete computer systems of von Neumann architecture. The study of consciousness in the context of subjective reality made it possible to formulate the main systemic, structural, functional and operational characteristics of human cognitive activity, which allows a new approach to the modelling of cognitive architectures that meet the tasks of building AGI. The characteristics of subjective reality cannot be fully represented in the paradigm of physicalism; that is, they cannot be represented only with the help of physical devices. The chapter proposes a non-reductionist way of taking these characteristics into account by considering the problem of consciousness in an ontological and epistemological context, which allows the processes of consciousness and the cognitive activity of a person and a group of people to be represented indirectly and inversely.
Originality/Value: state-of-the-art cognitive architectures and traditional AI approaches practically ignore the problems of AGI. They are more focused on the formalized construction of a thinking model, identifying physical blocks and processes of mental activity. At the same time, for AGI, ontological, subjective and hybrid reality issues are of the greatest importance, especially in explanations of the activity of consciousness, the unconscious, and causeless processes, which can act purposefully in conditions of goal uncertainty. AGI must help to describe the phenomena of subjective reality that cause physical changes, and explain the capacity for goal setting, free will, and the self-management of the physical actions of an individual in a team.
Research/Practical/Social/Environmental implications: the chapter gives rise to a new type of control, which differs from traditional control in digital reality. The chapter's results made it possible to uniquely determine the optimal measure of centralization and autonomy of control loops that can ensure the preservation and strengthening of the integrity of a complex poly-subject system functioning in a reflexive-active environment, the interpretation of which does not fit into the narrow framework of digital and algorithmic reality or of traditional AI.
Research limitations: the AGI approach based on exceptional methods of constructing subjective reality also has its limitations. For example, the approach we propose to explain the connection between the human brain, consciousness, thought processes and the environment does not yet allow us to explain the information and cognitive processes generated by the effect of subjective-reality nonlocality, which arises, e.g., at the atomic level of the human brain and should be considered when studying cognitive processes.
Seeing clearly and moving forward [artificial intelligence]
IEEE Intelligent Systems, 2000
This survey highlights some important trends in AI research and development, focusing on perceiving and affecting the real world. We will primarily address robotics, but we certainly don't intend to imply that this is the only important area of AI research and development in the 21st century. We see tremendous importance in the continuing work in such areas as data mining, learning, knowledge representation, planning and scheduling, natural language understanding, expert systems, and deductive and inductive reasoning. Many of these areas will contribute substantially to the computers that see, hear, speak, and move, and that will be required for advances in ubiquitous, embedded computation. At the close of the article, we single out for special mention one area that contributes centrally to all of these technologies: software development technology.
Ubiquitous computing. We are moving swiftly toward a world of ubiquitous computing. Each year we produce more than one processor chip for each person on the planet, and the growth rate of chip production exceeds the population growth rate. So, by some definition, ubiquitous computing is surely upon us. Although for some people ubiquitous computing means that we all carry devices that enable us to connect to the Internet wherever we happen to be, and although it does seem that this will happen, that is not what we mean by ubiquitous computing. Our vision of ubiquitous computing is that devices under our control will heavily populate our environment.
Enaction-Based Artificial Intelligence: Toward Co-evolution with Humans in the Loop
This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we shall review previous works on this issue in terms of artificial life and robotics. We shall focus on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, thereby enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces in setting up experiments which will deal with the problem of artificial intelligence in a variety of enaction-based ways.
Human cognition and the AI revolution
Annals of the New York Academy of Sciences, 2019
Discovering the true nature of reality may ultimately hinge on grasping the nature and essence of human understanding. What are the fundamental elements or building blocks of human cognition? And how will the rise of superintelligent machines challenge our ideas about cognition, reality, and the limits of human understanding? Logician/mathematician Roger Antonsen and computer science pioneer Barbara J. Grosz join forces to shed light on these questions and the horizon of artificial intelligence.
Philosophical Foundations of AI
Lecture Notes in Computer Science, 2007
Artificial Intelligence was born in 1956 as the offspring of the newly created cognitivist paradigm of cognition. As such, it inherited a strong philosophical legacy of functionalism, dualism, and positivism. This legacy found its strongest statement some 20 years later in the physical symbol systems hypothesis, a conjecture that deeply influenced the evolution of AI in subsequent years. Recent history has seen a swing away from the functionalism of classical AI toward an alternative position that re-asserts the primacy of embodiment, development, interaction, and, more recently, emotion in cognitive systems, focussing now more than ever on enactive models of cognition. Arguably, this swing represents a true paradigm shift in our thinking. However, the philosophical foundations of these approaches (phenomenology) entail some far-reaching ontological and epistemological commitments regarding the nature of a cognitive system, its reality, and the role of its interaction with its environment. The goal of this paper is to draw out the full philosophical implications of the phenomenological position that underpins the current paradigm shift towards enactive cognition.
Modelling social action for AI agents
Artificial Intelligence, 1998
AI is a science, not merely a technology or a branch of engineering. It cannot find an identity (ubi consistam) in a technology, or a set of technologies, and we know that such an identification is quite dangerous. AI is the science of possible forms of intelligence, both individual and collective. To rephrase Doyle's claim, AI is the discipline aimed at understanding intelligent beings by constructing intelligent systems.