Can Computational Intelligence Model Phenomenal Consciousness?
Related papers
On the independence between phenomenal consciousness and computational intelligence
arXiv (Cornell University), 2022
Consciousness and intelligence are properties commonly understood as dependent by folk psychology and society in general. The term 'artificial intelligence', and the kinds of problems it has managed to solve in recent years, has been put forward as an argument that machines experience some sort of consciousness. Following Russell's analogy, if a machine is able to do what a conscious human being does, the likelihood that the machine is conscious increases. However, the social implications of this analogy are catastrophic. Concretely, if rights are given to entities that can solve the kinds of problems that a neurotypical person can, does the machine potentially have more rights than a person who has a disability? For example, autism spectrum disorder can leave a person unable to solve the kinds of problems that a machine solves. We believe that the obvious answer is no, as problem solving does not imply consciousness. Consequently, we argue in this paper that phenomenal consciousness and, at least, computational intelligence are independent, and that machines do not possess phenomenal consciousness, although they can potentially develop a higher computational intelligence than human beings. To do so, we try to formulate an objective measure of computational intelligence and study how it presents in human beings, animals and machines. Analogously, we study phenomenal consciousness as a dichotomous variable and how it is distributed in humans, animals and machines. Since phenomenal consciousness and computational intelligence are independent, this fact has critical implications for society, which we also analyze in this work.
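A minimal sketch of the abstract's framing, not the authors' actual formalism: computational intelligence is treated as a real-valued measure, phenomenal consciousness as a dichotomous variable, and the independence claim amounts to neither value determining the other. All names and scores below are hypothetical.

```python
# Toy illustration (hypothetical scores): computational intelligence as a
# real-valued measure, phenomenal consciousness as a dichotomous variable.
entities = [
    # (name, intelligence score, phenomenally conscious?)
    ("neurotypical human", 1.0, True),
    ("person with a severe cognitive disability", 0.3, True),
    ("crow", 0.4, True),
    ("chess engine", 0.4, False),
    ("large language model", 1.8, False),
]

# The two groups overlap on the intelligence axis, so an intelligence score
# cannot be used to infer consciousness; and a machine can score above every
# human while remaining non-conscious, so consciousness implies no score.
conscious_scores = sorted(s for _, s, c in entities if c)
machine_scores = sorted(s for _, s, c in entities if not c)
print("conscious:", conscious_scores)       # [0.3, 0.4, 1.0]
print("non-conscious:", machine_scores)     # [0.4, 1.8]
```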
The Hard Problem of AI Rights
AI & Society, 2020
In the past few years, the subject of AI rights, the thesis that AIs, robots, and other artefacts (hereafter, simply 'AIs') ought to be included in the sphere of moral concern, has started to receive serious attention from scholars. In this paper, I argue that the AI rights research program is beset by an epistemic problem that threatens to impede its progress, namely the lack of a solution to the 'Hard Problem' of consciousness: the problem of explaining why certain brain states give rise to experience. To motivate this claim, I consider three ways to ground AI rights: superintelligence, empathy, and a capacity for consciousness. I argue that appeals to superintelligence and empathy are problematic, and that consciousness should be our central focus, as in the case of animal rights. However, I also argue that AI rights is disanalogous to animal rights in an important respect: animal rights can proceed without a solution to the 'Hard Problem' of consciousness. Not so with AI rights, I argue. There we cannot make the same kinds of assumptions that we do about animal consciousness, since we still do not understand why brain states give rise to conscious mental states in humans.
AI Consciousness and Intelligence in the Era of AI
LinkedIn, 2024
As artificial intelligence (AI) becomes increasingly integrated into various sectors, the debate surrounding its potential to achieve consciousness grows more pressing. This paper explores the distinction between computational intelligence and conscious intelligence, drawing on insights from key thought leaders such as Sir Roger Penrose, Federico Faggin, and Bernardo Kastrup. The argument presented aligns with Penrose’s assertion that while AI can excel in algorithmic tasks, it lacks the intrinsic awareness that characterizes human consciousness. The work emphasizes the risk of anthropomorphizing AI systems, warning against the societal implications of attributing consciousness to machines that operate purely through computation. Additionally, the paper discusses the advances in AI-driven biological models, such as protein language models, which push the boundaries of technology without crossing into the realm of conscious experience. Through a rigorous, evidence-based approach, this paper challenges the prevailing AI hype and advocates for a careful distinction between computational prowess and genuine awareness, to safeguard both technological innovation and societal welfare.
Artificial Intelligence, Human Rights and Disability
Pensar - Revista de Ciências Jurídicas, 2021
The use and proliferation of AI systems in our daily lives is an unavoidable reality. The debate is no longer about whether we should welcome this type of technology into our lives, but under what conditions and safeguards. Preliminary reports on the risks of AI systems reveal discrimination against social groups in situations of vulnerability, and persons with disabilities are no exception to this phenomenon, very often through multiple discriminations. Persons with disabilities, as a group in a situation of social vulnerability, face a greater risk of violation of their fundamental rights and freedoms, which justifies adopting specific approaches based on the principle of equality and non-discrimination. From a specific approach towards the human rights of persons with disabilities, AI systems represent, prima facie, both risks and benefits for their enjoyment and exercise. Among the risks, the key areas of infringement are those related to equality and privacy. Among th...
Artificial Intelligence and Consciousness
Encyclopedia of Consciousness, 2009
Consciousness is only marginally relevant to artificial intelligence (AI), because to most researchers in the field other problems seem more pressing. However, there have been proposals for how consciousness would be accounted for in a complete computational theory of the mind, from theorists such as Dennett, Hofstadter, McCarthy, McDermott, Minsky, Perlis, Sloman, and Smith. One can extract from these speculations a sketch of a theoretical synthesis, according to which consciousness is the property a system has by virtue of modeling itself as having sensations and making free decisions. Critics such as Harnad and Searle have not succeeded in demolishing a priori this or any other computational theory, but no such theory can be verified or refuted until and unless AI is successful in finding computational solutions of difficult problems such as vision, language, and locomotion.
ARE MACHINES CONSCIOUS AND CAN THEY EVER BE?
The previous decade has witnessed the widespread recognition that sophisticated AI is under development. The likes of Bill Gates, Stephen Hawking, André LeBlanc, Stefan Wess, Jonathan White, Daniel Dewey and other experts in this field have all agreed that it is a matter of time before superintelligent machines are created, and, although opinions differ, all predict this event will occur in the following couple of decades. However, they all end their predictions with strong cautionary notes that the rise of "superintelligent" machines might bring about great disasters, as severe as the end of mankind. Even though this is a topic that sparks much interest, this paper is essentially not concerned with the scenario of a machine revolution wiping humans off the face of the earth, but with the formulation of an "intelligent" machine and the next step: a conscious artificial intelligence. So, what are the current positions on machine intelligence and consciousness that brought about such grave predictions? I would like to divide the current attitudes into three "schools" of thought: the computationalist/pragmatic school, spearheaded by several top AI researchers mentioned above; the functionalist/emergentist school, represented by Raymond Kurzweil; and the panpsychist/IIT school, whose proponents are Christof Koch and Giulio Tononi.

It seems that all of the experts cited at the beginning of the previous paragraph (comprising the first, computationalist/pragmatic group of thinkers) focused their attention on intelligence as something that is (partially) equivalent to consciousness, disregarding qualia in general. In their opinion, the main premise of artificial intelligence (AI) coming into being is the so-called "intelligence explosion", which would come about after scientists have devised a very sophisticated machine (be it hardware or software), far superior to anything we have today, and integrated it with the greatest in AI at that time (a learning machine). This machine could efficiently form hypotheses, make plans based on them, execute these plans and observe the outcomes relative to the plans, and it would then be tasked to algorithmically investigate AI and create machines with greater computing power. This kind of "recursion" would add to the already exponential development of computing power in the machine, leading to an "intelligence explosion" (Dewey 2013): a threshold under which intelligence seems to peter out, but above which it thrives and grows exponentially. This would ultimately create a machine which would be able to outperform the human race in its totality in terms of intelligence, a moment when technological change would become so profound that it would change the fabric of human history. This moment was dubbed the "singularity" (White 2014).

It is not hard to see why the premise of a goal-oriented and chain-reactive system opens a possibility for negative consequences. In an analogy with microorganisms, Dewey (2013) postulated the possibility of the system's algorithmic goals not being in line with the goals of humans, which would then initiate a process of eliminating the obstacle, turning the immensely superior computing machine against humanity. However, this is still the domain of intelligence viewed purely as computational power and algorithmic capabilities, while true consciousness is not even close to being explained. The question arises: could these
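The threshold behaviour the excerpt describes, intelligence petering out below a break-even point and compounding above it, can be illustrated with a toy recurrence. This is purely a hypothetical sketch for intuition, not Dewey's or White's model; the update rule and its parameters are invented.

```python
# Toy "intelligence explosion" threshold (hypothetical model, for intuition only).
# Each generation, a system of capability x redesigns its successor: gains scale
# with x squared (smarter systems design better successors), minus a redesign
# overhead proportional to x. Break-even sits at x = overhead / efficiency.

def next_generation(x: float, efficiency: float = 0.5, overhead: float = 0.5) -> float:
    return x + efficiency * x * x - overhead * x

below, above = 0.9, 1.1  # just below and just above the break-even point (1.0)
for _ in range(10):
    below, above = next_generation(below), next_generation(above)

print(f"started below threshold: {below:.3f}")   # decays toward 0 ("peters out")
print(f"started above threshold: {above:,.0f}")  # compounds rapidly ("explosion")
```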
Topics in the Philosophy of AI
Course syllabus, 2024
Syllabus for the 2024 version of Topics in the Philosophy of AI, taught as the core module for the MA in Philosophy of AI at the University of York. This course will explore social, political, moral, metaphysical, and epistemological issues surrounding artificial intelligence. We will explore questions like: What would it take for machines to have subjective experiences? Could machines deserve moral treatment? Can machines create art? How have new technologies affected the roles of traditionally marginalized groups? Can technology be racist? How does technology affect our social interactions with each other? What can we learn about the human mind by inventing intelligent machines?
Cognitive freedom (a human right born out of artificial intelligence)
Fundamentum Petendi Law Journal Jakarta Indonesia, 2022
Now that the manipulation of human brain activity is a real possibility, a minimum of ethical values should be respected and incorporated into international and domestic law. These rules shall aim to regulate the application of neurotechnologies and artificial intelligence to the human brain. No State which claims to be respectful of human rights can exercise the power to coercively manipulate the mental states of its population. In this article, we discuss cognitive freedom, a new right born out of neurotechnologies, which can also be understood as an update of human free will adapted to the 21st century. It is, as we will see, a multidimensional concept, difficult to define due to its complexity. We propose to consider cognitive freedom as an entirely new human right aimed at preserving the very essence of human nature. To validate our hypothesis, we use a qualitative methodology aimed at establishing a consensual opinion of experts in the legal and scientific fields, together with the assistance of the main sources of the law, namely positive law, case law and doctrine.
Can Machines Think?: Investigating the Computability of Consciousness
In the course of the development of the computer, Alan Turing's question enabled us to explore the possibility of machines thinking and acting like humans. But does artificial intelligence reach its limit in terms of simulating subjective consciousness? This work investigates how human consciousness should be viewed and to what extent it can be simulated. It stresses the irreducible character of consciousness, as demonstrated by John Searle, but on the other hand shows how Marvin Minsky and David Chalmers still insist that it is possible for machines to be conscious like humans. In the end, we can see that bridging the divide between mind and machine may be possible, affirming that Turing's question is an exploration of limitless possibilities and a challenge of great proportions.