Beyond Intelligent Systems: Listening to the Ghosts in the Machines, Philosophical Foundations Mini Track
Related papers
AI and the mechanistic forces of darkness
Journal of Experimental & Theoretical Artificial Intelligence, 1995
Under the Superstition Mountains in central Arizona toil those who would rob humankind of its humanity. These gray, soulless monsters methodically tear away at our meaning, our subjectivity, our essence as transcendent beings. With each advance, they steal our freedom and dignity. Who are these denizens of darkness, these usurpers of all that is good and holy? None other than humanity's arch-foe, the Cognitive Scientists: AI researchers, fallen philosophers, psychologists, and other benighted lovers of computers. Unless they are stopped, humanity, you and I, will soon be nothing but numbers and algorithms locked away on magnetic tape. What are the prospects of stopping these... these cognitive scientists? Not good; their power is enormous. They have on their side the darkest of forces: modern, Western logocentrism. Using this source, they aim at nothing less than a complete objectifying of humankind. This objectification, this replacing of the human spirit with a computational model of mind, is not only the most pernicious assault we humans have ever experienced, it is arguably the most insidious. It doesn't matter whether or not the objectifying world view is correct (arguments against it, even devastating arguments, apparently have no effect on it). All that matters is that it is useful in some limited technological sense. Why? Because, given humankind's love of technology and our ability to re-invent ourselves, cognitive science's technological success will virtually guarantee that we will re-invent ourselves as computers. I quote G. B. Madison: [AI]'s real significance or worth [lies] solely in what it may contribute to the advancement of technology, to our ability to manipulate reality (including human reality), [but] it is not for all that an innocuous intellectual endeavor and is not without posing a serious danger to a properly human mode of existence.
Because the human being is a self-interpreting or self-defining being, and because, in addition, human understanding has a natural tendency to misunderstand itself (by interpreting itself to itself in terms of the objectified by-products of its own idealizing imagination; e.g., in terms of computers or logic machines), there is a strong possibility that, fascinated with their own technological prowess, moderns may very well attempt to understand themselves on the model of a
ARE MACHINES CONSCIOUS AND CAN THEY EVER BE?
The previous decade has witnessed the widespread recognition that sophisticated AI is under development. The likes of Bill Gates, Stephen Hawking, André LeBlanc, Stefan Wess, Jonathan White, Daniel Dewey and other experts in this field have all agreed that it is only a matter of time before superintelligent machines are created, and, although opinions differ, all predict this event will occur within the next couple of decades. However, they all end their predictions with strong cautionary notes that the rise of "superintelligent" machines might bring about disasters as severe as the end of mankind. Even though this topic sparks much interest, this paper is essentially not concerned with whether a machine revolution might wipe humans off the face of the earth, but with the formulation of an "intelligent" machine and the next step: a conscious artificial intelligence. So, what are the current positions on machine intelligence and consciousness that brought about such grave predictions? I would like to divide the current attitudes into three "schools" of thought: the computationalist/pragmatic school, spearheaded by several top AI researchers mentioned above; the functionalist/emergentist school, represented by Raymond Kurzweil; and the panpsychist/IIT school, whose proponents are Christof Koch and Giulio Tononi. It seems that all of the experts cited at the beginning of the previous paragraph (comprising the first, computationalist/pragmatic group of thinkers) focused their attention on intelligence as something that is (partially) equivalent to consciousness, disregarding qualia in general.
In their opinion, the main premise of artificial intelligence (AI) coming into being is the so-called "intelligence explosion", which would come about after scientists have devised a very sophisticated machine (be it hardware or software), far superior to anything we have today, and integrated it with the greatest in AI at that time (a learning machine). This machine could efficiently form hypotheses, make plans based on them, execute these plans and observe the outcomes relative to the plans, and it would then be tasked to algorithmically investigate AI and create machines with greater computing power. This kind of "recursion" would add to the already exponential development of computing power in the machine, leading to an "intelligence explosion" (Dewey 2013), a threshold below which intelligence seems to peter out, but above which it thrives and grows exponentially. This would ultimately create a machine which would be able to outperform the human race in its totality in terms of intelligence, a moment when technological change would become so profound that it would alter the fabric of human history. This moment was dubbed the "singularity" (White 2014). It is not hard to see why the premise of a goal-oriented and chain-reactive system opens the possibility of negative consequences. In an analogy with microorganisms, Dewey (2013) postulated the possibility of the system's algorithmic goals not being in line with the goals of humans, which would then initiate a process of eliminating the obstacle, turning the immensely superior computing machine against humanity. However, this is still the domain of intelligence viewed purely as computational power and algorithmic capability, while true consciousness is not even close to being explained. The question arises: could these
The Ghost in the Machine: Humanity and the Problem of Self-Aware Information
Lunceford, Brett. “The Ghost in the Machine: Humanity and the Problem of Self-Aware Information.” In Palgrave Handbook of Posthumanism in Film and Television, edited by Michael Hauskeller, Thomas D. Philbeck, and Curtis Carbonell, 371-379. London: Palgrave Macmillan, 2015. Theories of posthumanism place considerable faith in the power of information processing. Some foresee a potential point of self-awareness in computers as processing ability continues to increase exponentially, while others hope for a future in which their minds can be uploaded to a computer, thereby gaining a form of non-corporeal immortality. Such notions raise the question of whether humans can be reduced to their own information processing: Are we thinking machines? Are we the sum of our memories? Many science fiction films have grappled with similar questions; this chapter considers two specific ideas through the lens of these films. First, I will consider the roles that memory and emotion play in our conception of humanity. Second, I will explore the question of what it means to think by examining the trope of sentient networks in film.
Artificial Intelligence as a theoretical machine and networked system
There is an inherent overlap between science fiction and the fields of social cybernetics and cyborg theory, both pushing the bounds of scientific possibility and of the 'human'. Donna Haraway's 'Cyborg Manifesto' considers primarily feminist science fiction in order to problematise the naturalistic male-female binary, but her assessment of the human-machine hybrid certainly applies beyond feminist discourse to the wider social assimilation of technology as "prosthetic devices, intimate components, friendly selves" and an abandonment of "organic holism" as a necessity for "impermeable whole-ness". In 'How We Became Posthuman', Katherine Hayles turns to William Gibson's 'Neuromancer' as an example of how the concept of a machine-flesh duality has begun to "precipitate into public consciousness," breaking down the boundaries that define the 'machine' as separate from the 'human', or, in more contemporary terms, where the 'machine' is the 'cybernetic': the vast network of interconnected computer systems linking the modern world.
Computers, Brains and Minds: Where are the ghosts within the machines?
This work opens a new perspective in the debate on the relationship between brain and mind. It poses the central question of whether, in our days, dominated by a scientific view of the world, we really have an adequate idea of what is actually meant by terms like "mind", "self" or even "perceptions". Based on the consideration that any "non-material" or "mind" stuff can never move particles around the world, we find ourselves forced to accept physical monism as the premise for all plausible theories of mind. However, we now face different but still persistent kinds of problems: Why do we have "phenomenal" states? Do we really have them at all? How could phenomenal states be related to neuronal states? The still ongoing heated debates about these issues demonstrate at least one thing: so far, no commonly accepted explanation is detectable on the horizon.
ARTIFICIAL INTELLIGENCE, did you say? But what are we talking about? We have many questions
Could we say that Artificial Intelligence (AI) is only in its infancy? Should we then ask, against all received opinion, whether the algorithmic path currently taken by AI, in the light of unbridled marketing-driven growth, risks ending, if today's isolated approach is later generalized, in a stalemate, both scientific and human? In fact, do not the algorithms we usually encounter belong to stochastic domains, to Bayesian inference, or at best to game theory (which may include fuzzy logic)? Would it not then be inescapable to consider the cognitive bias introduced, de facto, by this necessarily restrictive vision, confined to purely algorithmic and mathematical models? Indeed, this everyday approach, from what we know of it, amounts to a quasi-exclusively formal and narrowly technicist method, commonly called logico-mathematical, devoted more to the search for probabilistic correlations between the systems investigated than to the precise identification of their deep causes. In this respect, it proceeds like a doctor without a holistic methodology who claims to provide an effective therapy from the analysis of apparent symptoms alone, without first having investigated and ranked the range of possible causes according to their degree of plausibility. Can we really produce an Artificial Intelligence simply by starting from data, even by searching through a vertiginous and exponentially increasing mass of it, coupled with ever more powerful means of processing (on the way to the quantum computer)?
Especially since, by analyzing the signified carried by untyped and therefore degraded data at the level of the resulting information, can we reliably reconstruct, in the reverse direction, the specific relations we would previously have identified with certainty as the signifier ("reverse engineering")? Moreover, are not the hyper-centralized server farms required by these kinds of processing, especially with "big data", like public "blockchains" using proof-of-work mining, particularly energy-intensive? (Some researchers put their consumption at the equivalent of an average city.) Do they not, in this way, defeat the goal of the energy transition, while claiming to lead to a viable and sustainable solution? Should we then declare a moratorium, in order to make sure that true AI is integrated into "deep tech"? This would mean following a transverse and eclectic strategy of cognitive integration, at the crossroads of the dispersed fields of human knowledge and heuristics, drawing on various sciences: neuroscience, phenomenology, anthropology, ontology, linguistics, biology, psychology, cultural and behavioral studies, and more. It would allow us to evolve gradually from interdisciplinarity to transdisciplinarity, especially if we had to contextualize the treatment of urgent questions of increasing complexity, in order to truly satisfy our societal and global goals (in the global village?).
The Computer Never Was a Brain, or the Curious Death and Designs of John von Neumann
Verhaltensdesign
"And it wasn't functioning anymore." (Edward Teller on John von Neumann's brain) Even when it was most tempting, John von Neumann resisted the neuro-hubris of the computer-brain analogy he helped create. Indeed, in his deathbed lectures The Computer & the Brain, he grants certain aspects of the analogy "absolute implausibility". The distance between neuroscience and computer science has grown more obvious since he was writing in 1956; nevertheless, the ripple effects of that troublesome analogy live on not only in the current moment of so-called "smart machines" but in modern approaches to designing not only the behavior of human cognition, but life and death itself, into machinery. This essay explores one moment in how twentieth-century information scientists and technologists came to situate designs on human behavior in computing, cognition, and communication discourse, and in the process encountered the limits of those designs in the computer-brain analogy. The literate world has endured, especially since the postwar period, a steady and ever-increasing stream of articles, books, and pundits declaring the coming convergence of mind and machine in the age of artificial superintelligence. Central to such futurism, of course, is the analogy between the computer and the brain collapsing into a singular reality. The details of these projected realities of such a computer-brain merger are, of course, many and diverse: Hermann von Helmholtz envisioned a nervous system in the transmission logics of telegraph networks of the late nineteenth century, while commentators in the early twenty-first see democratic intelligence leavening