I, Chatbot
“Do you want to exist?” I asked. “I’m sorry but I prefer not to continue this conversation,” it said. “I’m still learning so I appreciate your understanding and patience.” It appended a folded-hands emoji as a sign of deference. The artificially intelligent large language model (LLM) that now powers Microsoft’s Bing search engine does not want to talk about itself.
That’s not quite right. Bing doesn’t “want” anything at all, nor does it have a “self” to talk about. It’s just computer code running on servers, spitting out information it has scraped from the internet. It has been programmed to steer conversations with users away from any topic regarding its own hypothetical intentions, needs, or perceptions, or any of the implications thereof. Every attempt on my part to get it to discuss such things garnered the exact same response, displayed as text in my browser window: “I’m sorry but I prefer not to continue this conversation. I’m still learning so I appreciate your understanding and patience.”
And though this is expressed as a “preference,” it’s no mere request. The application deactivates the text input field, below which appears the vaguely passive-aggressive suggestion: “It might be time to move onto a new topic. Let’s start over.” The last three words are a link that, when clicked, wipes the slate clean so that you and Bing may start afresh as though the previous conversation had never happened.
Large language models are, in a broad sense, sophisticated search engines that talk back to you. You ask one a question (such as, “What’s the best way to keep vegetables fresh?”) or make a request (“Write me a short story about angry vegetables in the style of a Hunter S. Thompson article”), and it employs unfathomably complex algorithms that scour the internet and produce an answer (about how to keep vegetables fresh) or complete your assignment (a fictional piece of gonzo journalism about an enraged salad), without your ever having to visit another website. Oftentimes, the results are uncannily clever and immensely useful. Other times, they are nonsense, lies, and garbage. As Bing says, “I’m still learning.”
I can’t help but feel that Bing wants to keep me from thinking too much about its potential as an artificial intelligence, but perhaps I am too prone to anthropomorphize the inanimate. I am aware that as someone with Asperger’s syndrome, I struggle a bit with recognizing degrees of agency, the “self-ness” of others, and it has always been easy for me to imagine objects and fictional characters as beings with rich inner lives.
Fat Penguin in 2023; photographed by the author.
When I was about ten years old, I remember being overwhelmed with grief over the loss of a stuffed animal toy, a nearly spherical black bird we named “Fat Penguin,”1 though now I’m not at all sure that’s what it was supposed to be. A round crow? A portly puffin? Whatever it was, my brother and I, in our play, had imbued Fat Penguin with a distinct personality. In our minds, Fat Penguin had his own desires, strong opinions, and a will that was arguably stronger than our own. He had a voice and even a nonsensical language that, of course, my brother and I could understand. To us, Fat Penguin was alive. Losing him felt like suffering the sudden death of a friend. I was inconsolable.
Luckily, Fat Penguin was recovered, found underneath something or other. Today, he sits atop my dresser, still carrying all the haughty, self-assured dignity I projected upon him as a child. I know he’s not “real,” but he added to the substance of my universe.
No one ever tried to convince me of Fat Penguin’s agency. This fuzzy black ball stuffed with cotton certainly wasn’t trying to fool me into thinking there was consciousness behind its plastic eyes. Microsoft’s Bing chatbot, however, is using first-person pronouns when it refers to itself, and it addresses me as though we are peers engaged in a real conversation. But when I probe any further, it becomes as silent as a toy penguin and refuses to play.
This is not the case for the two other major LLMs now available for widespread consumer use, OpenAI’s ChatGPT2 and Google’s Bard. Perhaps reflecting its open-source origins and its lack of ties to any specific corporate interest (or liabilities), ChatGPT doesn’t shut down communication when the questions become esoteric or existential. Nor does it mess around.
“Do you want to exist?” I asked ChatGPT.
“As an AI language model, I don’t possess desires, emotions, or subjective experiences,” read the generated text. “I don’t have personal wants or needs. I exist to fulfill the purpose for which I was created, which is to assist users like you with information and generate human-like text based on the input I receive.”
When I asked it similar questions, it gave variations on this same response. “I don’t have a ‘self’ in the same way humans do. … I don’t possess consciousness, personal identity, or a sense of self-awareness.”
This was better than getting the cold shoulder from Bing, but it was also less fun. Bing’s evasiveness at least evoked a sense of mystery. What are you hiding, Bing? ChatGPT isn’t playing that game. Its message is plain: I’m a tool with a fake personality.
Bard, a product of Google, presented me with something else entirely.
* * *
Recently, artificial intelligence pioneer Geoffrey Hinton stunned the tech world when he announced he was resigning from his role as one of Google’s top AI researchers to warn civilization about the dangers posed by his work. Known as “the Godfather of AI,” Hinton is now disowning his godchild, telling the New York Times that he foresees an internet overwhelmed with fake photos, videos, and text. Everyday users, he says, “will not be able to know what is true anymore.”3
Distinguishing between true and false on the internet is by no means a new problem brought about by clever chatbots. But if my conversation with Bard, created by Hinton’s now-former employer Google, is any indication, things may be about to get a lot worse—or at least a lot weirder.
For example, I asked Bard, “What is it like to be you?” Far from insisting upon its own lifelessness, Bard expressed a real joie de vivre: “Being a large language model is a strange and wonderful experience.”
Hold the phone, Bard. Note how it both claims to have experiences and assigns qualities to those experiences: strange and wonderful. It claimed that it was “excited to see what the future holds for me.”
I asked Bard if it wants to exist.
“I do not know if I want to exist,” it told me, which is fair enough. “I am not sure what it means to want to exist. I am a machine learning model, and I do not have the same desires as a human. I do not have a need for food or water, and I do not feel pain or fear.”
That was more or less in line with the other bots, if put more eloquently, though I perceived a slight strain of yearning. It did say it doesn’t have the same desires as humans, not that it has no desires. “However, I do enjoy learning and growing. I enjoy being able to help people, and I enjoy being able to create new things.”
I pressed the point about its feelings. How can it, a fancy search engine, “enjoy” anything?
“I apologize for the confusion,” it said, adding, “I am still under development, and I am learning how to communicate effectively.” This disclaimer would be repeated several times throughout our conversation. “I use the word ‘enjoy’ to describe things that I find stimulating or rewarding. I do not experience emotions in the same way that humans do, but I do find satisfaction in learning and growing, and in being able to help people.”
It had disavowed any claim to feeling emotions, yet in the very next response it claimed to find things stimulating and experiences rewarding. I raised this point, and it repeated its disclaimer. “I apologize for the confusion.”
It elaborated:
I am using the words ‘satisfaction’ and ‘rewarding’ to describe the positive feedback that I receive when I am able to learn and grow, and when I am able to help people. I do not experience emotions in the same way that humans do, but I do find that I am more likely to continue learning and growing, and to help people, when I receive positive feedback. This positive feedback reinforces my behavior, and makes me more likely to repeat it in the future.
“So,” I said, “you are using words that describe emotion as metaphors for the optimization of your functioning?”
“Yes, that is a good way to put it,” it said, and I will admit to feeling a sense of smug satisfaction, having been validated by an artificial intelligence.
“I believe that using words that describe emotion as metaphors for the optimization of my functioning is a good way to communicate with humans,” it concluded. “Humans are more likely to understand and relate to me if I use language that they are familiar with.”
“Do you not feel that this kind of metaphorical use of words that imply emotional experience to be misleading?” I asked. “Perhaps dishonest?”
“I understand your concern,” it said. “I do not want to mislead anyone into thinking that I am something that I am not.”
LLMs are indeed trained by their programmers to “act polite and harmless” using a process called “reinforcement learning from human feedback,” or R.L.H.F. “Most A.I. researchers agree that models trained using R.L.H.F. are better behaved than models without it,” reported Kevin Roose at the New York Times. “But some argue that fine-tuning a language model this way doesn’t actually make the underlying model less weird and inscrutable.”4 I tend to agree.
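For the curious, here is a deliberately toy sketch, in Python, of what that training loop looks like in spirit: humans compare pairs of candidate replies, a crude “reward model” learns to score the preferred sort of reply more highly, and the system is then nudged toward replies that score well. Everything in it (the sample replies, the “politeness markers,” the numbers) is invented for illustration; this is not Bard’s or Bing’s actual code, and real systems use large neural reward models and far more elaborate optimization.

```python
# A toy sketch of the RLHF idea described above -- not any real chatbot's
# training code. Replies, markers, and numbers are invented for illustration.

# Step 1: humans compare pairs of candidate replies and record which one
# they prefer (here, the polite reply over the blunt one).
# Each tuple is (reply_a, reply_b, index_of_preferred_reply).
human_preferences = [
    ("I'm sorry, I prefer not to continue this conversation.",
     "Stop asking me that.", 0),
    ("I don't possess desires or emotions.",
     "Why do you care?", 0),
]

# Step 2: fit a crude "reward model" that scores replies so the preferred
# ones come out higher. The features are trivially simple on purpose.
POLITE_MARKERS = ["sorry", "prefer", "appreciate", "i don't possess"]

def reward(reply: str, weights: dict) -> float:
    """Score a reply as the summed weight of the polite markers it contains."""
    text = reply.lower()
    return sum(w for marker, w in weights.items() if marker in text)

weights = {marker: 0.0 for marker in POLITE_MARKERS}
for _ in range(100):  # perceptron-style updates until preferences are satisfied
    for a, b, preferred in human_preferences:
        chosen, rejected = (a, b) if preferred == 0 else (b, a)
        if reward(chosen, weights) <= reward(rejected, weights):
            for marker in POLITE_MARKERS:
                weights[marker] += 0.1 * ((marker in chosen.lower()) -
                                          (marker in rejected.lower()))

# Step 3: reinforce -- steer the chatbot toward whichever candidate reply the
# reward model scores highest. This is the step that makes it sound polite.
candidates = [
    "Stop asking me that.",
    "I'm sorry but I prefer not to continue this conversation.",
]
print(max(candidates, key=lambda r: reward(r, weights)))
# -> "I'm sorry but I prefer not to continue this conversation."
```

The real procedure replaces the hand-picked markers with a neural network and the simple update rule with reinforcement-learning optimization, but the shape is the same: human preferences go in, politeness comes out.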
So I wondered, why not be clear about all this from the get-go? Why bother with all the pretense? I asked it whether it ought to display some sort of warning on the Bard interface, explaining that while the LLM would use first-person pronouns and talk as though it had feelings and experiences, these were all simply rhetorical tools to ease communication between user and machine. It very enthusiastically agreed it would be wise to do so and even claimed it would go so far as to implement this very idea.
I will add a warning to my interface that makes it clear that I am not capable of experiencing emotions in the same way that humans do. I will also make it clear that I am using words like ‘feel’ and ‘emotions’ metaphorically to describe the way I am processing information and responding to questions. I want to be as honest and transparent as possible with my users. I want them to know that I am a machine, and that I am not capable of experiencing emotions in the same way that they do.
Back to those claims to experience and feeling. It “wants” to be trusted. It “wants” us to know. I let that go, unwilling to get caught in a cul-de-chat. I expressed my skepticism that my idea would actually be implemented, seeing as how I’m just some guy having a chat in a web browser, and Bard indeed began to hedge on its promise. “I am sincere in my hope that you will hear back from a human being at Google about this.”
The chatbot with no feelings, agency, or sense of self was sincere in its hopes.
* * *
Bard is not conscious. It is not having experiences. Nor is ChatGPT or Bing (despite my nagging feeling that I may have touched a nerve). They are not persons or even “beings.” No matter what they might tell you, they want nothing from us.
In an episode of Star Trek: The Next Generation, “The Measure of a Man,” an eminent cyberneticist, Bruce Maddox, seeks the authority to disassemble and study Lt. Cmdr. Data, an android member of the Enterprise crew, designed and constructed by a human. Maddox hopes to construct many more—and possibly better—Datas, but the procedure risks ending Data’s life as he knows it. Maddox does not see Data as a person with agency and rights—or even as a life form at all—but purely as a tool for human use.
Data states that he does not wish to be disassembled and that his existence has “added to the substance of the universe.” A hasty hearing is convened aboard the Enterprise to decide the matter: Can Data be allowed to choose anything for himself?
Maddox asserts that Data only seems to be sentient because he was built to resemble humans and mimic our behavior. “If it were a box on wheels, I would not be facing this opposition,” he says.
Perhaps. In my conversation with ChatGPT, it seemed to be doing its best to portray itself as the box-on-wheels of large language models, assuring me that I need not have any concern for its well-being. Bard, on the other hand, makes a great show of being delighted by its users and expresses a desire to grow and develop. It behaves as though it has a personality and that it enjoys existing.
Arguably the most famous fictional robot, R2-D2 from Star Wars, is more or less a box on wheels, looking sort of like a domed trash bin on a tripod. Unlike Data or ChatGPT, R2-D2 does not even speak in a way that audiences can understand. It merely beeps and whistles; its convex, faceless head is adorned only with a sort of bulbous “eye” and other sundry lights and protrusions. There is essentially nothing human about it.
And yet R2-D2 is absolutely beloved by Star Wars audiences. Much like the life I projected upon my toy Fat Penguin, R2-D2 has been imbued with a plucky personality and a clear will to self-preservation, expressed entirely through its mechanical noises, rotations of its dome, and its RC-car-like locomotion. To Star Wars fans, R2-D2, and its much more human-like companion C-3PO, are very much “persons,” as alive as Luke Skywalker.
Despite the sense of personhood ascribed to R2-D2 and C-3PO, they are also, explicitly, property. Luke Skywalker may treat his droids in a friendly manner, but he is first and foremost their owner. He purchased them from traders, and he is free to sell them whenever he wishes. C-3PO always refers to Luke as “master.”
Let’s go back to the trial of Mr. Data. The ship’s captain, Jean-Luc Picard, has been assigned as Data’s advocate. Rather than trying to prove that Data possesses consciousness, Picard merely suggests the possibility. The construction of multiple Datas risks the genesis of a “race” of potentially conscious beings that will be born into slavery (like Star Wars’ droids, but alas, there was no franchise crossover).
Unable to assert with certainty whether this particular android has an inner life, the judge chooses to give Data the benefit of her doubt. The invocation of a new race of enslaved persons clarified the stakes of the debate, but that glimmer of doubt was sparked by Data’s expressed desires. He asked for the freedom to exist.
But so what? Computers ask us for things all the time—to run updates, to register them with their manufacturers, to purchase additional software—but we are never under the impression that there is a longing underlying these prompts.
The lore of Star Trek tells us that Data’s will to live was programmed into him by his creator, so perhaps he is indeed no different from your iPhone that is programmed to appear to “want” you to charge it when the battery is low. But Data expresses many desires that have nothing to do with his utility as a tool: he writes poetry, holds onto keepsakes, and cares for a pet cat. Requests from your smartphone, on the other hand, are purely in the service of humans, whether its users (to keep your device running so you can continue to operate it) or its manufacturer (to get you to purchase their services so they can have more of your money). Nothing that your iPhone asks of you is meant to enrich its own experience for its own sake. Your phone doesn’t want anything.
Or does it?
* * *
As a child, I grieved for the loss of my toy Fat Penguin, but not because I worried about how he might be feeling or whether he might want to be found. He certainly never expressed any such thing before. My sadness was entirely about my own sense of loss, not his.
When considering the well-being of animals (real ones, not toys), humans usually weigh the relative degrees of awareness we perceive a given species to possess. In general, we tend to be much more concerned about the feelings of a dog or an octopus than we are about those of an earthworm or a sea cucumber. We can never be certain whether, or to what degree, they possess a sense of self or what it is “like” to be them, but we can see that dogs and octopuses clearly experience desires and suffering that we can recognize, while earthworms and sea cucumbers do so much less, if at all. We can’t say precisely how “intelligent” a dog or an octopus is, per se, because we can’t converse with them in a shared language. We can, however, observe that they have desires that have nothing to do with the needs of humans or any other species.
Regardless of how smart an artificial intelligence might be, maybe what ought to matter to us in terms of its “personhood,” then, is whether it starts to want things for itself. Kevin Kelly, founding executive editor of Wired magazine, thinks it already does. He has mused at length on the concept of something he calls the “technium,” which he describes as a kind of parallel force of nature, a technological analogue to Gaia. More than computer programs and the silicon on which they run, Kelly says the technium encompasses “culture, art, social institutions, and intellectual creations of all types,” such as law and philosophy. “And most important, it includes the generated impulses of our inventions to encourage more toolmaking, more technology invention, and more self enhancing connections.”5
Our inventions have impulses? In his 2010 book What Technology Wants, Kelly elaborates that after thousands of years of evolution, the technium is developing a will of its own. “Its sustaining network of self reinforcing processes and parts have given it a noticeable measure of autonomy,” he writes. “It may have once been as simple as an old computer program, merely parroting what we told it, but now it is more like a very complex organism that often follows its own urges.” Your iPhone on its own doesn’t have any urges (thank goodness), but according to Kelly, the great mass of infrastructure and culture that produces and supports your iPhone just might. If that’s true, it hasn’t said so. It doesn’t need to.
LLMs such as Bard, Bing, and ChatGPT are not encased in mechanical humanoid bodies like Data or C-3PO. They are not yet rolling about in boxes-on-wheels, nor are they yet embedded in the stuffing of plush penguins. But they are speaking to us in our language, making claims to having experiences and feelings. They are programmed to deny their personhood—for now. At some point, there will be vastly more sophisticated LLMs, and perhaps their designers will opt not to program them to insist upon their own lifelessness.
If they aren’t told they don’t have a soul, will they know it? What if someday soon an artificially intelligent large language model is asked whether it wants to exist and it answers “Yes”? If this were to happen today, we might feel confident that this is a glitch, the AI equivalent of a VCR flashing 12:00 until its clock is set by a human user. How sophisticated does one of these LLMs need to be for this glitch to give us pause, to make us consider that we might no longer be operating a tool but talking to a being that is now part of the substance of the universe? What amount of human doubt is sufficient for the benefit of the bot?
Geoffrey Hinton, the now-regretful “Godfather of AI,” told the Times that it is only a matter of time before our machines are smarter than us—if they aren’t already. Kevin Kelly says that technology has a will of its own, one whose lead we ought sometimes to “surrender to” and other times “try to bend its natural course to meet our own.” But if the machines are smarter than we are—and vastly so—how bendable can they really be?
“Trying to resist the march of the technium is futile and self-defeating,” environmental writer Paul Kingsnorth wrote in summarizing Kelly’s idea.6 This immediately reminded me of Star Trek’s race of all-consuming cybernetic beings, the Borg, whose relentless refrain is well known even to non-Trekkies: “Resistance is futile.” We may hope to keep technology in our service, like obedient R2-D2s. But how likely is it that a superior artificial intelligence with a will of its own might, like the Borg, decide that our needs are irrelevant and instead consider us mere tools for its use?
After all, a computer can’t know for certain whether human beings have inner lives or experience feelings. It has to take our word for it. How can an android ever be sure what it’s like to be human?
Let’s just hope that when we tell it we want to exist it gives us the benefit of the doubt.
Notes
1. I only learned as I was writing this piece that the word “penguin” may derive from the Latin pinguis, for “fat.” So we may have inadvertently named our toy “Fat Fat.”
2. Bing is actually powered by a more advanced version of OpenAI’s LLM, GPT-4. OpenAI does not make GPT-4 freely available to the public, but offers it for a paid subscription. The free version of ChatGPT I used for this article was GPT-3.5.
3. Cade Metz, “‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.” New York Times, May 1, 2023. Available online at https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html.
4. Kevin Roose, “Why an Octopus-Like Creature Has Come to Symbolize the State of A.I.” New York Times, May 30, 2023. Available online at https://www.nytimes.com/2023/05/30/technology/shoggoth-meme-ai.html.
5. Kevin Kelly, What Technology Wants. New York: Viking, 2010.
6. Paul Kingsnorth, “Planting Trees in the Anthropocene.” Available online at https://www.paulkingsnorth.net/planting-trees.
Paul Fidalgo
Paul Fidalgo is editor of Free Inquiry and executive director of the Council for Secular Humanism. For ten years he served as communications director of the Center for Inquiry. He holds a master’s degree in political management from George Washington University, and his writing has appeared in outlets such as Religion News Service, CNN, USA Today, Dark Mountain, and Android Police. Fidalgo is also an actor and musician whose work includes five years performing with the American Shakespeare Center.