The AI Doomers’ Playbook

from the don't-be-a-doomer dept

AI Doomerism is becoming mainstream thanks to mass media, which drives our discussion about Generative AI from bad to worse, or from slightly insane to batshit crazy. Instead of out-of-control AI, we have out-of-control panic.

When a British tabloid headline screams, “Attack of the psycho chatbot,” it’s funny. When it’s followed by another front-page headline, “Psycho killer chatbots are befuddled by Wordle,” it’s even funnier. If this type of coverage stayed in the tabloids, which are known for sensationalism, that would be fine.

But recently, prestige news outlets have decided to promote the same level of populist scaremongering: The New York Times published “If we don’t master AI, it will master us” (by Harari, Harris & Raskin), and TIME magazine published “Be willing to destroy a rogue datacenter by airstrike” (by Yudkowsky).

In just a few days, we went from “governments should force a 6-month pause” (the petition from the Future of Life Institute) to “wait, it’s not enough, so data centers should be bombed.” Sadly, this is the narrative that gets media attention and shapes our already hyperbolic AI discourse.

To understand the rise of AI Doomerism, consider some of the influential figures responsible for mainstreaming doomsday scenarios. This is not a full list of AI doomers, just the ones who recently shaped the AI panic cycle (so I‘m focusing on them).

AI Panic Marketing. Exhibit A: Sam Altman.

Sam Altman has a habit of urging us to be scared. “Although current-generation AI tools aren’t very scary, I think we are potentially not that far away from potentially scary ones,” he tweeted. “If you’re making AI, it is potentially very good, potentially very terrible,” he told the WSJ. When he shared the bad-case scenario of AI with Connie Loizos, it was “lights out for all of us.”

In an interview with Kara Swisher, Altman expressed how he is “super-nervous” about authoritarians using this technology. He elaborated in an ABC News interview: “A thing that I do worry about is … we’re not going to be the only creator of this technology. There will be other people who don’t put some of the safety limits that we put on it. I’m particularly worried that these models could be used for large-scale disinformation.” These models could also “be used for offensive cyberattacks.” So, “people should be happy that we are a little bit scared of this.” He repeated this message in his following interview with Lex Fridman: “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid.”

Since he shared this story back in 2016, it shouldn’t come as a surprise: “My problem is that when my friends get drunk, they talk about the ways the world will END.” One of the “most popular scenarios would be A.I. that attacks us.” “I try not to think about it too much,” Altman continued. “But I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”

(Wouldn’t it be easier to just cut back on the drinking and substance abuse?)

Altman’s recent post “Planning for AGI and beyond” is as bombastic as it gets: “Successfully transitioning to a world with superintelligence is perhaps the most important – and hopeful, and scary – project in human history.”

It is at this point that you might ask yourself, “Why would someone frame his company like that?” Well, that’s a good question. The answer is that making OpenAI’s products “the most important and scary – in human history” is part of its marketing strategy. “The paranoia is the marketing.”

“AI doomsaying is absolutely everywhere right now,” Brian Merchant wrote in the LA Times. “Which is exactly the way that OpenAI, the company that stands to benefit the most from everyone believing its product has the power to remake – or unmake – the world, wants it.” Merchant explained Altman’s science fiction-infused marketing frenzy: “Scaring off customers isn’t a concern when what you’re selling is the fearsome power that your service promises.”

During the Techlash days in 2019, which focused on social media, Joseph Bernstein explained how the alarm over disinformation (e.g., “Cambridge Analytica was responsible for Brexit and Trump’s 2016 election”) actually “supports Facebook’s sales pitch”:

“What could be more appealing to an advertiser than a machine that can persuade anyone of anything?”

This can be applied here: The alarm over AI’s magic power (e.g., “replacing humans”) actually “supports OpenAI’s sales pitch”:

“What could be more appealing to future AI employees and investors than a machine that can become a superintelligence?”

AI Panic as a Business. Exhibit A & B: Tristan Harris & Eliezer Yudkowsky.

Altman is at least using apocalyptic AI marketing for actual OpenAI products. The worst kind of doomers are those whose AI panic is their product, their main career, and their source of income. Prime examples are the Effective Altruism institutes that claim to be the superior few who can save us from a hypothetical AGI apocalypse.

In March, Tristan Harris, Co-Founder of the Center for Humane Technology, invited leaders to a lecture on how AI could wipe out humanity. To begin his doomsday presentation, he stated: “What nukes are to the physical world … AI is to everything else.”

Steven Levy summarized that lecture at WIRED, saying, “We need to be thoughtful as we roll out AI. But hard to think clearly if it’s presented as the apocalypse.” Apparently, having completed the “Social Dilemma,” Tristan Harris is now working on the AI Dilemma. Oh boy. We can guess how it’s going to look (the “nobody criticized bicycles” guy will make a Frankenstein’s monster/Pandora’s box “documentary”).

In the “Social Dilemma,” he promoted the idea that “Two billion people will have thoughts that they didn’t intend to have” because of the designers’ decisions. But, as Lee Vinsel pointed out, Harris didn’t provide any evidence that social media designers actually CAN purposely force us to have unwanted thoughts.

Similarly, there’s no need for evidence now that AI is worse than nuclear weapons; simply thinking about this analogy makes it true (in Harris’ mind, at least). Did a social media designer force him to have this unwanted thought? (Just wondering).

To further escalate the AI panic, Tristan Harris published an op-ed in The New York Times with Yuval Noah Harari and Aza Raskin. Among their overdramatic claims: “We have summoned an alien intelligence,” “A.I. could rapidly eat the whole human culture,” and AI’s “godlike powers” will “master us.”

Another statement in this piece was, “Social media was the first contact between A.I. and humanity, and humanity lost.” I found it funny as it came from two men with hundreds of thousands of followers (@harari_yuval 540.4k, @tristanharris 192.6k), who use their social media megaphone … for fear-mongering. The irony is lost on them.

“This is what happens when you bring together two of the worst thinkers on new technologies,” added Lee Vinsel. “Among other shared tendencies, both bloviate free of empirical inquiry.”

This is where we should be jealous of AI doomers. Having no evidence and no nuance is extremely convenient (when your only goal is to attack an emerging technology).

Then came the famous “Open Letter.” This petition from the Future of Life Institute lacked a clear argument or a trade-off analysis. There were only rhetorical questions, like, should we develop imaginary “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us?” They provided no evidence to support the claim that advanced LLMs pose an unprecedented existential risk. There were a lot of highly speculative assumptions. Yet, they demanded an immediate 6-month pause on training AI systems and argued that “If such a pause cannot be enacted quickly, governments should institute a moratorium.”

Please keep in mind that: (1) A $10 million donation from Elon Musk launched the Future of Life Institute in 2015. Out of its total budget of 4 million euros for 2021, the Musk Foundation contributed 3.5 million euros (the biggest donor by far). (2) Musk once said that “With artificial intelligence, we are summoning the demon.” (3) Due to this, the institute’s mission is to lobby against extinction, misaligned AI, and killer robots.

“The authors of the letter believe they are superior. Therefore, they have the right to call a stop, due to the fear that less intelligent humans will be badly influenced by AI,” responded Keith Teare (CEO of SignalRank Corporation). “They are taking a paternalistic view of the entire human race, saying, ‘You can’t trust these people with this AI.’ It’s an elitist point of view.”

“It’s worth noting the letter overlooked that much of this work is already happening,” added Spencer Ante (Meta Foresight). “Leading providers of AI are taking AI safety and responsibility very seriously, developing risk-mitigation tools, best practices for responsible use, monitoring platforms for misuse, and learning from human feedback.”

Next, because he thought the open letter didn’t go far enough, Eliezer Yudkowsky took “PhobAI” too far. First, Yudkowsky asked us all to be afraid of made-up risks and an apocalyptic fantasy he has about “superhuman intelligence” “killing literally everyone” (or “kill everyone in the U.S. and in China and on Earth”). Then, he suggested that “preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange.” With this explicit advocacy of violent solutions to AI, we have officially reached the height of hysteria.

“Rhetoric from AI doomers is not just ridiculous. It’s dangerous and unethical,” responded Yann LeCun (Chief AI Scientist, Meta). “AI doomism is quickly becoming indistinguishable from an apocalyptic religion. Complete with prophecies of imminent fire and brimstone caused by an omnipotent entity that doesn’t actually exist.”

“You stand a far greater chance of dying from lightning strikes, collisions with deer, peanut allergies, bee stings & ignition or melting of nightwear – than you do from AI,” Michael Shermer wrote to Yudkowsky. “Quit stoking irrational fears.”

The problem is that “irrational fears” sell. They are beneficial to the ones who spread them.

How to Spot an AI Doomer?

On April 2nd, Gary Marcus asked: “Confused about the terminology. If I doubt that robots will take over the world, but I am very concerned that a massive glut of authoritative-seeming misinformation will undermine democracy, do I count as a ‘doomer’?”

One of the answers was: “You’re a doomer as long as you bypass participating in the conversation and instead appeal to populist fearmongering and lobbying reactionary, fearful politicians with clickbait.”

Considering all of the above, I decided to define “AI doomer” and provide some criteria:

[Image: “How to Spot an AI Doomer?” – the seven criteria]

Then, Adam Thierer added another characteristic:

Doomers have a general preference for very amorphous, top-down Precautionary Principle-based solutions, but they (1) rarely discuss how (or if) those schemes would actually work in practice, and (2) almost never discuss the trade-offs/costs their extreme approaches would impose on society/innovation.

To answer Gary Marcus’ question: I do not think he qualifies as a doomer. You need to meet all the criteria (he does not). Meanwhile, Tristan Harris and Eliezer Yudkowsky meet all seven.

Are they ever going to stop this “Panic-as-a-Business”? If the apocalyptic catastrophe doesn’t occur, will the AI doomers ever admit they were wrong? I believe the answer is “No.”

Doomsday cultists don’t question their own predictions. But you should.

Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication.

Filed Under: ai, ai dilemma, ai doom, ailash, eliezer yudkowsky, extinction, sam altman, social dilemma, techlash, tristan harris