Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence"

Bill Gates has jumped into the fray over concerns about a coming machine superintelligence. He did so in a recent Q&A on Reddit, reports TechRadar:

Gates was asked: “How much of an existential threat do you think machine superintelligence will be?”

He admitted: “I am in the camp that is concerned about super intelligence.” He took a somewhat more measured stance than Hawking, but he still sees AI as a real concern.

“First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern.

“I agree with Elon Musk and some others on this and don’t understand why some people are not concerned,” he wrote in the thread.

Superintelligence is the currently trending idea of soon-to-come super-smart robots, the latest iteration of the decades-old dream of Artificial Intelligence (AI): machines that think like humans. Only with superintelligence, so the speculation goes, the machines will be smarter than humans, and thus pose an “existential” threat to us.

The term was popularized by Nick Bostrom, Oxford professor and director of the Future of Humanity Institute. Bostrom has literally written the book on superintelligence: his 2014 bestseller Superintelligence: Paths, Dangers, Strategies. In the book, he seeks to displace the old, science-fiction-sounding prediction of a “Singularity,” a crossover point where society becomes unrecognizable due to superior machine intelligence, replacing it with the more Oxford-sounding “intelligence explosion” (which means the same thing).

Bostrom, like the “Singularitarians” Ray Kurzweil and Vernor Vinge before him, sees the fate of humanity as resting in the hands of smarter-than-human machines. What will happen? How will they treat us?

I’m not surprised by the latest hype, the change of names for the same old sci-fi worries, and the rest of the almost manic superintelligence discussion today. As I’ve said before, it reminds me of the global warming debate circa 2005. AI is in a bubble, with worries about the future of smart machines all the rage. I am surprised, though, that many very prominent business, technology, and science personalities are so concerned.

They speak authoritatively about the imminent rise of smart machines. Yet, as I’ve argued repeatedly, the superintelligence claim is unsubstantiated as a scientific inference from actual Artificial Intelligence research.

It’s telling that luminaries like Gates, Elon Musk, and Stephen Hawking, who are weighing in now with such vehemence, are not scientists working on issues in Artificial Intelligence. Elon Musk is an entrepreneur, Stephen Hawking an astrophysicist, and Gates a former software engineer whose main technical contribution was to operating systems back in the 1970s and ’80s.

Since then he’s been a businessman, and a very good one, but business has little to reveal about the plausibility of reproducing minds through computer programs. Even Bostrom is a philosopher, not an AI researcher. Apparently, to foresee the imminent rise of the machines, you need only be rather loosely connected to the actual science.

Image by World Economic Forum [CC BY-SA 2.0], via Wikimedia Commons.