For years, people have talked about the danger of an AI apocalypse. It’s been a staple of science fiction for decades. More recently, big names like Elon Musk have been talking about it as a live possibility, something we should be actively working to avoid. With the advent of AIs like ChatGPT and DALL-E, the calls have gotten more intense. But does anyone actually believe in the AI apocalypse, or is it just a clickbaity way to get attention?
Setting aside the sci-fi scenarios (turning the whole human species into living batteries isn’t going to happen), there are several possibilities that get talked about by serious people. One of the most famous is the paperclip apocalypse: an AI is tasked with making as many paperclips as possible. To get the job done, the AI commandeers anything that could become paperclip material, including farm equipment and the like. Humans wind up dying out because all of the resources we need to survive get diverted into paperclip production.
Nir Eisikovitz (who, like me, doesn’t believe the apocalyptic hype) sums up the basic fear well: “AI is fast becoming an alien intelligence, good at accomplishing goals but dangerous because it won’t necessarily align with the moral values of its creators. And, in its most extreme version, this argument morphs into explicit anxieties about AIs enslaving or destroying the human race.” It would certainly suck to get enslaved or wiped out by machines, but how likely is it, really?
In my view, not very likely at all. Certainly not likely enough to worry about. First, who is going to build an AI without an off switch? Will this AI have some sort of perpetual energy source built in? Second, who is the idiot who programs anything to consume all available resources? There just isn’t a market for that many paperclips. There isn’t a market for that much of anything. Third, how is the AI, which presumably runs a paperclip factory, supposed to get access to all of the world’s resources? Short of creating a paperclip army that marches out to collect them, it just isn’t possible.
People are certainly capable of being incredibly stupid, careless, and short-sighted. But an AI apocalypse needs much more than run-of-the-mill stupidity, carelessness, and short-sightedness. If a perfect storm of those three things were at all likely, we would have already killed ourselves in a nuclear war.
I suppose there is a non-zero chance of the AI apocalypse happening. But there is also a non-zero chance of all the world’s farm animals rising up and destroying their human oppressors. We can’t worry about everything that has a non-zero chance of happening. So, does anyone really worry about the AI apocalypse? There probably are some. People believe all sorts of impossible things. I mean, the odds of someone believing it are certainly higher than the odds of it actually happening. But can we please leave it to the crowd that believes in a flat earth and things like that? It fits in better there.