As noted by Great Power War, the backstory to the film is that the creation of the nearly-invincible cyborg Terminators stemmed from a “SkyNet” computer system that controlled U.S. nuclear weapons and “got smart,” eventually seeing all humans as its enemy.
So, in one fell swoop, the system launched its missiles at pre-programmed targets, which, of course, invited a second-strike counter-launch and created a nuclear holocaust that nearly destroyed all of humankind.
While the Terminator films never explicitly identified the "smart" SkyNet computer system as artificial intelligence, years later, after AI became a mainstream concept, it was understood that this is the kind of system the fictional SkyNet was meant to be.
The “machine-learning” aspect of AI is how SkyNet “got smart” one day and launched the nuclear payloads it controlled.
But the Terminator series are just movies, right? Nothing like that could ever really happen…right?
In fact, as the Jerusalem Post notes, it nearly did happen — back in 1983 (one year before the original “Terminator” hit movie theaters):
An example the article gives of the importance of human judgment is a 1983 incident in which a Soviet officer named Stanislav Petrov disregarded automated audible and visual warnings that U.S. nuclear missiles were inbound.
The systems were wrong, and had Petrov trusted the technology over his own instincts, the world might have gone to nuclear war over a technological malfunction.
According to a group of scientists, as AI technology advances in leaps and bounds, it’s possible that someday great powers like the U.S., Russia and China could turn over their launch capabilities to an AI-powered “machine learning” system that could accidentally start a nuclear war by identifying a false “threat.”
The UK-based Daily Mail reported this week that top nuclear scientists from Cornell are warning in a newly published paper that AI technology could turn on humans the way SkyNet did in the movies.
So-called “automation bias” would allow machines to “slip out of control.”
Who’s closer to turning over their nuclear launch sequences to AI-powered computers? The scientists identified Russia and China, both of which are working feverishly to develop the technology and both of which could use it to offset America’s technological advantages, despite the risks.
Global military powers could be convinced that AI is safer than human judgment, though the technology could bring "insidious risks that do not manifest until an accident occurs."
Moscow’s military is already developing an autonomous nuclear torpedo, codenamed “Status-6” or “Poseidon.” And the Cornell scientists believe that weapon could be the beginning of a trend.
"While so much about it is uncertain, Russia's willingness to explore the notion of a long-duration, underwater, uninhabited nuclear delivery vehicle in Status-6 shows that fear of conventional or nuclear inferiority could create some incentives to pursue greater autonomy," the report's primary author, Michael Horowitz, told the Bulletin of the Atomic Scientists.
The Cornell report does concede that there may be some advantages to AI.
"Some forms of automation could increase reliability and surety in nuclear operations, strengthening stability," it says. And the tech can help decision-makers by gathering comprehensive data in real time.
However, “other forms could increase accident risk or create perverse incentives, undermining stability,” notes the report. “When modernizing nuclear arsenals, policymakers should aim to use automation to decrease the risk of accidents and false alarms and increase human control over nuclear operations.”
AI isn’t ready for prime time just yet, but the day is coming.
Sources include: