Scientists fear turning over launch systems for nuclear missiles to artificial intelligence will lead to a real-life “Terminator” event, wiping out all humans
12/27/2019 // JD Heyes

One of the most popular movie franchises of our time is the “Terminator” series, launched in 1984 and featuring seven-time Mr. Olympia bodybuilder Arnold Schwarzenegger as a futuristic humanoid killing machine.

As noted by Great Power War, the backstory to the film is that the creation of the nearly-invincible cyborg Terminators stemmed from a “SkyNet” computer system that controlled U.S. nuclear weapons and “got smart,” eventually seeing all humans as its enemy.

So, in one fell swoop, the system launched its missiles at pre-programmed targets, which, of course, invited a second-strike counter-launch and created a nuclear holocaust that nearly destroyed all of humankind.

While the original “Terminator” film never explicitly identified the ‘smart’ SkyNet computer system as artificial intelligence, years later, after AI became more widely understood, it was clear that’s the kind of system the fictional SkyNet was meant to be.

The “machine-learning” aspect of AI is how SkyNet “got smart” one day and launched the nuclear payloads it controlled.

But the Terminator films are just movies, right? Nothing like that could ever really happen…right?

In fact, as the Jerusalem Post notes, it nearly did happen — back in 1983 (one year before the original “Terminator” hit movie theaters):

An example that the article gives of human judgment’s importance was a 1983 incident when a Soviet officer named Stanislav Petrov disregarded automated audible and visual warnings that US nuclear missiles were inbound.


The systems were wrong, and had Petrov trusted the technology over his own instincts, the world might have gone to nuclear war over a technological malfunction.

According to a group of scientists, as AI technology advances in leaps and bounds, it’s possible that someday great powers like the U.S., Russia and China could turn over their launch capabilities to an AI-powered “machine learning” system that could accidentally start a nuclear war by identifying a false “threat.”

Some advantages, but more disadvantages

The UK-based Daily Mail reported this week that top nuclear scientists from Cornell are warning in a newly published paper that AI technology could turn on humans the way SkyNet did in the movies. (Related: Human-like Terminator rescue droids to become our “fourth emergency service” in 50 years.)

So-called “automation bias,” the human tendency to defer to a machine’s judgment, would allow machines to “slip out of control.”

Who’s closer to turning over their nuclear launch sequences to AI-powered computers? The scientists identified Russia and China, both of which are working feverishly to develop the technology and both of which could use it to offset America’s technological advantages, despite the risks.

Global military powers could be convinced that AI is safer than human judgment, though the technology could bring “insidious risks that do not manifest until an accident occurs.”

Moscow’s military is already developing an autonomous nuclear torpedo, codenamed “Status-6” or “Poseidon.” And the Cornell scientists believe that weapon could be the beginning of a trend.

“While so much about it is uncertain, Russia's willingness to explore the notion of a long-duration, underwater, uninhabited nuclear delivery vehicle in Status-6 shows that fear of conventional or nuclear inferiority could create some incentives to pursue greater autonomy,” the report’s primary author, Michael Horowitz, told the Bulletin of Atomic Scientists.

The Cornell report does concede that there may be some advantages to AI.

“Some forms of automation could increase reliability and surety in nuclear operations, strengthening stability,” it says. And the tech can help decision-makers by gathering comprehensive data in real-time. 

However, “other forms could increase accident risk or create perverse incentives, undermining stability,” notes the report. “When modernizing nuclear arsenals, policymakers should aim to use automation to decrease the risk of accidents and false alarms and increase human control over nuclear operations.”

AI isn’t ready for prime time just yet, but the day is coming.

Sources include:

DailyMail.co.uk

GreatPowerWar.com

NaturalNews.com


