Researchers: AI decisions could cause “nuclear-level” CATASTROPHE
04/18/2023 // Belle Carter

A survey by Stanford University conducted among natural language processing (NLP) researchers found that artificial intelligence (AI) could make catastrophic decisions.

The survey by Stanford's Institute for Human-Centered Artificial Intelligence (HAI) found that 73 percent of NLP researchers believe AI could lead to "revolutionary societal change." However, 36 percent of the surveyed researchers believe that AI could lead to the end of the world. The survey was included as part of HAI's yearly AI Index Report, which serves as the AI industry's "state of the union."

In the survey, HAI asked 327 participants from different universities whether they agree or disagree with this statement: "It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war." The survey was run between May and June 2022, months before the release of ChatGPT.

"These systems demonstrate capabilities in question answering, and the generation of text, image and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," wrote the AI Index Report's authors.

"However, they are prone to hallucination, routinely biased and can be tricked into serving nefarious aims – highlighting the complicated ethical challenges associated with their deployment."

Concerns surrounding AI are on the rise

The HAI report also cited a poll by the market research company Ipsos that showed Americans are warier of AI than respondents of other nationalities. The survey found that only 35 percent of Americans agreed with the statement "Products and services using AI had more benefits than drawbacks."


In contrast, respondents in several other countries agreed with the statement at far higher rates – 78 percent in China, 76 percent in Saudi Arabia and 71 percent in India.

The HAI report also noted that the number of controversies connected to AI surged 26 times over the past decade, citing data from the AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) initiative.

"According to the AIAAIC database... the number of AI incidents and controversies has increased 26 times since 2012. Some notable incidents in 2022 included a deep fake video of Ukrainian President Volodymyr Zelensky surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities."

Moreover, a report by the U.K.-based nonprofit Center for Countering Digital Hate (CCDH) found that chatbots could be tapped to push misinformation and lies. In fact, Google's new AI chatbot Bard spewed falsehoods in 78 of 100 test cases. The CCDH said Bard even wrote a 227-word monologue denying the Holocaust, alleging that a photograph of a starving girl in a concentration camp was actually of an actress who was paid to pretend to be starving. (Related: Google suspends engineer for exposing "sentient" AI chatbot.)

Back in March, 1,300 people – including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak – signed an open letter calling for a six-month suspension on training AI systems beyond the level of the GPT-4 chatbot. "Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks will be manageable," the letter stated.

Despite these concerns, the HAI report said only 41 percent of NLP researchers think that AI should be regulated.

Watch Yuval Noah Harari of the World Economic Forum warn how AI can potentially take power away from humanity.

This video is from the Thrivetime Show channel on Brighteon.com.

More related stories:

AI startup under fire after trolls used its voice cloning tool to make celebrities say "offensive things."

Former Google engineer predicts human IMMORTALITY by 2030 – but at what cost?

DEAD RISING: AI-powered ChatGPT to connect the living and the dead.

AI-powered bot successfully requested refund from Wells Fargo using FAKE voice.

Sources include:

Futurism.com

AIIndex.Stanford.edu [PDF]

FoxNews.com

CampaignAsia.com

Brighteon.com


