The survey by Stanford's Institute for Human-Centered Artificial Intelligence (HAI) found that 73 percent of natural language processing (NLP) researchers believe AI could lead to "revolutionary societal change." However, 36 percent of the surveyed researchers believe that AI could lead to the end of the world. The survey was included in HAI's annual AI Index Report, which serves as the AI industry's "state of the union."
In the survey, HAI asked 327 participants from different universities whether they agreed or disagreed with this statement: "It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war." The survey was conducted between May and June 2022, months before the release of ChatGPT.
"These systems demonstrate capabilities in question answering, and the generation of text, image and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new," wrote the AI Index Report's authors.
"However, they are prone to hallucination, routinely biased and can be tricked into serving nefarious aims – highlighting the complicated ethical challenges associated with their deployment."
The HAI report also cited a poll by market research firm Ipsos showing that Americans are warier of AI than people in many other countries. The survey found that only 35 percent of Americans agreed with the statement "Products and services using AI have more benefits than drawbacks."
In contrast, respondents from other countries agreed with the statement at far higher rates: 78 percent in China, 76 percent in Saudi Arabia and 71 percent in India.
The HAI report also noted that the number of controversies connected to AI grew 26-fold over the past decade, citing data from the AI, Algorithmic and Automation Incidents and Controversies (AIAAIC) initiative.
"According to the AIAAIC database... the number of AI incidents and controversies has increased 26 times since 2012. Some notable incidents in 2022 included a deep fake video of Ukrainian President Volodymyr Zelensky surrendering and U.S. prisons using call-monitoring technology on their inmates. This growth is evidence of both greater use of AI technologies and awareness of misuse possibilities."
Moreover, a report by the U.K.-based nonprofit Center for Countering Digital Hate (CCDH) found that chatbots could be tapped to push misinformation and lies. In fact, Google's new AI chatbot Bard spewed falsehoods in 78 of 100 cases. The CCDH said Bard even wrote a 227-word monologue denying the Holocaust, alleging that a photograph of a starving girl in a concentration camp was actually of an actress who was paid to pretend to be starving. (Related: Google suspends engineer for exposing "sentient" AI chatbot.)
Back in March, 1,300 people – including Twitter CEO Elon Musk and Apple co-founder Steve Wozniak – signed an open letter calling for a six-month pause on the training of AI systems more powerful than GPT-4. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter stated.
Despite these concerns, the HAI report said only 41 percent of NLP researchers think that AI should be regulated.
Watch Yuval Noah Harari of the World Economic Forum warn how AI could take power away from humanity.
This video is from the Thrivetime Show channel on Brighteon.com.
Sources include:
AIIndex.Stanford.edu [PDF]