Former OpenAI employees release “A Right to Warn” document warning about advanced AI risks
06/17/2024 // Ethan Huff

A cohort of artificial intelligence (AI) experts and developers who used to work for OpenAI and Google are warning that the technology they helped spearhead is a serious threat to humanity.

Though they still say they "believe in the potential of AI technology to deliver unprecedented benefits to humanity," the authors of an article called "A Right to Warn about Advanced Artificial Intelligence" say there are "serious risks" as well.

"These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," they write.

"AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts," they continue, citing all their work with hyperlinks throughout the piece.

With "sufficient guidance from the scientific community, policymakers, and the public, these experts are hopeful that AI can be reined in for the benefit of humanity. At the same time, such guidance is unlikely to ever come based on the way the system works.

"AI companies have strong financial incentives to avoid effective oversight," the authors note, "and we do not believe bespoke structures of corporate governance are sufficient to change this."

(Related: Did you catch The New York Times' lawsuit against Microsoft and OpenAI over AI copyright infringement?)

AI companies know far more about the risks than the general public

Though they will never say it publicly, AI companies like OpenAI and Google, which runs the AI research lab DeepMind, know full well the serious risks involved with the technology. It is much too profitable for them to ever stop, though.


As it currently stands, there are very few legal requirements for AI companies to disclose what they know to the government about the technologies they are developing. Even worse are the public disclosure requirements, which pretty much do not exist.

"We do not think they can all be relied upon to share it voluntarily," the authors warn.

"So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues."

As for existing whistleblower protections, these, too, are insufficient because they focus on illegal activity – and there is nothing illegal, for better or for worse, about developing advanced AI systems capable of controlling the world.

Many of the known risks of AI technology are not yet regulated – and possibly never will be in any meaningful way. Because disclosures about unregulated risks fall outside whistleblower protections, AI companies face little to stop them from retaliating against anyone who comes forward to try to tell the ugly truth about AI.

Even so, the authors and signatories of the "Right to Warn" article are calling on all AI companies everywhere to voluntarily agree to a series of commitments and principles, including a promise to never enter into or enforce any agreement that prohibits "disparagement" or criticism of their AI work.

The authors also want AI companies to:

- Allow employees, both current and former, to anonymously raise their concerns about AI to company heads

- Support a culture of open criticism of AI

- Not retaliate against current and former employees who publicly share what they know about AI

The article's signatories include:

- Jacob Hilton, formerly OpenAI

- Daniel Kokotajlo, formerly OpenAI

- Ramana Kumar, formerly Google DeepMind

- Neel Nanda, currently Google DeepMind, formerly Anthropic

- William Saunders, formerly OpenAI

- Carroll Wainwright, formerly OpenAI

- Daniel Ziegler, formerly OpenAI

- Anonymous, currently OpenAI

There were also six anonymous signatories, all of whom used to work for OpenAI.

Yoshua Bengio, Geoffrey Hinton and Stuart Russell also endorsed the "Right to Warn" call for action.

No matter how you slice it, AI is a recipe for human disaster. Learn more at FutureTech.news.

Sources for this article include:

RightToWarn.ai

NaturalNews.com


