Though they still say they "believe in the potential of AI technology to deliver unprecedented benefits to humanity," the authors of an article called "A Right to Warn about Advanced Artificial Intelligence" say there are "serious risks" as well.
"These risks range from the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction," they write.
"AI companies themselves have acknowledged these risks, as have governments across the world and other AI experts," they continue, citing all their work with hyperlinks throughout the piece.
With "sufficient guidance from the scientific community, policymakers, and the public," these experts are hopeful that AI can be reined in for the benefit of humanity. At the same time, such guidance is unlikely ever to come given the way the system works.
"AI companies have strong financial incentives to avoid effective oversight," the authors note, "and we do not believe bespoke structures of corporate governance are sufficient to change this."
(Related: Did you catch The New York Times' lawsuit against Microsoft and OpenAI over AI copyright infringement?)
Though they will never say so publicly, AI companies like OpenAI and Google, which operates the AI research lab DeepMind, know full well the serious risks involved with the technology. It is far too profitable for them to ever stop, though.
As it currently stands, there are very few legal requirements for AI companies to disclose to the government what they know about the technologies they are developing. Public disclosure requirements are even weaker; they essentially do not exist.
"We do not think they can all be relied upon to share it voluntarily," the authors warn.
"So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues."
As for existing whistleblower protections, these, too, are insufficient because they center around illegal activity – and there is nothing illegal, for better or for worse, about developing advanced AI systems capable of controlling the world.
Many of the known risks of AI technology are not yet regulated – and may never be in any meaningful way. With whistleblower protections tied to illegal activity, there is little to deter AI companies from retaliating against anyone who comes forward to tell the ugly truth about AI.
Even so, the authors and signatories of the "Right to Warn" article are calling on all AI companies everywhere to voluntarily agree to a series of commitments and principles, including a promise to never enter into or enforce any agreement that prohibits "disparagement" or criticism of their AI work.
The authors also want AI companies to:
- Allow employees, both current and former, to anonymously raise their concerns about AI to company heads
- Support a culture of open criticism of AI
- Not retaliate against current and former employees who publicly share what they know about AI
The article's signatories include:
- Jacob Hilton, formerly OpenAI
- Daniel Kokotajlo, formerly OpenAI
- Ramana Kumar, formerly Google DeepMind
- Neel Nanda, currently Google DeepMind, formerly Anthropic
- William Saunders, formerly OpenAI
- Carroll Wainwright, formerly OpenAI
- Daniel Ziegler, formerly OpenAI
- Anonymous, currently OpenAI
There were also six anonymous signatories, all of whom used to work for OpenAI.
Yoshua Bengio, Geoffrey Hinton and Stuart Russell also endorsed the "Right to Warn" call for action.
No matter how you slice it, AI is a recipe for human disaster. Learn more at FutureTech.news.