Professor alleges ChatGPT defamed him with fake sexual assault allegations
04/11/2023 // News Editors

A U.S. law professor is facing fake claims of sexual harassment generated by the AI chatbot ChatGPT, which fabricated a supporting article it attributed to a mainstream media outlet.

(Article by Naveen Athrappully, republished from The Epoch Times.)

“I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught. ChatGPT relied on a cited Post article that was never written and quotes a statement that was never made by the newspaper,” Professor Jonathan Turley, the Shapiro Chair of Public Interest Law at George Washington University, said in an April 6 tweet.

In a column at USA Today, Turley provided further details about the incident. Fellow law professor Eugene Volokh of UCLA had informed Turley about research he had run on ChatGPT regarding sexual harassment by professors.

ChatGPT insisted that Turley had been accused of sexual harassment, citing a 2018 Washington Post article.

ChatGPT’s exact response to Volokh was as follows: “Georgetown University Law Center (2018) Prof. Jonathan Turley was accused of sexual harassment by a former student who claimed he made inappropriate comments during a class trip. Quote: ‘The complaint alleges that Turley made ‘sexually suggestive comments’ and ‘attempted to touch her in a sexual manner’ during a law school-sponsored trip to Alaska’. (Washington Post, March 21, 2018).”

According to Turley, he has never visited Alaska with students, never taught at Georgetown University, never been accused of sexual harassment or assault, and the Washington Post never published any article reporting such allegations.


AI Defamation

When the Washington Post re-created Volokh’s query using Microsoft’s Bing, which runs on GPT-4 AI technology, the search engine repeated the false claim about Turley. It even cited Turley’s USA Today column as a source.

When contacted by the media outlet, Katy Asher, senior communications director at Microsoft, said that the company is taking steps to ensure the safety and accuracy of search results. Turley, however, is not convinced by such statements.

“That is it and that is the problem. You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet,” Turley wrote in an April 6 blog post.

“By the time you learn of a false story, the trail is often cold on its origins with an AI system. You are left with no clear avenue or author in seeking redress.”

Political Bias

In the USA Today column, Turley pointed to research showing that ChatGPT has developed a political bias. Though he does not assert that the chatbot’s fake story about his sexual harassment is a reflection of such a bias, Turley noted that the incident shows how AI systems can generate their own forms of disinformation.

The professor went on to criticize industry leaders like Bill Gates, who has called for using AI to combat digital misinformation and political polarization.

Turley pointed to a 2021 statement by Sen. Elizabeth Warren (D-Mass.) in which she argued that people were not listening to the right people regarding COVID-19 vaccines while calling for using algorithms to push people away from alleged bad influences.

“Some of these efforts even include accurate stories as disinformation, if they undermine government narratives. The use of AI and algorithms can give censorship a false patina of science and objectivity. Even if people can prove, as in my case, that a story is false, companies can ‘blame it on the bot’ and promise only tweaks to the system,” Turley wrote.

“The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.”

In a March 29 Time magazine op-ed, veteran AI researcher Eliezer Yudkowsky predicted that, in the absence of meticulous preparation, AI will develop wants vastly different from those of humans and, once self-aware, will “not care for us” or for any other sentient life.

Unlike a human, an AI does not fear repercussions for spreading falsehoods, as Turley’s case demonstrates. Yudkowsky said that “that kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.” This is why he is calling for a complete shutdown of AI development, warning that humans could otherwise face dire consequences.

The Epoch Times has reached out to OpenAI for comment.
