Jeff Hancock, a professor of communications at Stanford, admitted in a court declaration that he overlooked "hallucinated citations" that were generated for him by the artificial intelligence (AI) program known as ChatGPT.
In his declaration, Hancock drew on scholarly literature to assess the risks of deepfake technology and the spread of misinformation, using GPT-4o to assist with his research. Ironically, he failed to fact-check the citations generated by the AI program – citations that turned out to reference phony journal articles that do not even exist.
"I did not intend to mislead the Court or counsel," Hancock wrote. "I express my sincere regret for any confusion this may have caused. That said, I stand firmly behind all the substantive points in the declaration."
(Related: Earlier this year, fake news media "doctor" Allison Neitzel apologized publicly for lying to the public and defaming actual doctors by spreading misinformation.)
Hancock's original declaration was filed on November 1 as part of a court case in Minnesota involving that state's 2023 ban on the use of deepfakes to influence an election.
Plaintiffs in the case argue that the ban is unconstitutional because it wrongly limits free speech. Hancock, on behalf of defendant Minnesota Attorney General Keith Ellison, submitted an expert declaration stating that deepfakes amplify misinformation while chipping away at the perceived legitimacy of "democratic institutions."
Attorneys for the plaintiffs then accused Hancock of using AI to craft the court declaration itself, pointing at two citations to articles that do not even exist.
Hancock filed another declaration detailing the process he went through to research and draft the first one, admitting that he used GPT-4o and Google Scholar to create the faulty citation list.
In addition to generating two "hallucinated citations," as the AI industry and its proponents are calling them, ChatGPT also produced a notable error in the list of authors for an existing study.
"I use tools like GPT-4o to enhance the quality and efficiency of my workflow, including search, analysis, formatting and drafting," Hancock wrote.
The error arose after Hancock asked GPT-4o to produce a short paragraph based on one of the bullet points he had personally written. A "[cite]" placeholder he included as a reminder to add the correct citations was misread by the AI model, which inserted fake citations at the end of Hancock's court declaration instead of real ones.
Because Hancock was paid $600 per hour to create the declaration, the government may seek to recover that money, especially since Hancock stated under penalty of perjury that everything included in the document was "true and correct."
"AI definitely is being used by the laziest among the 'academics' as a pass to doing work," someone wrote on X / Twitter. "It's only a matter of time with each of these that they are caught."
"They always do what they accuse others of doing ... 100% of the time," wrote another.
"What if this undermines trust in academia?!" wrote another, seemingly tongue in cheek since trust in academia is already at dismally low levels after COVID.
"Ivy League frauds," suggested another about the potential deeper layers to all this. "They will probably promote him to dean."
"Legacy media certainly should be blamed a lot, but the worst butchery of truth & integrity is done by these 'misinfo expert' academics. They are just fraud activists."
Misinformation is the new boogeyman. Learn more at Deception.news.