When author Peter Hasson asked Google’s Gemini chatbot questions about a book he wrote in 2020, The Manipulators: Facebook, Google, Twitter, and Big Tech’s War on Conservatives, he found that the AI concocted a series of fake reviews, attributed to real people, in an apparent effort to discredit it.
The book, which examined the political biases of Google and other tech firms, was thoroughly researched; Hasson drew on leaked documents and inside sources to inform his writing. It apparently struck a nerve with Google, whose chatbot told lie after lie in an attempt to smear it.
As part of its description of the book, Google’s chatbot wrote that the work had been “criticized for lacking concrete evidence and relying on anecdotal information.”
As someone who has followed the reception of his own book very closely, Hasson could not recall such a criticism, so he asked the AI who made those comments.
It then supplied him with summaries of four negative reviews allegedly published in the New York Times, Wired, the Washington Free Beacon and the New York Times Book Review.
The blurbs were very convincing. For example, the AI wrote: “Book review by Matthew Continetti in The Washington Free Beacon: This review criticizes the book for relying on ‘anecdotal evidence’ and ‘cherry-picking examples’ to support its claims.”
While the Washington Free Beacon did indeed review the book, the reviewer was actually Charles Lehman, and his assessment was overwhelmingly positive. He called it a “thoroughly researched” and “excellent” book that “should leave any sensible reader, conservative or otherwise, outraged.”
It wasn’t just the Washington Free Beacon review that was invented; all of the reviews were completely concocted, as were the quotes Gemini attributed to their authors.
When Hasson asked Gemini to provide him with source links to the cited reviews, even though he knew they didn’t exist, the chatbot seemed to panic and suddenly claimed it couldn’t help.
It told him: "I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?"
He persisted and was met with the same generic reply every time.
Hasson’s attempts to contact Google were met with a cookie-cutter reply: “Gemini is built as a creativity and productivity tool, and it may not always be accurate or reliable.”
This is the same AI that made headlines recently for its inaccurate and woke output. For example, it refused to generate images of white families on the grounds that it would not create images specifying ethnicity or race, yet it was all too willing to create images of black and Hispanic families.
When prompted to create images of Nazis, it seemed unable to produce a historically accurate image of a white person, instead generating people of various other races in SS uniforms; its depiction of a pope showed an Indian woman in papal garb.
In response to user outrage and bad publicity, Google announced it would pause the image generation feature while it addressed some of its flaws.