Google AI summaries deliver misleading health information, raising safety concerns
01/05/2026 // Laura Harris

  • A Guardian investigation found that Google's AI Overviews, which provide quick summaries of health topics, can contain dangerously inaccurate information.
  • AI summaries gave harmful guidance, including advising pancreatic cancer patients to avoid high-fat foods, providing misleading "normal" liver test ranges and incorrectly stating that pap tests detect vaginal cancer.
  • Summaries on conditions like psychosis and eating disorders sometimes omitted context or gave harmful advice, potentially discouraging people from seeking help.
  • The company defended the AI, calling it "helpful" and "reliable," claiming most summaries are factual, comparable to traditional featured snippets and subject to continuous quality improvements.
  • Health experts and charities warn that inaccurate AI summaries appearing at the top of search results could put millions at risk, emphasizing the need for stricter monitoring.

A Guardian investigation has found that people seeking health advice on Google may be at risk of harm due to inaccurate information in the company's artificial intelligence (AI) summaries.

Google's AI Overviews, according to BrightU.AI's Enoch, are designed to give quick snapshots of essential information on a topic or question, offering users a foundational understanding of each subject.

However, multiple examples uncovered by the investigation show that the summaries can contain dangerously misleading health advice. In one instance, Google advised people with pancreatic cancer to avoid high-fat foods – a recommendation experts described as "really dangerous." Anna Jewell, director of Support, Research and Influencing at Pancreatic Cancer UK, warned that following this guidance could leave patients unable to maintain sufficient calorie intake, potentially affecting their ability to tolerate chemotherapy or life-saving surgery.

"The Google AI response suggests that people with pancreatic cancer avoid high-fat foods. If someone followed what the search result told them, they might not take in enough calories, struggle to put on weight and be unable to tolerate treatment. This could jeopardize a person's chances of recovery," Jewell said.

Other AI Overviews were equally concerning. Searches about liver function tests returned misleading "normal" ranges, ignoring critical factors such as age, sex, ethnicity and nationality. Pamela Healy, chief executive of the British Liver Trust, said, "Many people with liver disease show no symptoms until the late stages. If AI gives misleading normal ranges, some people may wrongly assume they are healthy and fail to attend follow-up healthcare appointments. This is dangerous."

The AI also provided incorrect information on women's cancer tests. Searching for "vaginal cancer symptoms and tests" suggested a pap test could detect vaginal cancer – an assertion experts described as "completely wrong." Athena Lamnisos, chief executive of the Eve Appeal cancer charity, said the errors could deter people from seeking timely medical attention.

"Getting wrong information like this could potentially lead to someone not getting symptoms checked because they had a clear result at a recent cervical screening. The fact that the AI summary changed each time we searched is also worrying – people are receiving different answers depending on when they search, and that's not good enough," Lamnisos said.

Mental health searches were also affected. Google's AI summaries for conditions such as psychosis and eating disorders sometimes contained "incorrect, harmful" advice or omitted important context.

"Some of the AI summaries offered very dangerous advice. They could lead people to avoid seeking help or direct them to inappropriate sources. AI often reflects existing biases, stereotypes or stigmatising narratives, which is a huge concern for mental health support," said Stephen Buckley, head of information at Mind.

Google denies accusations

Despite the evidence, Google has denied that its AI Overviews provide misleading or inaccurate health information.

Instead, the company described the AI-generated snapshots as "helpful" and "reliable," emphasizing that most are factual and provide useful guidance. Google said the accuracy of AI Overviews is comparable to that of other search features, such as featured snippets, which have been part of its search engine for more than a decade. The company added that it continuously improves the system to ensure users receive correct and useful information.

However, experts and charities are still calling for stricter oversight, noting that the company's automated summaries appear prominently at the top of search results, meaning millions of users could be exposed to potentially harmful guidance.

"People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health," said Stephanie Parker, the director of digital at Marie Curie, an end-of-life charity.

Watch this skit from "Catch Up" featuring a satirical portrayal of a Google spokesperson addressing criticisms surrounding Gemini.

This video is from the channel The Prisoner on Brighteon.com.

Sources include:

TheGuardian.com
OECD.ai
BrightU.ai
Brighteon.com

