Google AI Overviews under fire for misleading health advice
Google AI Overviews spark concern after risky health guidance exposed. File photo
LONDON (Web Desk): Google's AI Overviews are drawing serious concern after a Guardian investigation found misleading medical advice appearing at the top of search results.

People are being put at risk by false and misleading health information in Google’s AI summaries. The company has said its AI Overviews, which use generative AI to provide snapshots of essential information about a topic or question, are “helpful” and “reliable”.

But some summaries, appearing at the top of search results, served inaccurate health information and put people at risk of harm. In one case described by experts as “really dangerous”, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this advice is the exact opposite of what should be recommended and may increase the risk of patients dying.

In another “alarming” example, Google provided bogus information about crucial liver function tests. This could leave people with serious liver disease wrongly thinking they are healthy. Searches about women’s cancer tests also provided “completely wrong” information, which experts warned could lead people to dismiss genuine symptoms.

A Google spokesperson said that many of the health examples shared were “incomplete screenshots”, but added that from what they could assess, the summaries linked “to well-known, reputable sources and recommend seeking out expert advice”.

Sophie Randall, director of the Patient Information Forum, said the examples showed “Google’s AI Overviews can put inaccurate health information at the top of online searches, presenting a risk to people’s health”. Stephanie Parker, director of digital at Marie Curie, added, “People turn to the internet in moments of worry and crisis. If the information they receive is inaccurate or out of context, it can seriously harm their health.”

Anna Jewell from Pancreatic Cancer UK said advising patients to avoid high-fat foods was “completely incorrect”. She added, “Doing so could be really dangerous and jeopardise a person’s chances of being well enough to have treatment.” Jewell also said, “If someone followed what the search result told them then they might not take in enough calories, struggle to put on weight, and be unable to tolerate either chemotherapy or potentially life-saving surgery.”

Typing “what is the normal range for liver blood tests” also produced misleading information, with masses of numbers, little context, and no accounting for nationality, sex, ethnicity, or age. Pamela Healy of the British Liver Trust said the AI summaries were alarming. “Many people with liver disease show no symptoms until the late stages, which is why it’s so important that they get tested. But what the Google AI Overviews say is ‘normal’ can vary drastically from what is actually considered normal.” She added, “It’s dangerous because it means some people with serious liver disease may think they have a normal result then not bother to attend a follow-up healthcare meeting.”

A search for “vaginal cancer symptoms and tests” listed a Pap test as a test for vaginal cancer, which is incorrect. Athena Lamnisos from the Eve Appeal said, “It isn’t a test to detect cancer, and certainly isn’t a test to detect vaginal cancer – this is completely wrong information. Getting wrong information like this could potentially lead to someone not getting vaginal cancer symptoms checked because they had a clear result at a recent cervical screening.” She added, “Some of the results we’ve seen are really worrying and can potentially put women in danger.”

The Guardian also found Google AI Overviews gave misleading results for mental health conditions. Stephen Buckley from Mind said, “Some of the AI summaries for conditions such as psychosis and eating disorders offered very dangerous advice and were incorrect, harmful or could lead people to avoid seeking help.” He added, “They may suggest accessing information from sites that are inappropriate … and we know that when AI summarises information, it can often reflect existing biases, stereotypes or stigmatising narratives.”

People often trust search results without question, and wrong health advice can delay treatment and cost lives. Experts warn that AI-generated summaries must be verified for accuracy before they are shown to users.