If you thought Google’s AI Overviews were past the whole ‘eat a rock a day, put glue on your pizza’ thing, think again. The search giant has proven once more that its AI-powered summaries cannot be trusted with potentially life-changing information, after The Guardian did some digging and uncovered answers described as “dangerous and alarming.” The saga ultimately ended with Google turning off AI Overviews entirely for certain medical queries.
“In one case that experts described as ‘really dangerous’, Google wrongly advised people with pancreatic cancer to avoid high-fat foods. Experts said this was the exact opposite of what should be recommended, and may increase the risk of patients dying from the disease,” The Guardian wrote.
Surprise, surprise
It didn’t stop there. The Guardian found that Google’s AI summaries also dished out incorrect information about liver function tests and women’s cancer tests. The “completely wrong” information on offer could easily be misconstrued, leading people to dismiss symptoms and skip treatment. By the time a user suspects the information an AI Overview served up was spurious, it may already be too late.
It may not be as brazenly incorrect as Google suggesting its users stick the cheese to their pizza with glue, but one could argue this implementation is even worse. While most people can shrug off an obviously silly AI suggestion, those unfamiliar with the medical field (which is pretty much everyone) are far more easily misled.
The Guardian later noted that Google has since disabled AI Overviews entirely for certain queries, such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”. Unsurprisingly, those were the exact phrases The Guardian had shown to return incorrect information. We should mention that phrasing these and other queries differently may still generate AI Overviews on the results page.
It will likely prove difficult for Google to root out other mistakes like these, given that it wasn’t aware of them in the first place. That goes for a lot of the search company’s AI Overviews, we guess. “In cases where AI Overviews miss some context, we work to make broad improvements, and we also take action under our policies where appropriate,” a Google spokesperson told The Guardian.