Google has removed AI summaries from its search results after a Guardian investigation revealed that they had provided false and misleading information to users, potentially putting people's lives at risk.
The company's AI Overviews use generative AI to provide snapshots of essential information about a topic or question. While Google claims that these summaries are "helpful" and "reliable," experts have described them as "dangerous" and "alarming." In one case, the summaries provided bogus information about crucial liver function tests, leaving people with serious liver disease wrongly thinking they were healthy.
The investigation found that the AI Overviews often failed to provide context or account for factors such as a patient's nationality, sex, ethnicity, or age. As a result, users might misread their test results as normal, leading them to skip follow-up appointments and potentially allowing their condition to worsen.
After the Guardian's findings, Google removed AI Overviews for certain search terms related to liver health. However, experts say that this is only a small step towards addressing the bigger issue of inaccurate information being presented in health-related search results.
"We're pleased to see the removal of the Google AI Overviews in these instances," said Vanessa Hebditch, director of communications and policy at the British Liver Trust. "However, if the question is asked in a different way, a potentially misleading AI Overview may still be given, and we remain concerned that other AI-produced health information can be inaccurate and confusing."
Google has defended its AI Overviews, saying they link to well-known and reputable sources and inform users when it's essential to seek expert advice. However, experts say that the company needs to do more to ensure that its AI tool isn't dispensing dangerous health misinformation.
The investigation highlights the need for companies like Google to prioritize accuracy and transparency in their search results, particularly when it comes to sensitive topics like health. As one expert noted, "It's not just about nit-picking a single search result; it's about tackling the bigger issue of AI Overviews for health."