Google AI Tool’s Misleading Responses Raise Alarms Among Experts


“The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web,” Google said in a written statement. “Many of the examples we’ve seen have been uncommon queries, and we’ve also seen examples that were doctored or that we couldn’t reproduce.”

It’s hard to reproduce errors made by AI language models — in part because they’re inherently random. They work by predicting what words would best answer the questions asked of them based on the data they’ve been trained on. They’re prone to making things up — a widely studied problem known as hallucination.

The AP tested Google’s AI feature with several questions and shared some of its responses with subject matter experts. Asked what to do about a snake bite, Google gave an answer that was “impressively thorough,” said Robert Espinoza, a biology professor at California State University, Northridge, who is also president of the American Society of Ichthyologists and Herpetologists.

But when people go to Google with an emergency question, the chance that the tech company’s answer includes a hard-to-notice error becomes a real problem.

“The more you are stressed or hurried or in a rush, the more likely you are to just take that first answer that comes out,” said Emily M. Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “And in some cases, those can be life-critical situations.”

That’s not Bender’s only concern, and she has warned Google about such risks for several years. When Google researchers in 2021 published a paper called “Rethinking search” that proposed using AI language models as “domain experts” that could answer questions authoritatively, much like they are doing now, Bender and colleague Chirag Shah responded with a paper laying out why that was a bad idea.

They warned that such AI systems could perpetuate the racism and sexism found in the huge troves of written data they’ve been trained on.

“The problem with that kind of misinformation is that we’re swimming in it,” Bender said. “And so people are likely to get their biases confirmed. And it’s harder to spot misinformation when it’s confirming your biases.”
