Google defends AI Overviews against viral errors

Company clarifies the nature of its AI, refutes claims of inaccuracy
An undated image of Google Chrome. — Unsplash

Google has published a statement addressing viral examples of inaccurate responses from its newly launched AI search feature, AI Overviews.

Since its US-only launch a couple of weeks ago, the feature has been found producing odd responses, suggesting that users eat rocks or add glue to pizza.

The company began by explaining how AI Overviews operate, emphasising how they differ from regular chatbots and other LLM products.

"They’re not simply generating an output based on training data. While AI Overviews are powered by a customized language model, the model is integrated with our core web ranking systems and designed to carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from our index. That’s why AI Overviews don’t just provide text output, but include relevant links so people can explore further," the company said in the statement.

According to Google, the feature differs from other LLM products because it relies on top web results, curbing the tendency to "hallucinate" or make up information that is common in other chatbots.

“This means that AI Overviews generally don’t “hallucinate” or make things up in the ways that other LLM products might,” Google added.

In its defence, the company says that when a mistake is made, it usually stems from misinterpreting the retrieved results or missing a nuance in them.

One such instance was when the feature treated a satirical article from The Onion as genuine. The company admits that, in trying to produce a satisfactory response, the system often fails to recognise satire.

In light of this, Google has decided to limit the inclusion of humorous or satirical content and to build "better detection mechanisms for nonsensical queries".