Google explains why AI Overviews couldn’t understand a joke and told users to eat one rock a day – and promises it’ll get better

If you’ve been keeping up with the latest developments in generative AI, you may have seen that Google has expanded the rollout of its ‘AI Overviews’ feature in Google Search to the whole of the US.

At Google I/O 2024, held on May 14, Google confidently presented AI Overviews as the next big thing in Search, one it expected to wow users, but when the feature began rolling out the following week it received a less-than-enthusiastic response. This was mainly due to AI Overviews returning peculiar and outright wrong information, and now Google has responded by explaining what happened and why AI Overviews performed the way it did (according to Google).

The feature was intended to bring fuller, more clearly worded answers to user queries, synthesizing a pool of relevant information and distilling it into a few convenient paragraphs. This summary would then be followed by the familiar list of blue links, each with a brief description of the website it points to.

Unfortunately for Google, screenshots of AI Overviews providing strange, nonsensical, and downright wrong information started circulating on social media shortly after the rollout. Google has since pulled the feature and published a post on its ‘Keyword’ blog explaining why AI Overviews was behaving this way, while being quick to point out that many of these screenshots were faked.

What AI Overviews were intended to be

Keynote speech at Google I/O 2024

(Image credit: Future)

In the blog post, Google first explains that AI Overviews were designed to collect and present information that you would otherwise have to dig for across multiple searches, and to prominently include links crediting where the information comes from, so you can easily follow up on the summary.

According to Google, this isn’t just its large language models (LLMs) assembling convincing-sounding responses based on existing training data. AI Overviews is powered by its own custom language model that works with Google’s core web ranking systems, which carry out the searches and feed relevant, high-quality information into the summary. Accuracy is one of the cornerstones that Google prides itself on when it comes to search, the company notes, saying that it built AI Overviews to show information sourced only from the web results it deems the best.

This means that AI Overviews are generally supposed to hallucinate less than other LLM products, and if things do go wrong, it’s probably for one of the reasons Google also faces with regular search, with the company citing possible issues such as “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

What actually happened during the rollout


Google goes on to state that AI Overviews was optimized for accuracy and tested extensively before its wider rollout, but despite these seemingly robust testing efforts, Google does admit that’s not the same as having millions of people trying out the feature with a flood of novel searches. It also points out that some people were trying to provoke its search engine into producing nonsensical AI Overviews by carrying out ridiculous searches. 

I find this part of Google’s explanation a bit odd: I’d imagine that when building a feature like AI Overviews, the company would anticipate that folks would try to break it or send it off the rails somehow, and that it should therefore be designed to take silly or nonsense searches in its stride.

At any rate, Google then goes on to call out fake screenshots of some of the nonsensical and humorous AI Overviews that made their way around the web, which is fair, I think. It’s a reminder that we shouldn’t believe everything we see online, of course, although the faked screenshots looked convincing enough if you didn’t scrutinize them too closely (and all of this underscores the need to double-check AI-generated answers anyway).

Google does admit, though, that sometimes AI Overviews did produce some odd, inaccurate, or unhelpful responses. It elaborates by explaining that there are multiple reasons why these responses happened, and that this whole episode has highlighted specific areas where AI Overviews could be improved.

The tech company further observes that these questionable AI Overviews tended to appear for queries that aren’t searched for often. A Threads user, @crumbler, posted an AI Overviews screenshot that went viral after they asked Google: “how many rocks should i eat?” This returned an AI Overview that recommended eating at least one small rock per day. Google’s explanation is that, before this screenshot circulated online, this question had rarely been asked in Search (which is certainly believable enough).

A screenshot of an AI Overview recommending that humans should eat one small rock a day

(Image credit: Google/@crumbler on Threads)

Google goes on to explain that there isn’t a lot of quality source material to answer that question seriously, either, calling instances like this a “data void” or an “information gap.” Additionally, in the case of the query above, some of the only content available was satirical in nature, and it was linked in earnest as one of the only websites that addressed the question.

Other nonsensical and silly AI Overviews pulled details from sarcastic or humorous sources, as well as troll posts on discussion forums.

Google's next steps and the future of AI Overviews

When explaining what it’s doing to fix and improve AI Overviews, or any part of its Search results, Google notes that it doesn’t go through Search results pages one by one. Instead, the company tries to implement updates that affect whole sets of queries, including possible future queries. Google claims that it’s been able to identify patterns when analyzing the instances where AI Overviews got things wrong, and that it’s put in a whole set of new measures to continue to improve the feature.

You can check out the full list in Google’s post, but among the changes are better detection of nonsensical queries designed to provoke a weird AI Overview, and limits on the inclusion of satirical or humorous content.

Along with the new measures to improve AI Overviews, Google states that it’s been monitoring user feedback and external reports, and that it’s taken action on a small number of summaries that violate Google’s content policies. This happens rarely – on fewer than one in every seven million unique queries, according to Google – and it’s being addressed.

The final reason Google gives for why AI Overviews performed this way is simply the sheer scale of the billions of queries entered into Search every day. I can’t say I fault Google for that, and I would hope it ramps up the testing it does on AI Overviews even as the feature continues to be developed.

As for AI Overviews not understanding sarcasm, this sounds like a cop-out at first, but sarcasm and humor in general are nuances of human communication that I can imagine being hard to account for. Comedy is a whole art form in itself, and this is going to be a very thorny and difficult area to navigate. So I can understand that this is a major undertaking, but if Google wants to maintain a reputation for accuracy while pushing out this new feature, it’s something that will need to be dealt with.

We’ll just have to see how Google’s AI Overviews perform when they are reintroduced – and you can bet there’ll be lots of people watching keenly (and firing up yet more ridiculous searches in an effort to get that viral screenshot).


The Meta Quest Pro 2 could be a wearable LG OLED TV, and I couldn’t be more excited


  • Meta and LG confirm collaboration on “extended reality (XR) ventures”
  • This could mean a future Meta Quest Pro 2 uses an LG display
  • Announcement also hints at team-up for “content/service capabilities”

Following months of speculation and rumors – the most recent of which came just days ahead of the official announcement – Meta and LG have confirmed that they’ll be collaborating on next-gen XR hardware (XR, or ‘extended reality’, is a catchall term for VR, MR, and AR). And I couldn’t be more excited to see what their Apple Vision Pro rival looks like.

While they didn’t expressly outline what the collaboration entails, or what hardware they’ll be working on together, it seems all but guaranteed that Meta’s next VR headset – likely the Meta Quest Pro 2, but maybe the Meta Quest 4 or another future model – will use LG’s display tech for its screens. This means Meta might finally release another OLED Quest headset, which promises some superb visuals for our favorite VR software.

Unfortunately, there’s also no mention of a timescale, so we don’t know when the first LG and Meta headset will be released. But several recent rumors have suggested that the next Quest headset (probably the Pro 2) will launch in 2025, so we could see LG tech in Meta hardware next year if we’re lucky.

The Meta Quest Pro

Forget an iPad for your face, the next Meta headset could be an LG TV (Image credit: Meta)

We should always take rumors with a pinch of salt, but these same leaks teased the LG collaboration – so there’s a good chance that they’re on the money for the release date, too.

Beyond the potential for OLED Quest headsets, what’s particularly interesting is a line in the press release mentioning the companies’ desire to bring together “Meta’s platform with [LG’s] content/service capabilities.” To me, that hints at more than simply working together on hardware; it suggests bringing the LG TV software experience to your Meta headset as well.

More than just an OLED screen

Exactly what this means remains to be seen, but it could result in a whole host of TV apps reimagined for VR. For Meta, this could crucially mean finally getting VR apps for the best streaming services, including Disney Plus, Paramount Plus, and Apple TV Plus – as well as working apps for Netflix, Prime Video, and other services whose existing Quest software is practically non-functional.

These kinds of streaming apps are the one massive software area in which Meta has no answer to the Apple Vision Pro.

I’ve previously asked Meta if it had plans to bring more streaming services to Quest and a representative told me it had “no additional information to share at this time.” I hoped this meant it had some kind of reveal on the way in the near future, and it appears this LG announcement has answered my calls.

The Disney app running on the Apple Vision Pro

Disney Plus is a sight to behold on the Vision Pro (Image credit: Apple)

That said, while the press release certainly teases some interesting collaborations, until we actually see something in action there’s no telling what form Meta and LG’s partnership will take, because the announcement is (no doubt intentionally) a little vague.

There’s also a chance the LG-powered TV apps won’t offer the same 3D movie selection or immersive environments found on Vision Pro. Depending on how the apps are implemented, 3D video might not be possible – or perhaps Apple has an exclusive deal for content with these apps on its Vision Pro platform.

Regardless, I’m pretty excited by the potential this announcement brings, as it appears to answer two of my four biggest Meta Quest Pro 2 feature requests. Here’s hoping the other two features follow suit. If they do (and the device isn’t astronomically pricey), the Meta Quest Pro 2 could be my new favorite VR headset by a landslide.
