Google explains why AI Overviews couldn’t understand a joke and told users to eat one rock a day – and promises it’ll get better

If you’ve been keeping up with the latest developments in the area of generative AI, you may have seen that Google has stepped up the rollout of its ‘AI Overviews’ section in Google Search to all of the US.

At Google I/O 2024, held on May 14, Google confidently presented AI Overviews as the next big thing in Search, a feature it expected to wow users. When it finally began rolling out the following week, however, the response was less than enthusiastic, mainly because AI Overviews kept returning peculiar and outright wrong information. Google has now responded by explaining what happened and why AI Overviews performed the way it did (according to Google, at least).

The feature was intended to bring more complex and better-verbalized answers to user queries, synthesizing a pool of relevant information and distilling it into a few convenient paragraphs. The summary is then followed by the familiar list of blue links, with the brief website descriptions we're used to.

Unfortunately for Google, screenshots of AI Overviews providing strange, nonsensical, and downright wrong information started circulating on social media shortly after the rollout. Google has since pulled the feature and published a post on its ‘Keyword’ blog explaining why AI Overviews behaved this way – while being quick to point out that many of the circulating screenshots were faked.

What AI Overviews were intended to be

Keynote speech at Google I/O 2024

(Image credit: Future)

In the blog post, Google first explains that AI Overviews were designed to collect and present information that would otherwise require multiple searches to dig up, and to prominently include links crediting where the information comes from, so you can easily follow up on the summary.

According to Google, this isn’t just its large language models (LLMs) assembling convincing-sounding responses based on existing training data. AI Overviews is powered by its own custom language model that integrates Google’s core web ranking systems, which are used to carry out searches and integrate relevant and high-quality information into the summary. Accuracy is one of the cornerstones that Google prides itself on when it comes to search, the company notes, saying that it built AI Overviews to show information that’s sourced only from the web results it deems the best. 
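The general idea can be sketched in a few lines of Python – to be clear, this is a hypothetical toy, not Google's actual system: a retrieval-grounded summarizer only repeats text drawn from top-ranked results, crediting each claim to its source.

```python
# Toy illustration of retrieval-grounded summarization (hypothetical,
# not Google's implementation): the answer may only quote sentences
# that appear in top-ranked results, and each is credited to its source.

def grounded_overview(query, ranked_results, top_k=3):
    """Build a summary using only sentences from the top-k results."""
    summary = []
    for result in ranked_results[:top_k]:
        # Keep sentences that mention a query term, so the summary
        # stays on topic rather than free-associating.
        relevant = [s for s in result["sentences"]
                    if any(w in s.lower() for w in query.lower().split())]
        if relevant:
            summary.append((relevant[0], result["url"]))
    return summary

results = [
    {"url": "https://example.org/geology",
     "sentences": ["Rocks are not part of a human diet."]},
    {"url": "https://example.org/satire",
     "sentences": ["Eat one small rock per day, experts joke."]},
]
for sentence, source in grounded_overview("should humans eat rocks", results):
    print(f"{sentence} (source: {source})")
```

Note that even this toy inherits the problem the article describes below: if one of the few pages matching a fringe query is satire, the satire gets quoted in earnest.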

This means that AI Overviews are generally supposed to hallucinate less than other LLM products, and if things do go wrong, it's usually for a reason Google also faces with regular search – the company cites “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available” as the likely culprits.

What actually happened during the rollout

Windows 10 dual screen

(Image credit: Shutterstock / Dotstock)

Google goes on to state that AI Overviews was optimized for accuracy and tested extensively before its wider rollout. Despite these seemingly robust testing efforts, Google admits that's not the same as having millions of people trying out the feature with a flood of novel searches. It also points out that some people were deliberately trying to provoke the search engine into producing nonsensical AI Overviews with ridiculous queries.

I find this part of Google’s explanation a bit odd. I’d imagine that when building a feature like AI Overviews, the company would appreciate that folks are likely to try to break it, or send it off the rails somehow, and that it should therefore be designed to take silly or nonsensical searches in its stride.

At any rate, Google then goes on to call out fake screenshots of some of the nonsensical and humorous AI Overviews that made their way around the web – which is fair, I think. It reminds us that we shouldn’t believe everything we see online, although the faked screenshots looked pretty convincing if you didn’t scrutinize them too closely (and all of this underscores the need to double-check AI-generated answers anyway).

Google does admit, though, that sometimes AI Overviews did produce some odd, inaccurate, or unhelpful responses. It elaborates by explaining that there are multiple reasons why these happened, and that this whole episode has highlighted specific areas where AI Overviews could be improved.

The tech company further observes that these questionable AI Overviews tended to appear for queries that don’t come up often. A Threads user, @crumbler, posted an AI Overviews screenshot that went viral after they asked Google: “how many rocks should i eat?” This returned an AI Overview recommending eating at least one small rock per day. Google’s explanation is that before this screenshot circulated online, the question had rarely been asked in Search (which is believable enough).

A screenshot of an AI Overview recommending that humans should eat one small rock a day

(Image credit: Google/@crumbler on Threads)

Google continues that there isn’t much quality source material to answer that question seriously, either – it calls such instances a “data void” or an “information gap.” In the case of the query above, some of the only content available was satirical in nature, and it was linked in earnest as one of the only websites that addressed the question.

Other nonsensical and silly AI Overviews pulled details from sarcastic or humorous content, and from troll posts on discussion forums.

Google's next steps and the future of AI Overviews

When explaining what it’s doing to fix and improve AI Overviews, or any part of its Search results, Google notes that it doesn’t go through Search results pages one by one. Instead, the company tries to implement updates that affect whole sets of queries, including possible future queries. Google claims that it’s been able to identify patterns when analyzing the instances where AI Overviews got things wrong, and that it’s put in a whole set of new measures to continue to improve the feature.

You can check out the full list in Google’s post, but better detection capabilities for nonsensical queries trying to provoke a weird AI Overview are being implemented, and the search giant is looking to limit the inclusion of satirical or humorous content.

Along with the new measures to improve AI Overviews, Google states that it’s been monitoring user feedback and external reports, and that it’s taken action on a small number of summaries that violated its content policies. This happens rarely – on fewer than one in every seven million unique queries, according to Google – and it’s being addressed.

The final reason Google gives for why AI Overviews performed this way is just the sheer scale of the billions of queries that are performed in Search every day. I can’t say I fault Google for that, and I would hope it ramps up the testing it does on AI Overviews even as the feature continues to be developed.

As for AI Overviews not understanding sarcasm, this sounds like a cop-out at first, but sarcasm and humor in general are nuances of human communication that I can imagine are hard to account for. Comedy is an art form in itself, and this is going to be a very thorny and difficult area to navigate. So I can understand that it’s a major undertaking, but if Google wants to maintain a reputation for accuracy while pushing out this new feature, it’s something that’ll need to be dealt with.

We’ll just have to see how Google’s AI Overviews perform when they are reintroduced – and you can bet there’ll be lots of people watching keenly (and firing up yet more ridiculous searches in an effort to get that viral screenshot).

TechRadar – All the latest technology news

Read More

Tim Cook explains why Apple’s generative AI could be the best on smartphones – and he might have a point

It’s an open secret that Apple is going to unveil a whole host of new artificial intelligence (AI) software features in the coming weeks, with major overhauls planned for iOS 18, macOS 15, and more. But it’s not just new features that Apple is hoping to hype up – it’s the way in which those AI tools are put to use.

Tim Cook has just let slip that Apple’s generative AI will have some major “advantages” over its rivals. While the Apple CEO didn’t explain exactly what Apple’s generative AI will entail (we can expect to hear about that at WWDC in June), what he did say makes a whole lot of sense.

Speaking on Apple’s latest earnings call yesterday, Cook said: “We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create.”

Cook also said Apple is making “significant investments” in generative AI, and that he has “some very exciting things” to unveil in the near future. “We continue to feel very bullish about our opportunity in generative AI,” he added.

Why Tim Cook might be right

Siri

(Image credit: Unsplash [Omid Armin])

There are plenty of reasons why Apple’s AI implementation could be an improvement over what's come before it, not least of which is Apple’s strong track record when it comes to privacy. The company often prefers to encrypt data and run tasks on your device, rather than sending anything to the cloud, which helps ensure that it can’t be accessed by nefarious third parties – and when it comes to AI, it looks like this approach might play out again.

Bloomberg's Mark Gurman, for example, has reported that Apple’s upcoming AI features will work entirely on your device, thereby continuing Apple’s commitment to privacy, amid concerns that the rapid development of AI is putting security and privacy at risk. If successful, it could also be a more ethical approach to AI than that employed by Apple’s rivals.

In addition, the fact that Apple creates both the hardware and software in its products allows them to be seamlessly integrated in ways most of its competitors can’t match. It also means devices can be designed with specific use cases in mind that rely on hardware and software working together, rather than Apple having to rely on outside manufacturers to play ball. When it comes to AI, that could result in all kinds of benefits, from performance improvements to new app features.

We’ll find out for sure in the coming weeks. Apple is hosting an iPad event on May 7, which reports have suggested Apple might use to hint at upcoming AI capabilities. Beyond that, the company’s Worldwide Developers Conference (WWDC) lands on June 10, where Apple is expected to devote significant energy to its AI efforts. Watch this space.

Google explains how Gemini’s AI image generation went wrong, and how it’ll fix it

A few weeks ago Google launched a new image generation tool for Gemini (the suite of AI tools formerly known as Bard and Duet) which allowed users to generate all sorts of images from simple text prompts. Unfortunately, Google’s AI tool repeatedly missed the mark and generated inaccurate and even offensive images that led a lot of us to wonder – how did the bot get things so wrong? Well, the company has finally released a statement explaining what went wrong, and how it plans to fix Gemini. 

The official blog post addressing the issue states that when designing the text-to-image feature for Gemini, the team behind Gemini wanted to “ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people.” The post further explains that users probably don’t want to keep seeing people of just one ethnicity or other prominent characteristic. 

So, to offer a basic explanation of what’s been going on: Gemini was producing images of people of color when prompted to generate images of white historical figures, giving users ‘diverse Nazis’, or simply ignoring the part of a prompt where you’d specified exactly what you were looking for. While Gemini’s image capabilities are currently on hold, when the feature was accessible you could specify exactly who you wanted to generate – Google uses the example “a white veterinarian with a dog” – and Gemini would seemingly ignore the first half of that prompt, generating veterinarians of every race except the one you asked for.

Google went on to explain that this was the outcome of two crucial failings. Firstly, Gemini was tuned to show a range of different people without accounting for cases that clearly should not show a range. Alongside that, in trying to make a more conscious, less biased generative AI, Google admits the “model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.”

So, what's next?

At the time of writing, the ability to generate images of people on Gemini has been paused while the Gemini team works to fix the inaccuracies and carry out further testing. The blog post notes that AI ‘hallucinations’ are nothing new when it comes to complex deep learning models – even Bard and ChatGPT had some questionable tantrums as the creators of those bots worked out the kinks. 

The post ends with a promise from Google to keep working on Gemini’s AI-powered people generation until everything is sorted, with the note that while the team can’t promise it won’t ever generate “embarrassing, inaccurate or offensive results”, action is being taken to make sure it happens as little as possible. 

All in all, this whole episode puts into perspective that AI is only as smart as we make it. Our editor-in-chief Lance Ulanoff succinctly noted that “When an AI doesn't know history, you can't blame the AI.” With how quickly artificial intelligence has swooped in and crammed itself into various facets of our daily lives – whether we want it or not – it’s easy to forget that the public proliferation of AI started just 18 months ago. As impressive as the tools currently available to us are, we’re ultimately still in the early days of artificial intelligence. 

We can’t rain on Google Gemini’s parade just because its mistakes were more visually striking than, say, ChatGPT’s recent gibberish-filled meltdown. Google’s temporary pause and reworking should ultimately lead to a better product, and sooner or later we’ll see the tool as it was meant to be.

Nvidia explains how its ACE software will bring ChatGPT-like AI to non-player characters in games

Earlier this year at Computex 2023, Nvidia revealed a new technology during its keynote presentation: Nvidia ACE, a ‘custom AI model foundry’ that promised to inject chatbot-esque intelligence into non-player characters in games.

Now, Nvidia has more to say about ACE: namely, NVIDIA NeMo SteerLM, a new technique that will make it easier than ever before for game developers to make characters that act and sound more realistic and organic.

We’ve heard about NeMo before, back when Nvidia revealed its ‘NeMo Guardrails’ software for making sure that large language model (LLM) chatbots such as the ever-present ChatGPT are more “accurate, appropriate, on topic and secure”. NeMo SteerLM acts in a similar but more creative way, allowing game devs to ‘steer’ AI behavior in certain directions with simple sliders; for example, making a character more humorous, or more aggressive and rude.
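As a rough illustration of the ‘slider’ idea – a hypothetical sketch, since the real NeMo SteerLM conditions the model on attribute-labeled training data rather than editing prompt text – trait values could be turned into a conditioning preamble for a character:

```python
# Hypothetical sketch of attribute "sliders" steering a character's
# dialogue style -- not the actual NeMo SteerLM API. Slider values
# become part of the conditioning text handed to the language model.

def build_character_prompt(name, attributes):
    """Turn slider values (0-9) into a conditioning preamble."""
    levels = ", ".join(f"{trait}: {value}/9"
                       for trait, value in sorted(attributes.items()))
    return (f"You are {name}, a non-player character. "
            f"Respond in character with these trait levels: {levels}.")

jin = build_character_prompt("Jin the ramen shop owner",
                             {"humor": 7, "aggression": 1, "helpfulness": 8})
print(jin)
```

A game dev could then adjust a single slider value to nudge a character from deadpan to comic without retraining anything in this toy setup.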

I was a bit critical of NeMo Guardrails back when it was originally unveiled, since it raises the question of exactly who programs acceptable behaviors into AI models. In publicly accessible real-world chatbot tools, programmer bias could lead to AI-generated responses that offend some while appearing innocuous to others. But for fictional characters, I’m willing to believe that NeMo has huge potential. Imagine a gameworld where every character can truly react dynamically and organically to the player’s words and actions – the possibilities are endless!

The problems with LLMs in games

Of course, it’s not quite as simple as that. While SteerLM does promise to make the process of implementing AI-powered NPCs a lot more straightforward, there are still issues surrounding the use of LLMs in games in general. Early access title Vaudeville shows that AI-driven narrative games have a long way to go, and that’s not even the whole picture.

LLM chatbots such as ChatGPT and Bing AI have proven in the past that they’re not infallible when it comes to remaining on-topic and appropriate. Indeed, when I embarked on a quest to break ChatGPT, I was able to make it say things my editor sadly informed me were not fit for publication. While tools such as Nvidia’s Guardrails can help, they’re not perfect – and as AI models continue to evolve and advance, it may become harder than ever to keep them playing nice.

Even beyond the potential dangers of introducing actual AI models into games – let alone ones with SteerLM’s ‘toxicity’ slider, which on paper sounds like a lawsuit waiting to happen – a major stumbling block to implementing tools like this could actually be hardware-related.

Screenshot of 'Jin the ramen shop owner', an AI-generated non-player character.

Nvidia’s Computex demo of ‘Jin the ramen shop owner’ was technologically impressive but raises a lot of questions about AI in games. (Image credit: Nvidia)

If a game uses local hardware acceleration to power its SteerLM-enhanced NPCs, performance will depend on how capable your PC is at running AI workloads. This introduces an entirely new headache for both game devs and gamers: inconsistency in game quality that depends not on anything the developers can control, but on the hardware used by the player.

According to the Steam Hardware Survey, the majority of PC gamers are still using RTX 2000 or older GPUs. Hell, the current top spot is occupied by the budget GTX 1650, a graphics card that lacks the Tensor cores used by RTX GPUs to carry out high-end machine-learning processes. The 1650 isn’t incapable of running AI-related tasks, but it’s never going to keep up with the likes of the mighty RTX 4090.

I’m picturing a horrible future for PC gaming, where your graphics card determines not just the visual fidelity of the games you play, but the quality of the game itself. For those lucky enough to own, say, an RTX 5000 GPU, incredibly lifelike NPC dialogue and behavior could be at your fingertips. Smarter enemies, more helpful companions, dynamic and compelling villains. For the rest of us, get used to dumb and dumber character AI as game devs begin to rely more heavily on LLM-managed NPCs.

Perhaps this will never happen. I certainly hope so, anyway. There’s also the possibility of tools like SteerLM being implemented in a way that doesn’t require local hardware acceleration; that would be great! Gamers should never have to shell out for the very best graphics cards just to get the full experience from a game – but I’ll be honest, my trust in the industry has been sufficiently battered over the last few years that I’m braced for the worst.

Apple explains what ‘Clean Energy Charging’ is for iOS 16.1 – but it’s US only for now

iOS 16.1 is now available for iPhone 8 and newer handsets, and it comes with an interesting carbon-saving feature that helps bolster Apple's eco-friendly credentials – and the company has now explained how it works.

In a support document, Apple states that when the feature is enabled, your iPhone gets a forecast of the carbon emissions of your local energy grid, and iOS 16.1 will charge your device during the times when cleaner energy is being produced.
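As a rough sketch of how such scheduling can work – an illustrative example, not Apple's implementation – a device with an hourly carbon-intensity forecast can simply defer charging to the cleanest hours before it needs to be full:

```python
# Hypothetical sketch of carbon-aware charge scheduling -- not Apple's
# implementation. Pick the cleanest hours from a grid-intensity
# forecast to cover the charging time needed before a deadline.

def pick_charging_hours(forecast, hours_needed):
    """forecast: list of (hour, grams CO2 per kWh) until the deadline."""
    cleanest_first = sorted(forecast, key=lambda hc: hc[1])
    chosen = cleanest_first[:hours_needed]
    return sorted(hour for hour, _ in chosen)

# Overnight forecast: solar is gone, wind picks up after midnight.
overnight = [(22, 450), (23, 430), (0, 300), (1, 250), (2, 240),
             (3, 260), (4, 310), (5, 400)]
print(pick_charging_hours(overnight, hours_needed=3))  # → [1, 2, 3]
```

The real feature presumably also has to weigh the battery-health downsides of delayed charging, but the core trade-off is this simple: shift the same kilowatt-hours to lower-carbon hours.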

It's an interesting feature, and it makes us wonder how this could expand to Apple's other devices.

A reduced carbon footprint for your MacBook Pro?

MacBook Pro 14-inch

(Image credit: TechRadar)

iPhones are one of the most repeatedly charged devices that many of us rely on every day, but most of us don't think about where the electricity we use to charge our iPhones comes from.

At the moment, this feature is only available to people in the US, though we hope it gets a global rollout soon. If you're in the US and you don't see Clean Energy Charging in your battery settings, you need to have Location Services enabled, alongside System Customization and Significant Locations. These can all be found in Settings > Privacy & Security > Location Services > System Services.

It's too early to tell whether the Clean Energy Charging feature will make a big difference to carbon emissions, but if it does, could we see it come to other Apple products, such as Macs and MacBooks? With rumors that new M2 MacBook Pros could arrive soon, it would be perfect timing for the feature to pop up in a future macOS Ventura update.

Apple recently published a press release calling on its supply chain to fully decarbonize by 2030 and use fully renewable sources, so it's clear that the company is getting serious about minimizing the environmental impact of its products.

We're expecting the company to go harder in its renewable-energy efforts in the near future, further showing the industry how it can thrive in a clean-energy world while we enjoy sending memes to friends over iMessage.

“It’s important to give people choice”: Instagram explains why it brought back chronological feeds

Instagram has finally confirmed that the option to use a chronological feed is rolling out to all users on iOS and Android from today, March 23.

The rolling feed of images and video changed in 2016 to one ranked by algorithms. Instagram thought users would prefer to be shown what they might like, rather than the latest posts, and removed the ability to switch between modes.

However, users have been clamoring to scroll through a feed from newest to oldest, and Instagram has finally relented. Eventually, you will be given two options for your feed – Following and Favorites – both of which show posts chronologically.

TechRadar spoke to the company to find out why this change has occurred now, and whether this applies to Instagram's other features.

A logical choice, at last

This is an update that won't require a trip to the App Store or Google Play Store – it should appear in your feed soon.

It's a welcome change, and many had been wishing for the company to revert to a chronological feed ever since it was changed in 2016. So much so that Instagram commented on the matter at the end of 2021 through a series of tweets.

In the meantime, we asked an Instagram spokesperson as to why it decided to make the change. “For some time now, we’ve been working on different ways to give people more control over their experience. This is one of the many things we’re doing to give people more choice,” the spokesperson explains. “We moved away from a full chronological feed because we learned that many people were missing posts. That said, we think it’s important to give people choice – so we’re providing them with more options in Feed to tailor their experience.”

There is a small caveat to the return of the chronological feed; you can't currently set it as the default option, compared to what you can do with Twitter's two feeds. We asked if this was something that the company would consider in the future. “We’re giving new options within your Feed to give people more control and choice,” Instagram's spokesperson clarifies. “The Home Feed will remain a mix of content that you see today, including ranked content from people you follow, recommended content you may like, and more.”

Instagram Desktop creation on the web

(Image credit: Instagram)

Six years is a long time in technology, especially when it comes to social media. Since 2016, we've seen Instagram Stories and Reels arrive, alongside the ability to access the platform on the web. We asked whether the chronological feed would also apply there, and received some bad news, with Instagram again confirming just two platforms: “This feature is currently only available on iOS and Android.”

Finally, with Reels attempting to take on TikTok with its rolling video format, we wondered whether it would also reap the benefit of an organized feed. “Currently, Favourites only applies to posts that appear in Feed.”

For now at least, the first steps toward a chronological feed have arrived. And while you can't make it the default view, or apply it to your Reels or hashtag feeds, it's a start.

But with more users accessing the platform through iPads and web browsers on their Windows PCs, it's surely a matter of when, not if, the chronological feed appears there too.
