Google Lens just got a powerful AI upgrade – here’s how to use it

We've just seen the Samsung Galaxy S24 series unveiled with plenty of AI features packed inside, but Google isn't slowing down when it comes to upgrading its own AI tools – and Google Lens is the latest to get a new feature.

The new feature is actually an update to the existing multisearch feature in Google Lens, which lets you tweak searches you run using an image: as Google explains, those queries can now be more wide-ranging and detailed.

For example, Google Lens already lets you take a photo of a pair of red shoes, and append the word “blue” to the search so that the results turn up the same style of shoes, only in a blue color – that's the way that multisearch works right now.

The new and improved multisearch lets you add more complicated modifiers to an image search. So, in Google's own example, you might search with a photo of a board game and ask “what is this game and how is it played?” at the same time. You'd get instructions for playing it from Google, rather than just matches to the image.

All in on AI

Two phones on an orange background showing Google Lens (Image credit: Google)

As you would expect, Google says this upgrade is “AI-powered”, in the sense that image recognition technology is being applied to the photo you're using to search with. There's also some AI magic applied when it comes to parsing your text prompt and correctly summarizing information found on the web.

Google says the multisearch improvements are rolling out to all Google Lens users in the US this week: you can find it by opening up the Google app for Android or iOS, and then tapping the camera icon to the right of the main search box (above).

If you're outside the US, you can try out the upgraded functionality, but only if you're signed up for the Search Generative Experience (SGE) trial that Google is running – that's where you get AI answers to your searches rather than the familiar blue links.

Also just announced by Samsung and Google is a new Circle to Search feature, which means you can just circle (or scribble on) anything on screen to run a search for it on Google, making it even easier to look up information visually on the web.


Google Bard just got a super-useful Google Lens boost – here’s how to use it

Google Bard has been getting update after update of late, with the newest being the incorporation of Google Lens – which allows users to upload images alongside prompts to give Bard additional context.

Google seems to be making quite a point of expanding Bard’s capabilities and giving the chatbot a serious push into the artificial intelligence arena, either by integrating it into other Google products and services or simply improving the standalone chatbot itself.

This latest integration brings Google Lens into the picture, allowing you to upload images to Bard so it can identify objects and scenes, provide image descriptions, and search the web for pictures of what you might be looking for.

Image 1 of 2: Screenshot of Bard (Image credit: Future)

Image 2 of 2: Asking Google Bard to show me a kitten (Image credit: Future)

For example, I asked Bard to show me a photo of a kitten using a scratching post, and it pulled up a photo (accurately cited!) of exactly what I asked for, along with a little extra information on why and how cats use scratching posts. I also showed Bard a photo from my phone gallery, and it accurately described the scene and offered some interesting tidbits about rainbows.

Depending on what you ask Bard to do with the image you provide, it can give a variety of helpful responses. Since the AI-powered chatbot is mostly a conversational tool, adding as much context as you can will consistently get you the best results, and you can refine its responses with additional prompts as needed.

If you want to give Bard's new capabilities a try, just head over to the chatbot, click the little icon on the left side of the text box where you would normally type out your prompt, and add any photo you desire to your conversation. 

Alongside the image update, you can now pin conversation threads, have Bard read responses out loud in over 40 languages, and use easier sharing methods. You can check out the Bard update page for a more detailed explanation of all the new additions.


Google Lens and Bard are an AI tag team that ChatGPT should fear

Google Lens has long been a powerful party trick for anyone who needs to identify a flower or translate their restaurant menu, but it's about to jump to the next level with some Bard integration that's rolling out “in the coming weeks”.

Google teased its tag-team pairing of Lens and Bard at Google IO 2023, but it's now given us an update on how the combo will work and when it's coming. In a new blog post, Google says that within weeks you'll be able to “include images in your Bard prompts and Lens will work behind the scenes to help Bard make sense of what’s being shown”.

The example that Google has shared is a shopping-based one. If you have a photo of a new pair of shoes that you've been eyeing up for a vacation, you can ask Bard what they're called and then, unlike with standard Lens, start grilling it for ideas on how to style them.

Naturally, the Lens-Bard combo will be able to do more than just offer shopping advice, with huge potential for travel advice, education, and more. For example, imagine being able to ask a Lens-powered Bard to not only name a holiday landmark but build you a good day trip itinerary around it.

This isn't the end of Google Lens' new tricks, either. It's also tentatively jumping into the health space with a new feature that helps you identify any skin conditions that have been nagging you (below). To use the new feature, Google says you can “just take a picture or upload a photo through Lens, and you’ll find visual matches to inform your search”. 

It can apparently also help identify other nagging issues like “a bump on your lip, a line on your nails, or hair loss on your head”. Naturally, these won't be proper diagnoses, but they could be the start of a conversation with your doctor.

If you aren't familiar with Google Lens, it's pretty easy to find on Android – it'll either be built into your camera app, or you can download the standalone Lens app from the Google Play Store. On iPhone, you'll find Lens within the official Google app instead.

Next-gen Lens

A phone screen on an orange background showing a Google Lens search for a skin condition (Image credit: Google)

The budding Google Lens and Bard partnership could be a match made in search heaven, given that Lens is the most powerful visual search tool around and Bard is improving by the week. And that combo could be a powerful alternative to ChatGPT.

ChatGPT itself has basic image recognition powers, and Microsoft recently brought AI-powered image recognition to its Bing search engine. But neither integration looks quite as powerful as the incoming Lens-Bard pairing, at least based on what we've seen from Google's demos.

Unfortunately, Google's extreme tentativeness around Bard (which is still labeled an 'experiment') means we might not see its full potential for a while. For example, the huge potential power of this Lens and Bard combination will be limited by the fact that there's still no Google Bard mobile app.

Google could change its stance in the future, but right now we're limited to using Bard in our web browsers – and that's far less convenient for visual search than scanning the world with a smartphone and its built-in camera.

So while the integration of powerful Google apps like Lens with Bard has massive potential for how we search the world for info, ChatGPT will rest a little safer in the knowledge that Google is taking a glacial approach to unleashing its full AI-powered potential.


Multisearch could make Google Lens your search sensei

Google searches are about to get even more precise with the introduction of multisearch, a combination of text and image searching with Google Lens. 

After making an image search via Lens, you’ll now be able to ask additional questions or add parameters to your search to narrow the results down. Google’s use cases for the feature include shopping for clothes with a particular pattern in different colors or pointing your camera at a bike wheel and then typing “how to fix” to see guides and videos on bike repairs. According to Google, the best use case for multisearch, for now, is shopping results. 

The company is rolling out the beta of this feature on Thursday to US users of the Google app on both Android and iOS platforms. Just click the camera icon next to the microphone icon or open a photo from your gallery, select what you want to search, and swipe up on your results to reveal an “add to search” button where you can type additional text.

This announcement is a public trial of a feature that the search giant has been teasing for almost a year: Google first discussed it when introducing MUM (Multitask Unified Model), its new AI model for search, at Google I/O 2021, then provided more information in September 2021.

MUM replaced the previous AI model, BERT (Bidirectional Encoder Representations from Transformers), and, according to Google, is around a thousand times more powerful than its predecessor.

Google Lens Multisearch (Image credit: Google)

Analysis: will it be any good?

It’s in beta for now, but Google certainly made a big deal of MUM during its announcement. From what we’ve seen, Lens is usually pretty good at identifying objects and translating text. The AI enhancements, however, add another dimension and could make it a more useful tool for finding information about the exact thing you're looking at right now, as opposed to general information about something like it.

It does, though, raise questions about how well it will pin down exactly what you want. For example, if you see a couch with a striking pattern on it but would rather have that pattern on a chair, will you reasonably be able to find one? Will the results point to a physical store or to an online storefront like Wayfair? Google searches often surface inaccurate physical inventories for nearby stores – is that getting better, too?

We have plenty of questions, but they’ll likely only be answered once more people start using multisearch. The nature of AI is to get better with use, after all.

