Google’s NotebookLM is now an even smarter assistant and better fact-checker

Google is updating its NotebookLM writing assistant with improved performance and new features, among other things. It now runs on the company’s Gemini 1.5 Pro model, the same AI that powers the Gemini Advanced chatbot. 

Thanks to this upgrade, the assistant is more contextually aware, allowing you to ask “questions about [the] images, charts, and diagrams in your source” in order to gain a better understanding.

Although Gemini 1.5 Pro now powers NotebookLM, it's unclear whether that means the AI will accept longer text prompts or produce more detailed answers. After all, Google's model can handle context windows of up to a million tokens. We reached out to the tech giant to ask whether people can expect to see support for bigger prompts.

The way you prompt NotebookLM will mostly stay the same, but responses will now include inline citations in the form of encircled numbers. Clicking on one of these citations takes you directly to the supporting passage inside the source document. That way, you can double-check the material to see if NotebookLM got things right.

AI hallucinations continue to be a problem for the tech, so it’s important that people are able to fact-check outputs. When it comes to images, opening the citation causes the source picture to appear in a small window next to the text.

NotebookLM

(Image credit: Google)

Upgraded sourcing

Support for information sources is expanding to include “Google Slides and web URLs” alongside PDFs, text files, and Google Docs. A new feature called Notebook Guide is being added, too. It lets you reorganize the material you add into a specific format, such as a series of FAQs or a Study Guide. It could be quite handy.

There are other changes too, though they’re not included in the initial announcement. For instance, you can now have up to 50 sources per project, and each one can be up to 500,000 words long, according to The Verge. Previously, users could only have five sources at once, so it’s a big step up.

Raiza Martin, a senior product manager at Google Labs, also told the publication that “NotebookLM is a closed system.” This means the AI won’t perform any web searches beyond what you, the user, give it in a prompt. Every response it generates pertains only to the information it has on hand.

NotebookLM’s latest update is live now and is rolling out to “over 200 countries and territories around the world.” You can head over to the AI’s official website to try out the new features. But do keep in mind that NotebookLM is still considered experimental, so you may run into some quirks. The Verge, for instance, claimed the URL source function didn’t work in its demo. However, in our experience, the tool worked just fine.

Be sure to check out TechRadar's list of the best business laptops for 2024, if you plan on using the assistant at work.


Google’s Gemini Nano could launch on the Pixel 8a as early as next month

Google promised back in March that it would eventually bring Gemini Nano to the Pixel 8 and Pixel 8a although no one knew exactly when. Until now, that is, as the update may arrive “very soon.” 

Android Authority recently did a deep dive into the Pixel 8 series’ AICore app and found a pair of toggle switches inside the settings menu. The second switch is the main focus because flipping it turns on “on-device GenAI features.” Activating the tool presumably allows a Pixel 8 smartphone to harness Gemini Nano in order to power the device’s generative AI capabilities. 

It doesn’t actually say “Nano” in the accompanying text, though; just “Gemini”. However, we believe it is that particular model and not some stripped-down version; after all, this is what Google said it would deliver. Plus, the Pixel 8 trio runs on the Tensor G3 chipset, which can support the AI with the right hardware adjustments.

No one knows exactly what the AI will actually do here, though. Gemini Nano on the Pixel 8 Pro powers the phone’s “multimodal capabilities,” including, but not limited to, the Summarize tool in the Recorder app and Magic Compose in Google Messages.

Imminent launch

The other toggle switch isn’t as noteworthy – a screenshot in the report reveals it enables “AICore Persistent.” This gives applications “permission… to use as many [device] resources as possible”. 

Android Authority states that the sudden appearance of these switches could mean Google is almost ready to “announce Nano support for the Pixel 8/8a”, maybe within the next couple of days or weeks. The company typically releases major updates for its mobile platforms in June, so we expect to see Gemini roll out to the rest of the Pixel 8 line next month. 

According to the publication, the toggles will likely be found in the Developer Options section of the smartphone’s Settings menu. However, it's important to note that this could change at any time.

Technically minded users can find the switches by digging around in the latest AICore update. The software is available for download from the Google Play Store; however, your Pixel 8 model may need to be running Android 15 Beta 2.1. 9to5Google, in its coverage, claims to have found the AICore toggles on a Pixel 8 with the beta but not on a phone running Android 14.

As for the Pixel 8 Pro, it's unknown if the high-end model is going to receive the same update although there's a chance it could. Android Authority points out it's currently not possible for Pro users to deactivate Gemini Nano, but this update could give them the option.

Be sure to check out TechRadar's list of the best Pixel phones for 2024.


What is Project Astra? Google’s futuristic universal assistant explained

Almost everyone in tech is investing heavily in artificial intelligence right now, and Google is among those most committed to an AI future. Project Astra, unveiled at Google I/O 2024, is a big part of that – and it could end up being one of Google's most important AI tools.

Astra is being billed as “a universal AI agent that is helpful in everyday life”. It's essentially something like a blending of Google Assistant and Google Gemini, with added features and supercharged capabilities for a natural, conversational experience.

Here, we're going to explain everything you need to know about Project Astra – how it works, what it can do, when you can get it, and how it might shape the future. 

What is Project Astra?

In some ways, Project Astra isn't any different to the AI chatbots we've already got: you ask a question about what's in a picture, or about how to do something, or request some creative text to be generated, and Astra gets on with it.

What elevates this particular AI project is its multimodal functionality (the way text, images, video, and audio can all be combined), the speed that the bot works at, and how conversational it is. Google's aim, as we've already mentioned, is to create “a universal AI agent” that can do anything and understand everything.

Google IO 2024

Project Astra in action (Image credit: Google)

Think about the HAL 9000 computer in Kubrick's 2001: A Space Odyssey, or the Samantha assistant in the movie Her: talking to them is like talking to a human being, and there isn't much they can't do. (Both those AIs eventually got too big for their creators to control, but let's ignore that for the time being.)

Project Astra has been built to understand context and to take actions, to be able to work in real time, and to remember conversations from the past. From the demos we've seen so far, it works on phones and on smart glasses, and is powered by the Google Gemini AI models – so it may eventually be part of the Gemini app, rather than something that's separate and standalone.

When is Project Astra coming out?

Project Astra is in its early stages: this isn't something that's going to be available to the masses for a few months at least. That said, Google says that “some of these agent capabilities will come to Google products like the Gemini app later this year”, so it looks as though elements of Astra will appear gradually in Google's apps as we go through 2024.

When we were given some hands-on time with Project Astra at I/O 2024, these sessions were limited to four minutes each – so that gives you some idea of how far away this is from being something that anyone, anywhere can make use of. What's more, the Astra kit didn't look particularly portable, and the Google reps were careful to refer to it as a prototype.

Project Astra demonstration room at Google I/O showing large display and toys

We’ve already tried Project Astra (Image credit: Philip Berne / Future)

Taking all that together, we get the impression that some of the Project Astra tricks we've seen demoed might appear in the Google Gemini app sooner rather than later. At the same time, the full Astra experience – perhaps involving some dedicated hardware – is probably not going to be rolling out until 2025 at the earliest.

Now that Google has shared what Project Astra is and what it's capable of, it's likely that we're going to hear a whole lot more about it in the months ahead. Bear in mind that ChatGPT and DALL-E developer OpenAI is busy pushing out major upgrades of its own, and Google isn't going to want to be left behind.

What can I do with Project Astra?

One of Google's demos shows Astra running on a phone, using its camera input and talking naturally to a user: it's asked to flag up something in view that can play sounds, and correctly identifies a speaker. When an arrow is drawn on screen, Astra then recognizes and talks about the speaker component highlighted by the arrow.

In another demo, we see Astra correctly identifying world landmarks from drawings in a sketchbook. It's also able to remember the order of objects in a list, identify a neighborhood from an image, understand the purpose of sections of code that are shown to it, and solve math problems that are written out.

There's a lot of emphasis on recognizing objects, drawings, text, and more through a camera system – while at the same time understanding human speech and generating appropriate responses. This is the multimodal part of Project Astra in action, which makes it a step up from what we already have – with improvements in caching, recording, and processing key to the real time responsiveness.

In our hands-on time with Project Astra, we were able to get it to tell a story based on objects that we showed to the camera – and adapt the story as we went on. Further down the line, it's not difficult to imagine Astra applying these smarts as you explore a city on vacation, or solve a physics problem on a whiteboard, or provide detailed information about what's being shown in a sports game.

Which devices will include Project Astra?

In the demonstrations of Project Astra that Google has shown off so far, the AI is running on an unidentified smartphone and an unidentified pair of smart glasses – suggesting that we might not have heard the last of Google Glass yet.

Google has also hinted that Project Astra is going to be coming to devices with other form factors. We've already mentioned the Her movie, and it's well within the realms of possibility that we might eventually see the Astra bot built into wireless earbuds (assuming they have a strong enough Wi-Fi connection).

Google Pixel 8 Pro back in porcelain in front of animal print

Expect to see Project Astra turn up on Pixel phones, eventually (Image credit: Future / Philip Berne)

In the hands-on area that was set up at Google I/O 2024, Astra was powered through a large camera, and could only work with a specific set of objects as props. Clearly, any device that runs Astra's impressive features is going to need a lot of on-board processing power, or a very quick connection to the cloud, in order to keep up the real-time conversation that's core to the AI.

As time goes on and technology improves, though, these limitations should slowly begin to be overcome. The next time we hear something major about Project Astra could be around the time of the launch of the Google Pixel 9 in the last few months of 2024; Google will no doubt want to make this the most AI-capable smartphone yet.


Google’s Live Caption may soon become more emotionally expressive on Android

Google is reportedly working on implementing several customization features to the Live Caption accessibility feature on mobile devices. Evidence of this update was discovered by software deep diver Assemble Debug after digging through the Android System Intelligence app. According to an image given to Android Authority, there will be four options in total. We don't know much, but there is a little bit of explanation to be found. 

The first one allows Android phones to display “emoji icons” in a caption transcript, perhaps to better convey the emotions the voices are expressing. The other three aren't as clear. The second feature will “emphasize emotional intensity in [the] transcription”, while the remaining two are said to cover “word duration [effects]” and the ability to display “emotional tags.”

Feature breakdown

As you can see, the wording is pretty vague, but there’s enough to paint a picture. It seems Live Caption will become better at replicating emotions in voices it transcribes. Say, for example, you’re watching a movie and someone is angrily screaming. Live Caption could perhaps show text in all caps to signify yelling. 

The feature could also slant words in a line to indicate whenever someone is being sarcastic or trying to imply something. The word duration effect could refer to the software showing drawn-out letters in a set of captions. Maybe someone is singing and they begin to hold a note; the sound that’s being held could be displayed thanks to this toggle. 

Emotional tags are admittedly more difficult to envision. Android Authority mentions the tags will be shown and included in a transcript. This could mean that the tool is going to add clear indicators within transcriptions of what a subject is expressing at the moment. Users might see the word “Angry” pop up whenever a person is feeling angry about something, or “Sad” whenever someone is crying.
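If you're curious how that might look in practice, here's a minimal Python sketch of emotion-aware caption styling. The emotion label, intensity score, and held-note flag are purely hypothetical inputs we've made up for illustration; this doesn't reflect how Google actually implements Live Caption.

```python
# Illustrative sketch only: hypothetical emotion metadata and styling rules,
# not Google's actual Live Caption implementation.

def stretch_last_vowel(word: str, repeat: int = 4) -> str:
    """Stretch the final vowel of a word, e.g. "hello" -> "helloooo"."""
    for i in range(len(word) - 1, -1, -1):
        if word[i].lower() in "aeiou":
            return word[:i] + word[i] * repeat + word[i + 1:]
    return word


def style_caption(text: str, emotion: str, intensity: float, held_note: bool = False) -> str:
    """Apply simple emotion-aware styling to one caption segment."""
    styled = text
    if held_note:
        styled = stretch_last_vowel(styled)            # mimic a sustained note
    if emotion == "angry" and intensity > 0.7:
        styled = styled.upper()                        # all caps to signal shouting
    if emotion in {"angry", "sad", "excited"}:
        styled = f"[{emotion.capitalize()}] {styled}"  # visible emotional tag
    return styled


print(style_caption("get out of here", "angry", 0.9))
# -> "[Angry] GET OUT OF HERE"
print(style_caption("hello", "excited", 0.5, held_note=True))
# -> "[Excited] helloooo"
```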

Greater utility

That’s our best guess. If these rumored features do operate as described, they would give Live Caption even greater utility than it already has. The feature was introduced back in 2019 as an accessibility tool to help people enjoy content if they’re hard of hearing or can’t turn on the sound for whatever reason.

The current captions are rather plain, but with this update, emotions could be conveyed in Google’s tool for a more immersive experience.

Android Authority claims the features were found in a “variant of the Android System Intelligence app”. We believe this means they were located inside a special version of the app meant for first-party hardware like the Google Pixel, so the customization tools may be exclusive to the Pixel 8 or a future model. It’s too early to tell at the moment. Hopefully, the upgraded Live Caption sees a much wider release.

Until we learn more, check out TechRadar's list of the best Android phones for 2024.


Could ChromeOS eventually run on your Android phone? Google’s demo of exactly that is an exciting hint for the future

A recent report has revealed that Google held a private demonstration that showed off a tailored version of ChromeOS, its operating system (OS) for Chromebooks, running on an Android device. Of course, Android is the operating system for Google's smartphones and tablets, while ChromeOS was developed for its line of Chromebook laptops and Chromebox desktop computers.

Unnamed sources spoke with Android Authority and shared that Google hosted a demo of a specially built Chromium OS (an open-source version of ChromeOS hosted and developed by Google), given the codename ‘ferrochrome’, showing it off to other companies. 

The custom build was run in a virtual machine (think of this as a digital emulation of a device) on a Pixel 8, and while this Android smartphone was used as the hardware, its screen wasn't. The OS was projected to an external display, made possible by a recent development for the Pixel 8 that enables it to connect to an external display.

This was made possible by the Android Virtualization Framework (AVF), which makes it possible to run a secure and private execution environment for highly sensitive code. The AVF was developed for other purposes, but this demonstration showed that it could also be used to run other operating systems. 

Close up of the Samsung Galaxy S20

(Image credit: Future / James Ide)

What this means for Android users, for now

This demonstration is evidence that Google has the capability to run ChromeOS on Android, but there's no word from Google, or even a hint, that it has any plans to merge the two platforms. It also doesn't mean that the average Android device user will be able to swap over to ChromeOS, or that Google is planning to ship a version of its Pixel devices with ChromeOS either. 

In short, don’t read much into this yet, but it’s significant that this can be done, and possibly telling that Google is toying with the idea in some way.

As time has gone on, Google has developed Android and ChromeOS to be more synergistic, notably giving ChromeOS the capability to run Android apps natively. In the past, you may recall Google even attempted to make a hybrid of Android and ChromeOS, with the codename Andromeda. However, work on that was shelved as the two operating systems were already seeing substantial success separately. 

To put these claims to the test, Android Authority created its own ‘ferrochrome’ custom ChromeOS that it was able to run using a virtual machine on a Pixel 7 Pro, confirming that it's possible and providing a video of this feat.

For now, then, we can only wait and see if Google is going to explore this any further. But it’s already interesting to see Android Authority demonstrate this is possible, and that the tools to do this already exist if developers want to attempt it themselves. Virtualization is a popular method to run software originally built for another platform, and many modern phones have the hardware specs to facilitate it. It could also be a pathway for Google to improve the desktop mode for the upcoming Android 15, as apparently, the version seen in beta has some way to go. 


Google’s answer to OpenAI’s Sora has landed – here’s how to get on the waitlist

Among the many AI treats that Google tossed into the crowd during its Google I/O 2024 keynote was a new video tool called Veo – and the waiting list for the OpenAI Sora rival is now open for those who want early access.

From Google's early Veo demos, the generative video tool certainly looks a lot like Sora, which is expected to be released “later this year.” It promises to whip up 1080p resolution videos that “can [be] beyond a minute” and in different cinematic styles, from time-lapses to aerial drone shots. You can see an example further down this page.

Veo, which is the engine behind a broader tool from Google's AI Test Kitchen called VideoFX, can also help you edit existing video clips. For example, you can give it an input video alongside a command, and it'll be able to generate extra scenery – Google's example being the addition of kayaks to an aerial coastal scene.

But like Sora, Veo is also only going to be open to a select few early testers. You can apply to be one of those 'Trusted Testers' now using the Google Labs form. Google says it will “review all submissions on a rolling basis” and some of the questions – including one that asks you to link to relevant work – suggest it could initially only be available to digital artists or filmmakers.

Still, we don't know the exact criteria to be an early Veo tester, so it's well worth applying if you're keen to take it for a spin.

The AI video tipping point

Veo certainly isn't the first generative video tool we've seen. As we noted when the Veo launch first broke, the likes of Synthesia, Colossyan, and Lumiere have been around for a while now. OpenAI's Sora has also hit the mainstream with its early music videos and strange TED Talk promos.

These tools are clearly hitting a tipping point because even the relatively conservative Adobe has shown how it plans to plug generative AI video tools into its industry-standard editor Premiere Pro, again “later this year.”

But the considerable computing power needed to run the likes of Veo's diffusion transformer models and maintain visual consistency across multiple frames is also a major bottleneck for a wider rollout, which explains why many of these tools are still in demo form.

Still, we're now reaching a point where these tools are ready to partially leap into the wild, and being an early beta tester is a good way to get a feel for them before the inevitable monthly subscriptions are defined and rolled out.


Google’s Project Astra could supercharge the Pixel 9 – and help Google Glass make a comeback

I didn't expect Google Glass to make a minor comeback at Google I/O 2024, but it did thanks to Project Astra. 

That's Google's name for a new prototype of AI agents, underpinned by the Gemini multimodal AI, that can make sense of video and speech inputs, and smartly react to what a person is effectively looking at and answer queries about it. 

Described as a “universal AI” that can be “truly helpful in everyday life”, Project Astra is designed to be proactive, teachable, and able to understand natural language. And in a video, Google demonstrated this with a person using what looked like a Pixel 8 Pro with the Astra AI running on it. 

By pointing the phone's camera at a room, the person was able to ask Astra to “tell me when you see something that makes sound”, to which the AI flagged a speaker it could see within the camera's viewfinder. From there the person was able to ask what a certain part of the speaker was, with the AI replying that the part in question is a tweeter and handles high frequencies. 

But Astra does a lot more: it can identify code on a monitor and explain what it does, and it can work out where someone is in a city and provide a description of that area. Heck, when prompted, it can even make an alliterative sentence around a set of crayons in a fashion that's a tad Dr Seuss-like.

It can even recall where the user has left a pair of glasses, as the AI remembers where it saw them last. It was able to do this because the AI is designed to encode video frames of what it's seen, combine that video with speech inputs, and put it all together in a timeline of events, caching that information so it can recall it later at speed. 
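To make that "timeline of events" idea a little more concrete, here's a minimal Python sketch of a timestamped cache that can be queried later. The event structure and the simple keyword matching are our own assumptions for illustration, not Google's actual design.

```python
# Illustrative sketch only: a toy "timeline of events" cache in the spirit of the
# behaviour described in the demo. The event structure, labels, and matching
# logic are assumptions for illustration, not Google's actual design.

import time
from dataclasses import dataclass, field


@dataclass
class Event:
    timestamp: float
    source: str        # "video" or "speech"
    description: str   # a short caption of what was seen or heard


@dataclass
class Timeline:
    events: list[Event] = field(default_factory=list)

    def add(self, source: str, description: str) -> None:
        """Encode an observation as a timestamped event and cache it."""
        self.events.append(Event(time.time(), source, description))

    def recall(self, query: str) -> Event | None:
        """Return the most recent cached event that mentions the query."""
        for event in reversed(self.events):
            if query.lower() in event.description.lower():
                return event
        return None


timeline = Timeline()
timeline.add("video", "glasses on the desk next to a red apple")
timeline.add("speech", "tell me when you see something that makes sound")
timeline.add("video", "speaker on the shelf")

hit = timeline.recall("glasses")
if hit:
    print(f"Last seen: {hit.description}")
```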

Then flipping over to a person wearing the Google Glass 'smart glasses', Astra could see that the person was looking at a diagram of a system on a whiteboard, and figure out where optimizations could be made when asked about them. 

Such capabilities suddenly make Glass seem genuinely useful, rather than the slightly creepy and arguably dud device it was a handful of years ago; maybe we'll see Google return to the smart glasses arena after this. 

Project Astra can do all of this thanks to using multimodal AI, which in simple terms is a mix of neural network models that can process data and inputs from multiple sources; think mixing information from cameras and microphones with knowledge the AI has already been trained on.

Google didn't say when Project Astra will make it into products, or even into the hands of developers, but Google's DeepMind CEO Demis Hassabis said that “some of these capabilities are coming to Google products, like the Gemini app, later this year.” I'd be very surprised if that doesn't mean the Google Pixel 9, which we're expecting to arrive later this year.

Now it's worth bearing in mind that Project Astra was shown off in a very slick video, and the reality of such onboard AI agents is they can suffer from latency. But it's a promising look at how Google will likely integrate actually useful AI tools into its future products.


Google’s Gemini AI app could soon let you sync and control your favorite music streaming service

Google's latest AI experiment, Gemini, is about to get a whole lot more useful thanks to support for third-party music streaming services like Spotify and Apple Music. This new development was apparently found in Gemini’s settings, and users will be able to pick their preferred streaming service to use within Gemini.

Gemini has been taking on roles across different Google products, particularly as a digital assistant, sometimes in place of and sometimes in tandem with Google Assistant.

It’s still somewhat limited compared to Assistant and is not at the stage where it can fully replace the Google staple. One of these limitations is that it can’t enlist a streaming service of a user’s choice to play a song or other audio recording like many popular digital assistants (including Google Assistant) can. This might not be the case for long, however. 

The tech blog PiunikaWeb and X user @AssembleDebug claim that Gemini is getting the feature, and they have screenshots to back up their claim. 

Screenshots from PiunikaWeb’s tipster show that the Gemini app’s settings now have a new “Music” option, with text reading “Select preferred services used to play music” underneath. This will presumably allow users to choose from whatever streaming services Google deems compatible.

Once you choose a streaming service, Gemini will hopefully work seamlessly with that service and enable you to control it using voice commands. PiunikaWeb suggests that users will be able to use Gemini for song identification, possibly by letting Gemini listen to the song, and then interact with a streaming app to try and find the song that’s playing in their surroundings, similar to the way Shazam works. If that’s the case, that’s one fewer separate app you’ll need.

What we don't know yet, but hope to soon

Woman listening music on her headphones while resting on couch and holding her phone and looking out in the distance

(Image credit: Shutterstock/Dean Drobot)

This is all very exciting, and from the screenshots, it looks like the feature is well into development. 

It’s not clear if PiunikaWeb’s tipster could get the feature to actually work or which streaming services will work in sync with Gemini, and we don’t know when Google will roll this feature out. 

Still, it’s a highly requested feature and a must if Google has plans for Gemini to take Assistant’s place, so it’ll probably be rolled out in a future Gemini update. It also suggests to me that Google is pretty committed to expanding Gemini’s repertoire so that it can stand alongside Google’s other popular products and services. 


Confused about Google’s Find My Device? Here are 7 things you need to know

It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise. The update was originally announced back in May 2023, but was soon delayed with no apparent launch date. Then, out of nowhere, Google decided to release the software on April 8 without major fanfare. As a result, you may feel lost, but we can help you find your way.

Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.

1. It’s a big upgrade for Google’s old Find My Device network 

Google's Find My Device feature

(Image credit: Google)

The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices, most notably Bluetooth location trackers. 

Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.
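Google hasn't explained how a Bluetooth sighting becomes an approximate location, but one generic ingredient in this kind of estimate is the log-distance path-loss model, which guesses distance from signal strength. The sketch below is purely illustrative, with assumed calibration values, and is not Google's method.

```python
# Illustrative sketch only: Google hasn't said how Find My Device turns a Bluetooth
# sighting into an approximate location. This is just the generic log-distance
# path-loss model that is often used to guess distance from signal strength.

def estimate_distance_m(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Rough distance estimate in metres from a BLE RSSI reading.

    tx_power_dbm: expected RSSI at 1 m (a device-specific calibration value).
    n: path-loss exponent (roughly 2 in free space, higher indoors).
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * n))


# A reading of -75 dBm suggests the tracker is roughly 6 m away in open space.
print(f"{estimate_distance_m(-75.0):.1f} m")
```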

Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.

2. Google was waiting for Apple to add support to iPhones 

iPhone 15 from the front

(Image credit: Future)

The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone manufacturer decided to drag its feet when it came to adding unknown tracker alerts to its own iPhone devices.

The wait may soon be over as the iOS 17.5 beta contains lines of code suggesting that the iPhone will soon get these anti-stalking measures. Soon, iOS devices might encourage users to disable unwanted Bluetooth trackers uncertified for Apple’s Find My network. It’s unknown when this feature will roll out as the features in the Beta don’t actually do anything when enabled. 

Given that this anti-tracking code is already present in the iOS 17.5 beta, Apple's release may be imminent. Apple may have given Google the green light to roll out the Find My Device upgrade ahead of time to prepare for its own software launch.

3. It will roll out globally

Android

(Image credit: Future)

Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.

Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict. All you need is a smartphone running Android 9 or later with Bluetooth capabilities.

If you own either a Pixel 8 or Pixel 8 Pro, you’ll get an exclusive feature: the ability to find the phone through the network even if it's powered down. Google reps said these models have special hardware that allows them to pour power into their Bluetooth chip when they're off. Google is working with other manufacturers on bringing this feature to other premium Android devices.

4. You’ll receive unwanted tracker alerts

Apple AirTags

(Image credit: Apple)

Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have utilized them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.

For nearly a year, the OS could only seek out AirTags, but now, with the upgrade, Android phones can locate Bluetooth trackers from other third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll help ensure your privacy and safety.

You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information. 

5. Chipolo and Pebblebee are launching new trackers for it soon

Chipolo's new trackers

(Image credit: Chipolo)

Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.

On May 27, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair will also sport speakers that can ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.

Pebblebee is achieving something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. These three are available for pre-order right now, although no shipping date was given. 

6. It’ll work nicely with your Nest products

Google Nest Wifi

(Image credit: Google )

For smart home users, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find said missing item. Be aware the tech won’t give you an exact location.

A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.

Below the message will be a set of tools to help you locate it. You can play a sound from the tracker’s speakers, share the device, or mark it as lost.

7. Headphones are invited to the tracking party too

Someone wearing the Sony WH-1000XM5 headphones against a green backdrop

(Image credit: Gerald Lynch/TechRadar/Future)

Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of earbuds and headphones. The list of supported hardware is not large, as it’ll only be able to locate three specific models: the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could come at a later time.

Quite the extensive list, as you can see, but it's all important information to know, and everything works together to keep you safe. 

Be sure to check out TechRadar's list of the best Android phones for 2024.


Google’s Gemini will be right back after these hallucinations: image generator to make a return after historical blunders

Google is gearing up to relaunch its image creation tool that’s part of the newly-rebranded generative artificial intelligence (AI) bot, Gemini, in the next few weeks. The generative AI image creation tool is in theory capable of generating almost anything you can dream up and put into words as a prompt, but “almost” is the key word here. 

Google has pumped the brakes on Gemini’s image generation after the model was observed creating historical depictions and other questionable images that were considered inaccurate or offensive. However, it looks like Gemini could return to image generation soon, as Google DeepMind CEO Demis Hassabis announced that the feature will be rebooted in the coming weeks, after the company takes time to address these issues. 

Image generation came to Gemini earlier in February, and users were keen to test its abilities. Some people attempted to generate images depicting a certain historical period that appeared to deviate greatly from accepted historical fact. Some of these users took to social media to share their results and direct criticism at Google. 

The images caught many people’s attention and sparked many conversations, and Google has recognized the images as a symptom of a problem within Gemini. The tech giant then chose to take the feature offline and fix whatever was causing the model to dream up such strange and controversial pictures. 

Speaking at a panel at the Mobile World Congress (MWC) event in Barcelona, Hassabis confirmed that Gemini was not working as intended, and that it would take some weeks to fix the feature and bring it back online.

Person using a laptop in a coffeeshop

(Image credit: Shutterstock)

If at first your generative AI bot doesn't succeed…

Google’s first attempt at a generative AI chatbot was Bard, which saw a lukewarm reception and didn’t win users over from the more popular ChatGPT in the way Google had hoped. After that, it changed course and debuted its revamped and rebranded family of generative models, Gemini. Like OpenAI with ChatGPT, Google now offers a premium tier for Gemini, which provides advanced features for a subscription. 

The examples of Gemini's misadventures have also reignited discussions about AI ethics in general, Google’s AI ethics in particular, and issues like the accuracy of AI-generated output and AI hallucinations. Companies like Microsoft and Google are pushing ahead to win the AI assistant arms race, but while racing ahead, they’re in danger of releasing products with flaws that could undermine their hard work.

AI-generated content is becoming increasingly popular and, especially given their size and resources, these companies can (and really, should) be held to a high standard of accuracy. High-profile fails like the one Gemini experienced aren’t just embarrassing for Google – they could damage the product’s perception in the eyes of consumers. There’s a reason Google rebranded Bard after its much-mocked debut.

There’s no doubt that AI is incredibly exciting, but Google and its peers should be mindful that rushing out half-baked products just to get ahead of the competition could spectacularly backfire.
