Confused about Google’s Find My Device? Here are 7 things you need to know

It took a while, but Google has released the long-awaited upgrade to its Find My Device network. The launch may come as a surprise: the update was originally announced back in May 2023, but was soon delayed with no new launch date in sight. Then, out of nowhere, Google released the software on April 8 without major fanfare. If you feel a little lost as a result, we can help you find your way.

Here's a list of the seven most important things to know about the Find My Device update. We cover what's new as well as which devices are compatible with the network, because not everything works yet and there's still work to be done.

1. It’s a big upgrade for Google’s old Find My Device network 

Google's Find My Device feature

(Image credit: Google)

The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices, most notably Bluetooth location trackers. 

Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.

Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.
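To give a sense of what "end-to-end encrypted" means here, below is a highly simplified Python sketch of the general idea, not Google's actual Find My Device protocol: nearby phones encrypt a sighting with a public key associated with the tracker, and only the owner's private key can decrypt it, so the relay server only ever handles ciphertext. The PyNaCl library, the key handling, and the sample coordinates are illustrative assumptions.

```python
# A toy sketch of the end-to-end idea, not Google's actual protocol: nearby
# phones encrypt a sighting with a public key tied to the tracker, and only
# the owner's private key (which never leaves their devices) can decrypt it.
from nacl.public import PrivateKey, SealedBox  # pip install pynacl

owner_private = PrivateKey.generate()       # stays on the owner's phone
tracker_public = owner_private.public_key   # advertised by the tag over Bluetooth

# A stranger's phone spots the tag and uploads an encrypted sighting.
sighting = SealedBox(tracker_public).encrypt(b"approx lat=51.50, lon=-0.12")
relay_server = {"tag-123": sighting}        # hypothetical relay stores ciphertext only

# Only the owner, holding the private key, can recover the location.
print(SealedBox(owner_private).decrypt(relay_server["tag-123"]).decode())
```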

2. Google was waiting for Apple to add support to iPhones 

iPhone 15 from the front

(Image credit: Future)

The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone manufacturer decided to drag its feet when it came to adding unknown tracker alerts to iOS.

The wait may soon be over, as the iOS 17.5 beta contains lines of code suggesting that the iPhone will get these anti-stalking measures, with iOS devices encouraging users to disable unwanted Bluetooth trackers that aren’t certified for Apple’s Find My network. It’s unknown when this will roll out, as the relevant features in the beta don’t actually do anything when enabled. 

Given that this code is already present in iOS 17.5, Apple's release may be imminent, and the company may have given Google the green light to roll out the Find My Device upgrade ahead of its own software launch.

3. It will roll out globally

Android

(Image credit: Future)

Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.

Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict. All you need is a smartphone running Android 9 or later with Bluetooth capabilities.

If you own either a Pixel 8 or Pixel 8 Pro, you’ll be given an exclusive feature: the ability to find a phone through the network even if the phone is powered down. Google reps said these models have special hardware that keeps their Bluetooth chip powered when they're off. Google is working with other manufacturers to bring this feature to more premium Android devices.

4. You’ll receive unwanted tracker alerts

Apple AirTags

(Image credit: Apple)

Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, bad actors have used them as an inexpensive way to stalk people. Google eventually updated Android to give users a way to detect unwanted AirTags.

For nearly a year, the OS could only seek out AirTags, but now with the upgrade, Android phones can locate Bluetooth trackers from other brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it helps ensure your privacy and safety.

You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information. 

5. Chipolo and Pebblebee are launching new trackers for it soon

Chipolo's new trackers

(Image credit: Chipolo)

Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.

On May 27th, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair also sport speakers that ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have long battery life: the company says the CARD finder lasts as long as two years.

Pebblebee is doing something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. These three are available for pre-order right now, although no shipping date has been given. 

6. It’ll work nicely with your Nest products

Google Nest Wifi

(Image credit: Google )

If you’re a smart home user, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find the missing item. Be aware the tech won’t give you an exact location.

A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.

Below the message will be a set of tools to help you locate it. You can either play a sound from the tracker’s speakers, share the device, or mark it as lost.

7. Headphones are invited to the tracking party too

Someone wearing the Sony WH-1000XM5 headphones against a green backdrop

(Image credit: Gerald Lynch/TechRadar/Future)

Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of headphones and earbuds. The list of supported hardware is not large, as the network can only locate three specific models: the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support could arrive at a later time.

That's quite an extensive list, as you can see, but it's all important information to know, and it all works together to keep you and your devices safe. 

Be sure to check out TechRadar's list of the best Android phones for 2024.


Google’s Gemini will be right back after these hallucinations: image generator to make a return after historical blunders

Google is gearing up to relaunch the image creation tool in Gemini, its newly rebranded generative artificial intelligence (AI) bot, in the next few weeks. The tool is in theory capable of generating almost anything you can dream up and put into words as a prompt, but “almost” is the key word here. 

Google pumped the brakes on Gemini’s image generation after the bot was observed creating historical depictions and other questionable images that were considered inaccurate or offensive. However, the feature could return soon, as Google DeepMind CEO Demis Hassabis announced that it will be rebooted in the coming weeks, after the company takes time to address these issues. 

Image generation came to Gemini earlier in February, and users were keen to test its abilities. Some people attempted to generate images depicting certain historical periods, and the results appeared to greatly deviate from accepted historical fact. Some of these users took to social media to share their results and direct criticism at Google. 

The images caught a lot of attention and sparked plenty of conversation, and Google recognized them as a symptom of a problem within Gemini. The tech giant then chose to take the feature offline and fix whatever was causing the model to dream up such strange and controversial pictures. 

Speaking at a panel at the Mobile World Congress (MWC) event in Barcelona, Hassabis confirmed that Gemini was not working as intended, and that it would take some weeks to amend the feature and bring it back online.

Person using a laptop in a coffeeshop

(Image credit: Shutterstock)

If at first your generative AI bot doesn't succeed…

Google’s first attempt at a generative AI chatbot was Bard, which received a lukewarm reception and didn’t win users over from the more popular ChatGPT in the way Google had hoped. The company then changed course and debuted its revamped and rebranded family of generative models, Gemini. Like OpenAI with ChatGPT, Google now offers a premium tier for Gemini, with advanced features available via a subscription. 

The examples of Gemini's misadventures have also reignited discussions about AI ethics generally, and Google’s AI ethics specifically, around issues like the accuracy of generated AI output and AI hallucinations. Companies like Microsoft and Google are pushing hard to win the AI assistant arms race, but in their rush they’re in danger of releasing products with flaws that could undermine their hard work.

AI-generated content is becoming increasingly popular and, especially given their size and resources, these companies can (and really, should) be held to a high standard of accuracy. High-profile fails like the one Gemini experienced aren’t just embarrassing for Google – they could damage the product’s perception in the eyes of consumers. There’s a reason Google rebranded Bard after its much-mocked debut.

There’s no doubt that AI is incredibly exciting, but Google and its peers should be mindful that rushing out half-baked products just to get ahead of the competition could spectacularly backfire.


Are you a Reddit user? Google’s about to feed all your posts to a hungry AI, and there’s nothing you can do about it

Google and Reddit have announced a huge content licensing deal, reportedly worth a whopping $60 million – but Reddit users are pissed.

Why, you might ask? Well, the deal involves Google using content posted by users on Reddit to train its AI models, chiefly its newly launched Google Gemini AI suite. It makes sense; Reddit contains a wealth of information and users typically talk colloquially, which Google is probably hoping will make for a more intelligent and more conversational AI service. However, this also essentially means that anything you post on Reddit now becomes fuel for the AI engine, something many users are taking umbrage at.

While the very first thing that came to mind was MIT’s insane Reddit-trained ‘psychopath AI’ from years ago, it’s fair to say that AI model training has come a long way since then – so hooking it up to Reddit hopefully won’t turn Gemini into a raving lunatic.

The deal, announced yesterday by Reddit in a blog post, will have other benefits as well: since many people specifically append ‘reddit’ to their search queries when looking for the answer to a question, Google aims to make getting to the relevant content on Reddit easier. Reddit plans to use Google’s Vertex AI to improve its own internal site search functionality, too, so Reddit users will enjoy a boost to the user experience – rather than getting absolutely nothing in return for their training data. 

Do Redditors deserve a cut of that $60 million?

A lot of Reddit users have been complaining about the deal in various threads on the site, for a wide variety of reasons. Some users have privacy worries, some voiced concerns about the quality of output from an AI trained on Reddit content (which, let’s be honest, can get pretty toxic), and others simply don’t want their posts ‘stolen’ to train an AI.

Unfortunately for any unhappy Redditors, the site’s Terms of Service do mean that Reddit can (within reason) do whatever it wants with your posts and comments. Calling the content ‘stolen’ is inaccurate: if you’re a Reddit user, you’re the product, and Reddit is the one selling. 

Personally, I’m glad to see a company actually getting paid for providing AI training data, unlike the legal grey-area dodginess of previous chatbots and AI art tools that were trained on data scraped from the internet for free without user consent. By agreeing to the Reddit TOS, you’re essentially consenting to your data being used for this.

Google Gemini could stand to benefit hugely from the training data produced by this content use deal. (Image credit: Google)

Some users are positively incensed by this though, claiming that if they’re the ones making the content, surely they should be entitled to a slice of the AI pie. I’m going to hand out some tough love here: that’s a ridiculous and naive argument. Do these people believe they deserve a cut of ad revenue too, since they made a hit post that drew thousands of people to Reddit? This isn’t the same as AI creators quietly nabbing work from independent artists on Twitter.

At the end of the day, you’re never going to please everyone. If this deal has actual potential to improve not just Google Gemini, but Google Search in general (as well as Reddit’s site search), then the benefits arguably outweigh the costs – although I do think Reddit has a moral obligation to ensure that all of its users are fully informed about the use of their data. 

A few paragraphs in the TOS aren’t enough, guys: you know full well nobody reads those.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new models, Google has also put out a ‘Responsible Generative AI Toolkit’ to support developers looking to experiment and build with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, which have both been pre-trained to filter out sensitive or personal information. Both versions have also been tuned with reinforcement learning from human feedback, which should significantly reduce the chance of any chatbot based on Gemma spitting out harmful content. 

 A step in the right direction 

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.
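For a concrete sense of what running Gemma on local hardware could look like, here's a minimal sketch using the Hugging Face Transformers library. The model ID "google/gemma-2b" and the prompt are assumptions for illustration, so check Google's model card for the current names, sizes, hardware requirements, and license terms.

```python
# A minimal sketch of running a Gemma model locally with Hugging Face
# Transformers; the model ID is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed ID for the 2B variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # runs on CPU by default

prompt = "Write a friendly greeting for a helpdesk chatbot."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```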


Google’s Gemini AI can now handle bigger prompts thanks to next-gen upgrade

Google’s Gemini AI has only been around for two months at the time of this writing, and already, the company is launching its next-generation model dubbed Gemini 1.5.

The announcement post gets into the nitty-gritty, explaining all the AI’s improvements in detail. It’s all rather technical, but the main takeaway is that Gemini 1.5 will deliver “dramatically enhanced performance.” This was accomplished by implementing a “Mixture-of-Experts” architecture (or MoE for short), which splits the work across multiple smaller expert models and only activates the ones relevant to a given task. This structure made Gemini easier to train as well as faster at learning complicated tasks than before.
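To illustrate the basic MoE idea (and only the idea: this is a toy sketch, not Gemini's actual architecture), here's a minimal Python example in which a gating function scores a handful of "expert" models and only the top-scoring ones process the input.

```python
# A toy illustration of Mixture-of-Experts routing, not Gemini's real design:
# a gating network scores every expert, and only the top-k experts actually
# process the input, which is what keeps large MoE models efficient.
import numpy as np

def moe_forward(x, experts, gate, top_k=2):
    scores = gate @ x                          # one score per expert
    chosen = np.argsort(scores)[-top_k:]       # indices of the best-scoring experts
    weights = np.exp(scores[chosen])
    weights /= weights.sum()                   # softmax over the chosen experts only
    # Only the selected experts run; the rest stay idle for this input.
    return sum(w * experts[i](x) for w, i in zip(weights, chosen))

rng = np.random.default_rng(0)
dim, num_experts = 8, 4
experts = [lambda v, W=rng.normal(size=(dim, dim)): W @ v for _ in range(num_experts)]
gate = rng.normal(size=(num_experts, dim))
print(moe_forward(rng.normal(size=dim), experts, gate))
```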

There are plans to roll out the upgrade to all three major versions of the AI, but the only one being released today for early testing is Gemini 1.5 Pro. 

What’s unique about it is the model has “a context window of up to 1 million tokens”. Tokens, as they relate to generative AI, are the smallest pieces of data LLMs (large language models) use “to process and generate text.” Bigger context windows allow the AI to handle more information at once. And a million tokens is huge, far exceeding what GPT-4 Turbo can do. OpenAI’s engine, for the sake of comparison, has a context window cap of 128,000 tokens. 
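To put those context window numbers in perspective, here's a rough back-of-the-envelope comparison; the words-per-token and words-per-page figures below are common rules of thumb rather than official numbers.

```python
# A rough capacity comparison. The conversion factors are rules of thumb,
# not official figures from Google or OpenAI.
WORDS_PER_TOKEN = 0.75   # typical estimate for English text
WORDS_PER_PAGE = 500     # assumption for a dense page of prose

for name, window in [("Gemini 1.5 Pro", 1_000_000), ("GPT-4 Turbo", 128_000)]:
    words = window * WORDS_PER_TOKEN
    pages = words / WORDS_PER_PAGE
    print(f"{name}: ~{words:,.0f} words, roughly {pages:,.0f} pages of text")
```

By that rough estimate, a million tokens works out to around 750,000 words, which helps explain how a 400-plus-page mission transcript can fit in a single prompt.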

Gemini Pro in action

With all these numbers being thrown around, the question is: what does Gemini 1.5 Pro look like in action? Google made several videos showcasing the AI’s abilities. Admittedly, it’s pretty interesting stuff, as they reveal how the upgraded model can analyze and summarize large amounts of text according to a prompt. 

In one example, they gave Gemini 1.5 Pro the 400-plus-page transcript of the Apollo 11 moon mission, showing that the AI could “understand, reason about, and identify” certain details in the document. The prompter asked the AI to locate “comedic moments” during the mission, and after 30 seconds, Gemini 1.5 Pro managed to find a few jokes the astronauts cracked while in space, including who told each one and an explanation of any references made.

These analysis skills can be used for other modalities. In another demo, the dev team gave the AI a 44-minute Buster Keaton movie. They uploaded a rough sketch of a gushing water tower and then asked for the timestamp of a scene involving a water tower. Sure enough, it found the exact part ten minutes into the film. Keep in mind this was done without any explanation about the drawing itself or any other text besides the question. Gemini 1.5 Pro understood it was a water tower without extra help.

Experimental tech

The model is not available to the general public at the moment. Currently, it’s being offered as an early preview to “developers and enterprise customers” through Google’s AI Studio and Vertex AI platforms for free. The company is warning testers they may experience long latency times since it is still experimental. There are plans, however, to improve speeds down the line.
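For developers in the preview, calling the model might look something like the sketch below, which mirrors the Apollo transcript demo. The google-generativeai package is Google's Python SDK for AI Studio, but the exact model name, the API key placeholder, and the transcript file are assumptions for illustration, so check the current documentation before relying on them.

```python
# A sketch of querying the preview model through Google's Python SDK;
# the model name, API key, and transcript file are illustrative assumptions.
import google.generativeai as genai  # pip install google-generativeai

genai.configure(api_key="YOUR_AI_STUDIO_KEY")
model = genai.GenerativeModel("gemini-1.5-pro-latest")  # assumed preview name

with open("apollo11_transcript.txt") as f:  # hypothetical local copy
    transcript = f.read()

response = model.generate_content(
    ["List three comedic moments from this mission transcript:", transcript]
)
print(response.text)
```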

We reached out to Google asking for information on when people can expect the launch of Gemini 1.5 and Gemini 1.5 Ultra plus the wider release of these next-gen AI models. This story will be updated at a later time. Until then, check out TechRadar's roundup of the best AI content generators for 2024.


From chatterbox to archive: Google’s Gemini chatbot will hold on to your conversations for years

If you were thinking of sharing your deepest, darkest secrets with Google's freshly-rebranded family of generative AI apps, Gemini, just keep in mind that someone else might also see them. Google has made this explicitly clear in a lengthy Gemini support document where it elaborates on its data collection practices for Gemini chatbot apps across platforms like Android and iOS, as well as directly in-browser.

Google explained that it’s standard practice for human annotators to read, label, and process conversations that users have with Gemini. This data is used to make Gemini perform better in future conversations. Google does clarify that conversations are “disconnected” from specific Google accounts before being seen by reviewers, but also that they’re stored for up to three years, along with “related data” like the user’s device, language, and location. According to TechCrunch, Google doesn’t make it clear whether these annotators are in-house or outsourced. 

If you’re uncomfortable relinquishing this sort of data in order to use Gemini, Google does give users some control over how and which Gemini-related data is retained. You can turn off Gemini Apps Activity (which is on by default) in the My Activity dashboard; doing so will stop Gemini from saving conversations in the long term from that point onwards. 

However, even if you do this, Google will save conversations associated with your account for up to 72 hours. You can also go in and delete individual prompts and conversations in the Gemini Apps Activity screen (although again, it’s unclear if this fully scrubs them from Google's records). 

A direct warning that's worth heeding

Google puts the following in bold for this reason – your conversations with Gemini are not just your own:

Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

Google’s AI policies regarding data collection and retention are in line with its AI competitors like OpenAI. OpenAI’s policy for the standard, free tier of ChatGPT is to save all conversations for 30 days unless a user is subscribed to the enterprise-tier plan and chooses a custom data retention policy.

Google and its competitors are navigating what is one of the most contentious aspects of generative AI – the issues raised and the necessity of user data that comes with the nature of developing and training AI models. So far, it’s been something of a Wild West when it comes to the ethics, morals, and legality of AI. 

That said, some governments and regulators have started to take notice, for example, the FTC in the US and the Italian Data Protection Authority. Now’s as good a time as ever for tech organizations and generative AI makers to pay attention and be proactive. We know they can already do this when it comes to their corporate-orientated, paid customer models, as those AI products very explicitly don’t retain data. Right now, tech companies don’t feel they need to do the same for free individual users (or at least give them the option to opt out), so until they do, they’ll probably continue to scoop up all of the conversational data they can.


Google’s new generative AI aims to help you get those creative juices flowing

It’s a big day for Google AI as the tech giant has launched a new image-generation engine aimed at fostering people’s creativity.

The tool is called ImageFX, and it runs on Imagen 2, Google's “latest text-to-image model”, which the company claims can deliver its “highest-quality images yet.” Like so many generative AIs before it, it creates content from a prompt typed into a text box. What’s unique about the engine is that it comes with “Expressive Chips”: dropdown menus over keywords that let you quickly alter the content with adjacent ideas. For example, ImageFX gave us a sample prompt of a dress carved out of deadwood complete with foliage. After it made a series of pictures, the AI offered the chance to change certain aspects, turning a beautiful forest-inspired dress into an ugly shirt made out of plastic and flowers. 

ImageFX - generated dress

(Image credit: Future)

ImageFX - generated shirt

(Image credit: Future)

The options in the Expressive Chips don’t change: they remain fixed to the initial prompt, although you can add more to the list by selecting the tags down at the bottom. There doesn’t appear to be a way to remove tags; users have to click the Start Over button to begin anew. If the AI manages to create something you like, it can be downloaded or shared on social media.
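One hypothetical way to picture how Expressive Chips behave is as a prompt template with swappable slots, where each chip offers adjacent ideas for one slot. The sketch below is purely an illustration of that mental model, not how ImageFX is actually implemented, and the slot names and options are made up.

```python
# A hypothetical mental model of Expressive Chips, not ImageFX's implementation:
# the prompt is a template with swappable slots, and each chip offers
# adjacent ideas for one slot.
prompt_template = "a {garment} carved out of {material}, covered in {decoration}"

chips = {
    "garment": ["dress", "shirt", "cloak"],
    "material": ["deadwood", "plastic", "glass"],
    "decoration": ["foliage", "flowers", "moss"],
}

def render(choices):
    """Fill the template with one option per chip."""
    return prompt_template.format(**choices)

# The sample prompt from the article, then a quick swap of two chips.
print(render({"garment": "dress", "material": "deadwood", "decoration": "foliage"}))
print(render({"garment": "shirt", "material": "plastic", "decoration": "flowers"}))
```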

Be creative

This obviously isn’t the first time Google has released a text-to-image generative AI. In fact, Bard just received the same ability. The main difference with ImageFX is, again, its encouragement of creativity. The chips can help spark inspiration by giving you ideas of how to direct the engine – ideas you may never have thought of. Bard’s feature, on the other hand, offers little to no guidance. Because it's less user-friendly, directing Bard's image generation will be trickier.

ImageFX is free to use on Google’s AI Test Kitchen. Do keep in mind it’s still a work in progress. Upon visiting the page for the first time, you’ll be met with a warning message telling you the AI “may display inaccurate info”, and in some cases, offensive content. If this happens to you, the company asks that you report it to them by clicking the flag icon. 

Google also wants people to keep things clean. The warning links to the company’s Generative AI Prohibited Use Policy, which lists what you can’t do with ImageFX.

AI updates

In addition to ImageFX, Google made several updates to past experimental AIs. 

MusicFX, the brand’s text-to-music engine, now allows users to generate songs up to 70 seconds in length as well as alter their speed. The tool also received Expressive Chips to help get those creative juices flowing, plus a performance boost that lets it pump out content faster than before. TextFX, on the other hand, didn’t see a major upgrade or new features; Google mainly updated the website so it’s easier to navigate.

MusicFX's new layout

(Image credit: Future)

Everything you see here is available to users in the US, New Zealand, Kenya, and Australia. There's no word on whether the AI will roll out elsewhere, although we did ask. This story will be updated at a later time.

Until then, check out TechRadar's roundup of the best AI art generators for 2024 where we compare them to each other. There's no clear winner, but they do have their specialties. 


Microsoft Edge could soon get its own version of Google’s Circle to Search feature

As the old saying goes, “Imitation is the sincerest form of flattery”. Microsoft is seemingly giving Google a huge compliment as new info reveals the tech giant is working on its own version of Circle to Search for Edge.

If you’re not familiar, Circle to Search is a recently released AI-powered feature on the Pixel 8 and Galaxy S24 series of phones. It allows people to circle objects on their mobile devices to quickly look them up on Google Search. Microsoft’s rendition functions similarly. According to the news site Windows Report, it’s called Circle To Copilot. The way it works is you circle an on-screen object with the cursor – in this case, an image of the Galaxy S24 Ultra.

Immediately after, Copilot appears from the right side with the circled image attached as a screenshot in an input box. You then ask the AI assistant what the object is in the picture, and after a few seconds, it’ll generate a response. The publication goes on to state the tool also works with text. To highlight a line, you will also need to draw a circle around the words.

Windows Report states Circle To Copilot is currently available in the latest version of Microsoft Edge Canary, an experimental build of the browser meant for users and developers who want early access to potential features. The publication has a series of instructions explaining how to activate Circle To Copilot: you'll need to enter a specific command into the browser's Properties menu.

If the command works for you, Circle To Copilot can be enabled by going to the Mouse Gesture section of Edge’s Settings menu and then clicking the toggle switch. It’s the fourth entry from the top.

Work in progress

We followed Windows Report's steps ourselves; however, we were unable to try out the feature. All we got was an error message stating the command to activate the tool was not valid. It seems not everyone who installs Edge Canary will gain access, although this isn’t surprising. 

The dev browser is, after all, a testing ground for Microsoft, so things don’t always work as well as they should, if at all. It’s possible Circle To Copilot will function better in a future patch; however, we don’t know when that will roll out. We’re disappointed the feature was inaccessible on our PC, because we had a couple of questions. Does it need to be manually triggered in Copilot? Or will it function like Ask Copilot, where you highlight a piece of content, right-click it, and select the relevant option in the context menu?

Out of curiosity, we installed Edge Canary on our Android phone to see if it had the update. As it turns out, it doesn't. It may be that Circle To Copilot is exclusive to Edge on desktop, but this could change in the future.

Be sure to check TechRadar's list of the best AI-powered virtual assistants for 2024.


Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer these nightmarish creatures with melting faces. Now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs. Motion is largely smooth as butter. Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities. 

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render those actions all at once, resulting in a smooth-flowing creation. 

This runs contrary to other generative platforms that first establish keyframes in clips and then fill in the gaps afterward. Doing so results in the jerky movement the tech is known for.

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit including support for multimodality. 

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has an ability called Cinemagraph which can animate highlighted portions of pictures.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues: jerky animations appear in places, and in other cases subjects' limbs warp into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.


Here’s your first look at Google’s new AI Assistant with Bard, but you’ll have to wait longer for a release date

2024 is set to see AI playing an increasingly prominent role in all kinds of tech devices and services, and Google is getting the ball rolling by enhancing Google Assistant with Google Bard features, having launched its AI chatbot last year. 

During its Made by Google event in October, Google announced that the new Assistant with Bard would blend elements of both tools to create a generative AI search powerhouse. Google Assistant, the company's existing search and voice tool, has been integrated across its products since launching in 2016.  

Google’s developments in AI are transforming the way users experience and interact with its repertoire of apps and services, with AI tools available in Gmail, YouTube, and Google Docs, among others. The merging of Google Bard and Google Assistant features marks the next big step in the company’s plan to integrate AI across all its products and services. 

While Assistant with Bard doesn’t have a confirmed release date just yet, images and video shared by 9to5Google give us an idea of how it will look and function. 

9to5Google suggests that Assistant with Bard will replace Google Assistant altogether across Google and Android devices. If this is true, it’s likely that you’ll access the new AI the same way you would access Google Assistant: either by saying “Hey Google” or by long-pressing the power button. 

Looking at the images, the Discover page in the Google search app appears to have received a Bard integration in the form of a slider toggle that enables you to easily switch between a standard Google search and the AI chatbot.

Assistant with Bard first look

(Image credit: 9to5Google )

Other images show the pop-up that appears when Assistant with Bard is enabled, allowing you to ask questions by talking, typing, or sharing photos using the three options at the bottom of the screen. Google previewed this design during its October event, at which it launched the Google Pixel 8 and Pixel 8 Pro.  

Assistant with Bard first look

(Image credit: 9to5Google )

Assistant with Bard isn’t yet available to use, but going by the images shared by 9to5Google it appears that the rollout of Google’s next AI development is imminent.  
