Weird audio mixing is a really annoying problem. How many times have you watched a video or movie where the music and effects sound fine, only for the dialogue to be super quiet?
Google is helping audiences out by expanding YouTube’s Stable Volume feature from the mobile app to “Android TV and Google TV devices.” It's a handy tool that automatically adjusts “the volume of videos you watch,” all without requiring you to pick up your remote, according to 9To5Google.
That story explains that Stable Volume ensures a consistent listening experience “by continuously balancing the volume range between quiet and loud parts” of a video. 9To5Google discovered the feature after installing YouTube version 4.40.303 on an Android TV device.
If you select the gear icon whenever a video is playing, you should see Stable Volume as an option within the Settings menu. It’ll sit in between Captions and the playback speed function.
It’s turned on by default, but you can deactivate it at any time just by selecting it while watching content. 9To5Google recommends turning off Stable Volume while listening to music or playing a video with a “detailed audio mix.” Having it activated then could potentially mess with the sound quality. Plus, YouTube Music isn't on Android TV or Google TV hardware, so you won't have a dedicated space specifically for songs.
We should mention that the official YouTube Help page for Stable Volume states it isn’t available for all videos, and that music won’t be negatively affected. We believe this page is outdated, because it also says the tool is exclusive to the YouTube mobile app; it’s entirely possible the versions on Android TV and Google TV behave differently.
Be sure to keep an eye out for the patch when it arrives. It joins other YouTube on TV features launched in 2024 such as Multiview and the auto-generated key moments.
Check out TechRadar's list of the best TVs for 2024. We cover a wide array of models for different budgets.
Windows Recall has proven to be a highly controversial AI feature ever since it was first announced in May. It constantly takes screenshots of everything you do on your PC, then places the images into a searchable on-device database. And yes, that includes pictures displaying sensitive information.
People were quick to call it a “security nightmare” after Microsoft openly admitted the software would not hide “passwords or financial account numbers.” The company attempted to defend its decision but has recently decided to make multiple safety improvements to Recall before its quickly approaching June 18 launch.
Arguably, the most important of these changes is that Recall will no longer be turned on by default upon activating your PC. According to a recent post on the Windows Experience Blog, the feature will instead be off by default, meaning you’ll have to enable it yourself during a computer’s setup process.
Next, enrolling into Windows Hello is now a requirement to activate Recall and to view your screenshot timeline. This means you’ll have to authenticate yourself as the primary user through a biometric input or PIN before accessing the feature.
As for the final update, Microsoft is beefing up security by adding extra “layers of data protection [including] ‘just in time’ decryption” from Windows Hello ESS (Enhanced Sign-in Security). As a result, snapshots can only be viewed whenever a user proves their identity. Additionally, Recall’s search index database is now encrypted.
What's strange is this suggests the database that would’ve stored images containing bank account numbers was initially unprotected and vulnerable to attackers. It may surprise you to hear how unsafe it was, but at least Microsoft is fixing it before launch rather than after.
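The “just in time” decryption pattern described above is worth unpacking: the snapshot index sits encrypted at rest, and a usable key only exists at the moment the user proves their identity. Here's a toy sketch of that pattern in Python. To be clear, this is an illustration of the general idea, not Microsoft's implementation, and the XOR keystream here is not production-grade cryptography (a real system would use an authenticated cipher via Windows Hello ESS):

```python
import hashlib, os

# Toy sketch of "just in time" decryption: the index is stored encrypted,
# and the key is derived only when the user authenticates (a PIN stands
# in for Windows Hello here). Illustrative, NOT production crypto.

def derive_key(pin: str, salt: bytes) -> bytes:
    # Slow key derivation so a stolen database can't be brute-forced cheaply.
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Keystream from SHA-256(key || counter); XOR is its own inverse,
    # so the same function both encrypts and decrypts.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

salt = os.urandom(16)
index = b"2024-06-01 09:14 banking site screenshot"
stored = xor_stream(derive_key("1234", salt), index)  # encrypted at rest

def read_index(pin: str) -> bytes:
    # The key exists only inside this call: "just in time" decryption.
    return xor_stream(derive_key(pin, salt), stored)

assert read_index("1234") == index  # correct PIN recovers the index
assert read_index("0000") != index  # wrong PIN yields garbage
```

The point of the design is that an attacker who copies the database file gets only ciphertext; without the authentication step, there is no key to recover it with.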
Analysis: Remaining skeptical
The rest of the blog post reiterates the security functions of Windows Recall that were previously known. For example, snapshots will be stored locally on your computer and not uploaded to Microsoft servers. An icon representing the feature will sit in the system tray, “letting you know when Windows is saving” images. Plus, users can “pause, filter, [or] delete” snapshots whenever they want.
Microsoft also stresses that Recall will only be available on the upcoming Copilot Plus PCs since they have robust security to ensure privacy.
Does this mean we can totally trust Windows Recall to maintain data security? No, not really.
Jake Williams, VP of R&D at the cybersecurity consultancy Hunter Strategy, told Wired he “still sees serious risks [as well as] unresolved privacy problems.” People could be hit with a subpoena forcing them to cough up their PINs to gain access to Recall databases.
Although Microsoft claims it can’t see snapshots, who’s to say the tech giant won’t change its mind a year or two down the line and decide to harvest all that sensitive information? It may find some legal loophole giving it carte blanche to do whatever it wants with Recall data. It’s a scary thought.
If you're looking for ways to improve your online security, check out TechRadar's massive list of the best privacy tools for 2024.
If you’re a Windows 11 user who isn’t quite ready to leave the operating system behind but would like a break from seeing ads all over the place, I have some news that might make you feel better. There’s a free app that cuts out ads to make your Windows 11 experience a little less frustrating – it’s called OFGB, which amusingly stands for ‘Oh Frick Go Back.’
OFGB works by editing your system’s Windows Registry to disable all kinds of ads, including File Explorer ads, Lock Screen tips and tricks, Settings ads, “Finish Setup” ads, “Welcome Experience” ads, personalized ads, “Tailored Experiences” ads, and Start Menu ads. It’s easy to use, and you can pick and choose which of these you’d like to turn off by simply ticking the appropriate boxes (frankly, I’d recommend turning them all off).
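Under the hood, tools like this just flip DWORD values in the registry from 1 to 0. As a rough illustration, here's a Python sketch that generates a `.reg` file flipping a few commonly cited ad-related values. The key and value names below are illustrative examples of this category of setting; OFGB's actual list lives in its source code on GitHub and the exact names can vary between Windows builds:

```python
# Sketch: emit a .reg file that sets a few commonly cited ad-related
# registry values to 0. Names are illustrative; OFGB's real list is in
# its GitHub source and may differ from these.
AD_VALUES = {
    r"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\ContentDeliveryManager": [
        "RotatingLockScreenOverlayEnabled",  # Lock Screen tips and tricks
        "SubscribedContent-338393Enabled",   # Settings suggestions
        "SubscribedContent-310093Enabled",   # "Welcome Experience" content
    ],
    r"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\AdvertisingInfo": [
        "Enabled",                           # advertising ID / personalized ads
    ],
}

def build_reg_file(values: dict) -> str:
    # .reg files start with a version header, then [key] sections with
    # "name"=dword:00000000 entries.
    lines = ["Windows Registry Editor Version 5.00", ""]
    for key, names in values.items():
        lines.append(f"[{key}]")
        lines.extend(f'"{name}"=dword:00000000' for name in names)
        lines.append("")
    return "\n".join(lines)

print(build_reg_file(AD_VALUES))
```

A file like this could be imported on Windows by double-clicking it, but OFGB's checkbox interface is the safer route for anyone not comfortable poking at the registry directly.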
How to get your hands on OFGB
You can download OFGB from its official GitHub page, and there are two versions: a self-contained (but larger) version and one that isn’t self-contained (meaning it depends on external software components to run). If you’re not familiar with coding and are unsure which version to get, I’d recommend the first version (OFGB-Deps.exe).
The Source code files are also available (I’d recommend the .zip file) if you want to inspect or build the app yourself. Otherwise, download the executable and run OFGB-Deps.exe to get started.
Oh frick, this is perfect
OFGB was created by GitHub user xM4ddy, an Arch Linux user (Arch is a highly customizable version of Linux) who has had enough of Windows ads being injected into every nook and cranny of the OS. She told Tom’s Hardware the following about her frustrations:
“Windows lost me a long time ago by adding more and more telemetry, ads, and the lack of easily configurable options.”
You can also see a demo and read more from the creator in her Reddit post publicizing the new app.
OFGB joins an existing platoon of third-party workarounds that enable you to make automated edits to the Windows Registry so that you see fewer ads. There’s also Wintoys, an app that recently saw a major update, and Tiny 11 Builder, a tool for creating your own slimmed-down version of Windows 11, which also recently got an upgrade.
OFGB looks like a clean, straightforward solution if the ads are something that bothers you, but only if you’re confident about trying custom third-party apps – if you’re not, it’s best to stick to using Windows as it comes.
That said, you might be looking to take the leap, and you wouldn’t be alone – Windows 11 is reportedly losing market share to its predecessor Windows 10, which is set to lose Microsoft support next year, and many people have been expressing their anger at Microsoft’s increasingly insistent ads in Windows 11 for a good while now. I wonder whether third-party apps like OFGB will continue to work, because I could see Microsoft making every effort to push ads through – it clearly isn’t paying much attention to the chorus of existing complaints.
It appears that some Apple users are being signed out of their Apple ID across their devices for no apparent reason – and are subsequently locked out of their accounts if they try signing in with their current passwords.
According to 9to5Mac, the issue has been ongoing since Friday, April 26, and is forcing users to reset their passwords despite entering the correct ones to get back into their accounts. This also creates a headache for users who have Stolen Device Protection enabled, as they’ll need to be in a trusted location and have access to all of their devices at once.
Users have taken to social media to report their experience with this annoying bug, which seems to happen completely at random. Twitter user @MaxWinebach posted that he was in the middle of a FaceTime call when he was suddenly locked out of all his Apple products.
“I was mid FaceTime with @milesabovetech and my Apple account got locked and signed out of all of my Apple products. tf is happening” – April 26, 2024
A Mastodon user said they were told by a member of Apple Support that “sometimes random security improvements are added to your account”, which may have prompted the random booting. Still, it seems unlikely this would impact so many users, and we can’t be sure of anything until we hear official word from Apple itself.
So, what do you do?
Overall, as an Apple user who frequently forgets passwords, I’m pretty nervous about the prospect of being locked out of my beloved TikTok-watching device. Thankfully, there seems to be no immediate need to panic if this has happened to you (or like me, you’re anxiously waiting for it to happen to you). It could simply be a harmless bug, and from what we can tell so far, it doesn’t seem related to the recent phishing attack that could lock you out of your device.
So, what can you do to protect yourself from this bug? Honestly, not much. Since everything seemed to kick off on Friday, if it hasn’t happened to you yet, there’s a good chance you’re in the clear. If you do find yourself locked out, it seems like all you have to do is reset your password and go through the tedious task of logging into all your other Apple devices as well. While this is an annoying bug, it doesn’t seem too serious. Make sure you’re not reusing passwords, though – that’s a recipe for disaster.
We have reached out to Apple for comment and will update this article if an official statement is released.
OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.
To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.
The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.
Sora's new video shows that these kind of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.
“What will TED look like in 40 years? For #TED2024, we worked with artist @PaulTrillo and @OpenAI to create this exclusive video using Sora, their unreleased text-to-video model. Stay tuned for more groundbreaking AI — coming soon to https://t.co/YLcO5Ju923! pic.twitter.com/lTHhcUm4Fi” – April 19, 2024
But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds and there's plenty of uncanny valley nightmare fuel in the background.
The result is an experience that's exhilarating, while also leaving you feeling strangely off-kilter – like touching down again after a sky dive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.
How was the video made?
OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.
Trillo told Business Insider about the kinds of prompts he uses, including “a cocktail of words that I use to make sure that it feels less like a video game and something more filmic”. Apparently these include prompts like “35 millimeter”, “anamorphic lens”, and “depth of field lens vignette”, which are needed or else Sora will “kind of default to this very digital-looking output”.
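Trillo's “cocktail of words” approach boils down to appending film-style modifiers to whatever scene you describe. A quick sketch of that idea in Python – the modifier list uses only the terms he's quoted as using, while the scene text and function are made up for illustration (Sora's actual prompt interface is unreleased):

```python
# Sketch of assembling a "filmic" prompt from the modifiers Trillo cites.
# The scene description is invented; the modifiers are the quoted ones.
FILMIC_MODIFIERS = [
    "35 millimeter",
    "anamorphic lens",
    "depth of field lens vignette",
]

def filmic_prompt(scene: str, modifiers=FILMIC_MODIFIERS) -> str:
    # Append the modifiers so the model leans toward film-like output
    # rather than its "very digital-looking" default.
    return f"{scene}, " + ", ".join(modifiers)

print(filmic_prompt("FPV drone shot gliding through a futuristic conference hall"))
```

The takeaway is less about this exact string format and more about the workflow: steering a text-to-video model away from its defaults by stacking cinematography vocabulary into the prompt.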
Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently “like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it”.
This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora “currently exhibits numerous limitations as a simulator”, including the fact that “it does not accurately model the physics of many basic interactions, like glass shattering”.
These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.
NVIDIA sees AI as a means of putting new tools into the hands of gamers and creators alike. NVIDIA Instant NeRF is one such tool, leveraging the power of NVIDIA’s GPUs to make complex 3D creations orders of magnitude easier to generate. Instant NeRF is an especially powerful tool in its ability to create these 3D scenes and objects.
In effect, NVIDIA Instant NeRF takes a series of 2D images, figures out how they overlap, and uses that knowledge to create an entire 3D scene. A NeRF (or Neural Radiance Field) isn’t a new thing, but the process to create one was not fast. By applying machine learning techniques to the process and specialized hardware, NVIDIA was able to make it much quicker, enough to be almost instant — thus Instant NeRF.
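At the core of any NeRF is classic volume rendering: a neural network predicts a density and color at sample points along each camera ray, and the pixel color is an opacity-weighted sum of those samples. Here's a minimal sketch of that compositing step in plain Python – the densities and colors are made-up stand-ins for what a trained network would output, and this is the general NeRF rendering equation rather than NVIDIA's optimized implementation:

```python
import math

def composite(densities, colors, deltas):
    """Volume-render one ray by alpha-compositing samples front to back.

    densities: per-sample density sigma_i (from the network in a real NeRF)
    colors:    per-sample RGB color c_i (likewise)
    deltas:    distance between consecutive samples along the ray
    """
    transmittance = 1.0          # fraction of light not yet absorbed
    pixel = [0.0, 0.0, 0.0]
    for sigma, color, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this sample
        weight = transmittance * alpha           # its contribution to the pixel
        pixel = [p + weight * c for p, c in zip(pixel, color)]
        transmittance *= 1.0 - alpha             # light left for samples behind
    return pixel

# One ray: empty space (density 0) in front of a dense red-ish sample.
print(composite([0.0, 5.0], [(1, 0, 0), (1, 0.2, 0.2)], [0.1, 0.1]))
```

Training a NeRF means adjusting the network so that rays rendered this way reproduce the input photos; the "instant" part of Instant NeRF is NVIDIA's acceleration of that training loop on RTX hardware.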
Being able to snap a series of photos or even record a video of a scene and then turn it into a freely explorable 3D environment offers a new realm of creative possibility for artists. It also provides a quick way to turn a real-world object into a 3D one.
Some artists are already realizing the potential of Instant NeRF. In a few artist showcases, NVIDIA highlights artists’ abilities to share historic artworks, capture memories, and allow viewers of the artworks to more fully immerse themselves in the scenes without being beholden to the original composition.
Karen X. Cheng explores the potential of this tool in her creation, Through the Looking Glass, which uses NVIDIA Instant NeRF to create the 3D scene through which her camera ventures, eventually slipping through a mirror into an inverted world.
Hugues Bruyère uses Instant NeRF in his creation, Zeus, to present a historic sculpture from the Royal Ontario Museum in a new way. This gives those who may never have a chance to see it in person the ability to view it from all angles nonetheless.
With tools like Instant NeRF, it’s clear that NVIDIA’s latest hardware has much more than just gamers in mind. With more and more dedicated AI power built into each chip, NVIDIA RTX GPUs are bringing new levels of AI performance to the table that can serve gamers and creators alike.
The same Tensor Cores that make it possible to infer what a 4K frame in a game would look like using a 1080p frame as a reference are also making it possible to infer what a fully fleshed out 3D scene would look like using a series of 2D images. And NVIDIA’s latest GPUs put those tools right into your hands.
Instant NeRF isn’t something you just get to hear about. It’s actually a tool you can try for yourself. Developers can dive right in with this guide, and less technical users can grab a simpler Windows installer here which even includes a demo photo set. Since Instant NeRF runs on RTX GPUs, it’s widely available, though the latest RTX 40 Series and RTX Ada GPUs can turn out results even faster.
The ability of NVIDIA’s hardware to accelerate AI is key to powering a new generation of AI PCs. Instant NeRF is just one of many examples of how NVIDIA’s GPUs are enabling new capabilities or dramatically speeding up existing tools. To help you explore the latest developments in AI and present them in an easy-to-understand format, NVIDIA has introduced the AI Decoded blog series. You can also see all the ways NVIDIA is boosting AI performance at NVIDIA’s RTX for AI page.
It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise: the update was originally announced back in May 2023, but was soon delayed with no new launch date in sight. Then, out of nowhere, Google released the software on April 8 without major fanfare. As a result, you may feel lost, but we can help you find your way.
Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.
1. It’s a big upgrade for Google’s old Find My Device network
The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices, most notably Bluetooth location trackers.
Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.
Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.
2. Google was waiting for Apple to add support to iPhones
The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone manufacturer decided to drag its feet when it came to adding unknown tracker alerts to its own iPhone devices.
The wait may soon be over, as the iOS 17.5 beta contains lines of code suggesting that the iPhone will soon get these anti-stalking measures. iOS devices might soon encourage users to disable unwanted Bluetooth trackers uncertified for Apple’s Find My network. It’s unknown when this feature will roll out, as the code in the beta doesn’t actually do anything yet when enabled.
Still, given the presence of this code in the iOS 17.5 beta, Apple's release may be imminent. Apple may even have given Google the green light to roll out the Find My Device upgrade ahead of its own software launch.
3. It will roll out globally
Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.
Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict: all you need is a smartphone running Android 9 or later with Bluetooth capabilities.
If you own either a Pixel 8 or Pixel 8 Pro, you’ll be given an exclusive feature: the ability to find a phone through the network even if the phone is powered down. Google reps said these models have special hardware that allows them to pour power into their Bluetooth chip when they're off. Google is working with other manufacturers to bring this feature to other premium Android devices.
4. You’ll receive unwanted tracker alerts
Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have utilized them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.
For nearly a year, the OS could only seek out AirTags, but now with the upgrade, Android phones can locate Bluetooth trackers from other third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll ensure your privacy and safety.
You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information.
5. Chipolo and Pebblebee are launching new trackers for it soon
Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.
On May 27th, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair will also sport speakers that ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.
Pebblebee is achieving something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. All three are available for pre-order right now, although no shipping date was given.
6. It’ll work nicely with your Nest products
For smart home users, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find said missing item. Be aware the tech won’t give you an exact location.
A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether another smart home device was next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.
Below the message will be a set of tools to help you locate it. You can either play a sound from the tracker’s speakers, share the device, or mark it as lost.
7. Headphones are invited to the tracking party too
Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of earbuds. The list of supporting hardware is not large as it’ll only be able to locate three specific models. They are the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could come out at a later time.
It's quite an extensive list, as you can see, but it's all important information to know, and everything works together to keep you safe.
Meta is prompting Meta Quest 3 and Oculus Quest 2 users to verify their birthday before they can use their VR headsets. This helps it verify the age on your account so it can serve you appropriate content – and give your parent or guardian access to tools and account protections, if necessary.
This also isn’t just one of those popups you can ignore. Once you’ve been asked to verify your age you’ll have 30 days to do so according to the official page. Once the time limit expires your account will be blocked – and restrictions will only be lifted when you provide your birthday.
You don’t want to rush through the process either. If you enter the wrong date, changing it can be a pain. Scroll down for more information, but the TL;DR version is that you’ll need a credit card or an ID to make any alterations – and getting stuck with the wrong account age could cause problems depending on what apps and social settings you use.
These changes follow Meta asking app developers to self-identify what age category their apps are suitable for – either Preteen, Teen, or Adult – and after its request for age verification to be built into mobile app stores like the Apple App Store and Google Play Store.
Speaking to The Verge, Meta's Global Head of Safety Antigone Davis explained that Meta’s practicing what it preaches by implementing these changes to the Quest platform.
What does your age mean for your Meta account?
There are three types of Meta Quest account, and you’re assigned one based on your age.
The most restricted are Preteen accounts, for users aged 10 to 12. These are also known as Parent-managed accounts: if you’re 10, 11, or 12, you’ll need a parent or guardian to set up their own account so they can approve yours. All of your profile settings are set to private by default, and if you want to change this, or download or use an app, you’ll need your parent’s permission.
Then there are Teen accounts for people aged 13 to 17. Profile settings are still set to private by default, but you have the power to change them – you don’t need a parent or guardian’s approval. Instead your parent can set up supervision tools that offer a way for them to customize your experience without needing to be involved in every decision.
Lastly, Adult accounts are for users 18 or over and you’re given complete control over your profile settings and what apps you want to use.
How do I change my birthday on Quest?
If you’ve inputted the wrong birthday or one that’s different from the one that’s already on your account then we have some good news, and some bad news. The good news is you can change it – the bad news is you’ll need a credit card or an ID (such as a driver’s license, school ID or state ID) so Meta can make sure you’re telling the truth.
Using a credit card is faster, according to Meta’s verification page, but Meta will need to make a charge – which is refunded. If you’d rather not have Meta take your money, even temporarily, you can use the ID verification method, though this will apparently take longer. We don’t know exactly how long the process takes in either case.
Whichever option you choose, Meta has said it doesn’t store the information for long after it completes the age verification.
If you can’t verify your age then you can choose to stick with your newly-entered birthday and be sorted into the Teen or Preteen account brackets – though this will mean there will be some restrictions on your account.
Windows 11 could soon run updates without rebooting, if the rumor mill is right – and there’s already evidence this is the path Microsoft is taking in a preview build.
This comes from a regular source of Microsoft-related leaks, namely Zac Bowden of Windows Central, who first of all spotted that Windows 11 preview build 26058 (in the Canary and Dev channels) was recently updated with an interesting change.
Microsoft is pushing out updates to testers that do nothing and are merely “designed to test our servicing pipeline for Windows 11, version 24H2.” The key part is that we’re informed that those who have VBS (Virtualization Based Security) turned on “may not experience a restart upon installing the update.”
Running an update without requiring a reboot is known as “hot patching” and this method of delivery – which is obviously far more convenient for the user – could be realized in the next major update for Windows 11 later this year (24H2), Bowden asserts.
The leaker has tapped sources for further details, and observes that we’re talking about hot patching for the monthly cumulative updates for Windows 11 here. So the bigger upgrades (the likes of 24H2) wouldn’t be hot-patched in, as clearly there’s too much work going on under the hood for that to happen.
Indeed, not every cumulative update would be applied without a reboot, Bowden further explains. This is because hot patching uses a baseline update, one that can be patched on top of, but that baseline model needs to be refreshed every few months.
Take all this info with a pinch of salt, naturally, but it looks like Microsoft is up to something here based on the testing going on, which specifically mentions 24H2, as well.
Analysis: How would this work exactly?
What does this mean for the future of Windows 11? Well, possibly nothing. After all, this is mostly chatter from the grapevine, and what’s apparently happening in early testing could simply be abandoned if it doesn’t work out.
However, hot patching is something that is already employed with Windows Server, and the Xbox console as well, so it makes sense that Microsoft would want to use the tech to benefit Windows 11 users. It’s certainly a very convenient touch, though as noted, not every cumulative update would be hot-patched.
Bowden believes the likely scenario would be quarterly cumulative updates that need a reboot, followed by hot patches in between. In other words, we’d get a reboot-laden update in January, say, followed by two hot-patched cumulative updates in February and March that could be completed quickly with no reboot needed. Then, April’s cumulative update would need a reboot, but May and June wouldn’t, and so on.
As mentioned, annual updates certainly wouldn’t be hot-patched, and neither would out-of-band security fixes for example (as the reboot-less updates rely on that baseline patch, and such a fix wouldn’t be based on that, of course).
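The cadence Bowden describes is easy to sketch: a reboot-requiring baseline update every third month, with hot patches layered on top in between. Here's a toy model of that schedule in Python – the three-month interval comes from the report above, but the real cadence would of course be Microsoft's call:

```python
# Toy model of the rumored servicing cadence: a reboot-requiring baseline
# every third month, with reboot-free hot patches applied on top between
# baselines. Purely illustrative of the reported scheme.
MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]

def update_kind(month_index: int) -> str:
    # Baselines land every third month (indices 0, 3, 6, ...); hot patches
    # can only be applied on top of the most recent baseline.
    if month_index % 3 == 0:
        return "baseline (reboot required)"
    return "hot patch (no reboot)"

for i, name in enumerate(MONTHS):
    print(f"{name}: {update_kind(i)}")
```

This also makes clear why out-of-band fixes and annual upgrades can't be hot-patched: they aren't built against the current baseline, so there's nothing for the patch to layer onto.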
This would be a pretty cool feature for Windows 11 users, because dropping the need to reboot – to be forced to restart in some cases – is obviously a major benefit. Is it enough to tempt upgrades from Windows 10? Well, maybe not, but it is another boon to add to the pile for those holding out on Microsoft’s older operating system. (Assuming they can upgrade to Windows 11 at all, of course, which is a stumbling block for some due to PC requirements like TPM).
Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.
Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.
But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).
This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…
1. Gemini replaces Google Bard and Duet AI
In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.
Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.
But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.
To sum up, there are three main ways to access Google Gemini: the Gemini website, the new Android app, and (on iPhone) the Google app.
2. The Gemini app is rolling out on Android (and iOS)
As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.
There's going to be a similar rollout on iOS, but with a different approach: rather than a separate standalone app, Gemini will be available within the Google app.
The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.
Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…
3. You may want to stick with Google Assistant (for now)
The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.
The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.
Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.
Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.
4. Gemini is a new way to quiz Google's other apps
Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more.
As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.
This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps.
Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.
5. The free version of Gemini has limitations
The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced.
This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, largely unable to code, and more limited in its data-handling powers.
This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.
Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One…
6. Gemini Advanced is tempting for Google One users
The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.
This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for one anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20.50 a month.
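If you want to sanity-check that maths yourself, here's a quick back-of-the-envelope sketch (the price figures are simply the ones quoted above, per currency):

```python
# Effective monthly cost of the AI features: the AI Premium plan price
# minus what a 2TB-only Google One plan already costs.
prices = {
    "USD": (19.99, 9.99),   # (AI Premium plan, 2TB storage-only plan)
    "GBP": (18.99, 7.99),
    "AUD": (32.99, 12.49),
}

for currency, (ai_premium, storage_only) in prices.items():
    extra = round(ai_premium - storage_only, 2)
    print(f"{currency}: extra {extra} per month for the AI features")
# USD: extra 10.0 / GBP: extra 11.0 / AUD: extra 20.5
```

In other words, existing 2TB subscribers are only paying the difference, which is how Google makes the bundle look cheap next to rivals' standalone AI subscriptions.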
There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.
This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).
7. The Gemini app could take a little while to reach the UK and EU
While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.
Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”
While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.
Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.