Meta’s recent Quest 3 update includes a secret AI upgrade for mixed reality

Meta’s VR headsets recently received update v64, which according to Meta added several improvements to their software – such as better-quality mixed-reality passthrough in the case of the Meta Quest 3 (though I didn’t see a massive difference after installing the update on my headset).

It’s now been discovered (first by Twitter user @Squashi9) that the update also included another upgrade for Meta’s hardware, with Space Scan, the Quest 3’s room scanning feature, getting a major buff thanks to AI.

The Quest 3’s Space Scan is different to its regular boundary scan, which sets up your safe play space for VR. Instead, Space Scan maps out your room for mixed-reality experiences, marking out walls, floors, and ceilings so that experiences are correctly calibrated.

You also have the option to add and label furniture, but until update v64 rolled out you had to do this part manually. Now, when you do a room scan your Quest 3 will automatically highlight and label furniture – and based on my tests it works almost flawlessly.

Annoyingly, the headset wouldn’t let me take screenshots of the process, so you’ll have to trust me when I say that every piece of furniture was not only picked up by the scan and correctly marked out, but also labelled accurately – it even picked up on my windows and doors, which I wasn’t expecting.

The only mistake I spotted was that a chair I have in my living room was designated a 'couch', though this seems to be more an issue with Meta’s lack of more specific labels than with Space Scan’s ability to detect what type of object each item of furniture is.

This feature isn’t a complete surprise, as Reality Labs showed a version of it off on Threads (in a post from @edwardrichardmiller) back in March. What is surprising, however, is how quickly it’s been rolled out after being unveiled – though I’m not complaining, considering how well it works and how easy it makes scanning your room.

So what? 

Adding furniture data has clear uses for MR and VR apps. Tables can be used by apps like Horizon Workrooms as designated desks, while sitting down on or getting up from a designated couch can switch your VR experience between standing and seated modes.

Meanwhile, some apps can use the detected doors, windows, walls, and furniture such as a bookshelf to adjust how mixed-reality experiences interact with your space.

With Meta making it less tedious to add these data points, app developers have more of a reason to take furniture into account when designing VR and MR experiences, which should lead to them feeling more immersive.

This also gives Meta a leg up over the Apple Vision Pro, as Apple’s headset can’t yet create a room scan as detailed as the one found on Meta’s hardware – though until software starts to take real advantage of this feature it’s not that big a deal.

We’ll have to wait and see what comes of this improvement, but if you’ve already made a space scan or two on your Quest 3 you might want to redo them, as the new scans should be a lot more accurate.

ChatGPT’s newest GPT-4 upgrade makes it smarter and more conversational

AI just keeps getting smarter: another significant upgrade has been pushed out for ChatGPT, its developer OpenAI has announced – specifically to the GPT-4 Turbo model available to those paying for ChatGPT Plus, Team, or Enterprise.

OpenAI says ChatGPT will now be better at writing, math, logical reasoning, and coding – and it has the charts to prove it. The release is labeled with the date April 9, and it replaces the GPT-4 Turbo model that was pushed out on January 25.

Judging by the graphs provided, the biggest jumps in capabilities are in mathematics and GPQA, or Graduate-Level Google-Proof Q&A – a benchmark based on multiple-choice questions in various scientific fields.

According to OpenAI, the new and improved ChatGPT is “more direct” and “less verbose” too, and will use “more conversational language”. All in all, a bit more human-like then. Eventually, the improvements should trickle down to non-paying users too.

More up to date

In an example given by OpenAI, AI-generated text for an SMS intended to RSVP to a dinner invite is half the length and much more to the point – with some of the less essential words and sentences chopped out for simplicity.

Another important upgrade is that the training data ChatGPT is based on now goes all the way up to December 2023, rather than April 2023 as with the previous model, which should help with topical questions and answers.

It's difficult to test AI chatbots from version to version, but in our own experiments with ChatGPT and GPT-4 Turbo we found it does now know about more recent events – like the iPhone 15 launch. As ChatGPT has never held or used an iPhone though, it's nowhere near being able to offer the information you'd get from our iPhone 15 review.
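
If you want to poke at the new knowledge cutoff yourself, the same GPT-4 Turbo model is available through OpenAI's API. Here's a minimal Python sketch – the model alias and the question are just examples, and you'll need your own API key:

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable
response = client.chat.completions.create(
    model="gpt-4-turbo",  # alias for the latest GPT-4 Turbo release
    messages=[
        {"role": "user", "content": "What new features did the iPhone 15 launch with?"}
    ],
)
print(response.choices[0].message.content)  # answers should reflect the December 2023 cutoff
```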

The momentum behind AI shows no signs of slowing down just yet: in the last week alone Meta has promised human-like cognition from upcoming models, while Google has made its impressive AI photo-editing tools available to more users.

Meta’s Smart Glasses will get a sci-fi upgrade soon, but they’re still not smart enough

There's a certain allure to smart glasses that bulky mixed-reality headsets lack. Meta's Ray-Ban Smart Glasses (formerly Stories), for instance, are a perfect illustration of how you can build smarts into a wearable without making the wearer look ridiculous. The question is, can you still end up being ridiculous while wearing them?

Ray-Ban Meta Smart Glasses' big upcoming Meta AI update will let you talk to your stylish frames, querying them about the food you're consuming, the buildings you're facing, and the animals you encounter. The update is set to transform the wearable from just another pair of voice-enabled glasses into an always-on-your-face assistant.

The update isn't public yet, and will only apply to the Ray-Ban Meta Smart Glasses, not their Ray-Ban Stories predecessors, which lack Qualcomm's new AR1 Gen 1 chip. This week, however, Meta gave a couple of tech reporters at The New York Times early access to the Meta AI integration, and they came away somewhat impressed.

I must admit, I found the walkthrough more intriguing than I expected.

Even though they didn't tear the glasses apart or get into the nitty-gritty tech details I crave, the real-world experience depicts Meta AI as a fascinating, and possibly useful, work in progress.

Answers and questions

In the story, the authors use the Ray-Ban smart glasses to ask Meta AI to identify a variety of animals, objects, and landmarks, with varying success. In the confines of their homes, they spoke at full voice, asking Meta AI, “What am I looking at?” They also enabled transcription so readers could see what they asked and the responses Meta AI provided.

It was, in their experience, quite good at identifying their dogs' breed. However, when they took the smart glasses to the zoo, Meta AI struggled to identify far-away animals. In fact, Meta AI got a lot wrong. To be fair, this is a beta, and I wouldn't expect the large language model (Llama 2) to get everything right. At least it's not hallucinating (“that's a unicorn!”), just getting it wrong.

The story features a lot of photos taken with the Ray-Ban Meta Smart Glasses, along with the queries and Meta AI's responses. Of course, that's not quite how the exchanges played out. As the authors note, they were speaking to Meta AI wherever they went and then heard the responses spoken back to them. This is all well and good when you're at home, but just weird when you're alone at a zoo talking to yourself.

The creep factor

This, for me, remains the fundamental flaw in many of these wearables. Whether you wear Ray-Ban Smart Glasses or Amazon Echo Frames, you'll still look as if you're talking to yourself. For a decent experience, you may have to engage in a lengthy “conversation” with Meta AI to get the information you need. Again, if you're doing this at home, letting Meta AI help you through a detailed recipe, that's fine. Using Meta AI as a tour guide when you're in the middle of, say, your local Whole Foods might label you as a bit of an oddball.

We do talk to our phones and even our smartwatches, but I think that when people see you holding your phone or smartwatch near your face, they understand what's going on.

The New York Times' authors noted how they found themselves whispering to their smart glasses, but they still got looks.

I don't know a way around this issue and wonder if this will be the primary reason people swear off what is arguably a very good-looking pair of glasses (or sunglasses) even if they could offer the passive smart technology we need.

So, I'm of two minds. I don't want to be seen as a weirdo talking to my glasses, but I can appreciate having intelligence there and ready to go; no need to pull my phone out, raise my wrist, or even tap a smart lapel pin. I just say, “Hey Meta” and the smart glasses wake up, ready to help.

Perhaps the tipping point here will be when Meta can integrate very subtle AR screens into the frames that add some much-needed visual guidance. Plus, the access to visuals might cut down on the conversation, and I would appreciate that.

Windows 11 is forcing users to upgrade Mail app to new Outlook client which comes with a nasty addition – adverts

Windows 11 and Windows 10 users are being forced to upgrade to a new version of Microsoft’s built-in email app, with the Mail app becoming Outlook.

Windows Latest highlighted the situation when it happened to the tech site – and when we opened Mail, it was the same deal for us (albeit the upgrade process happened in a different way – we’ll come back to that shortly).

As Windows Latest explains, when opening the Mail app, they were informed by a pop-up that the Mail and Calendar apps are changing to be replaced by a new unified Outlook app. (We’ve previously been told about those old apps going out of support before 2024 comes to a close).

This new Outlook web app replaces both of those clients, and before they knew it, Windows Latest was looking at the new app rather than the old Mail client. The all-in-one replacement has a fair few changes from the Mail app, as we’ve explored before.

Now, this isn’t an irreversible change – not yet, at least: there’s a slider at the top-left of the app window which says ‘New Outlook’, and if you switch it off, you’ll be sent back to the old Mail app.

That said, when doing this, Microsoft warns you that while you can switch back now, you will be returned to the new Outlook in the future. So that forced upgrade is coming soon, and it will be irreversible.

Analysis: Gloomy Outlook – cloudy with a chance of ads

We hadn’t opened the Mail app for some time, so upon reading Windows Latest’s tale, we tried it – and indeed we got a small message: “A newer version of Outlook is required to continue. Outlook will now check for updates.”

Our Mail client was then automatically upgraded to the new web Outlook, just as with Windows Latest. We weren’t treated to the fancier (graphical) pop-ups the tech site experienced, though – we just got a simple text-based dialog box (possibly because the PC we were on is still running Windows 10).

So, it seems this is a wide rollout of the forced upgrade, albeit, as noted, a change that can be temporarily rescinded – although later this year, you will be transferred to the new Outlook email app whether you want it or not.

Why aren’t people keen on the new email client? Well, it’s a whole different layout, and change can take some getting used to, as always. Others complain that it diverts important messages away from the main inbox (the ‘Focused’ pane) too readily. However, the biggest stumbling block for many is that the new Outlook has adverts – although those with a Microsoft 365 subscription don’t see them (we have a subscription, so weren’t bothered by adverts).

Certainly, adverts are a nasty sting in the tail, but you may just have to get used to them if you’re not an Office (sorry, Microsoft 365) subscriber. Microsoft’s constantly experimenting with more ads and promotional tactics in Windows 11 (and 10), sadly, and increasingly it seems that’s something we’ll have to live with.

Microsoft Paint could get Midjourney-like powers soon thanks to a surprise AI upgrade

Microsoft has been paying quite a lot of attention to its once-forgotten Paint app recently, which had gone years without any meaningful updates or new features. Now, it seems like the app is getting yet another upgrade – a Midjourney-like ability to generate AI art in real-time. 

So, what does that mean? If you’re unfamiliar with the popular image generator Midjourney, it’s an AI-powered tool that lets you type in a text prompt to generate an image in a style of your choosing – be it painterly, photorealistic, or even pixel art.

The rumor comes from the credible Windows leaker PhantomOfEarth on X (formerly Twitter), who made a post stating that “The upcoming AI feature for paint may be something known as ‘LiveCanvas’”. While the leaker isn’t entirely sure what exactly the feature will be, it does sound very similar to Leonardo.Ai’s Real-Time Canvas.

Real-Time Canvas allows you to draw in one window and watch in a second window as generative AI brings your art to life – like a sort of artistic auto-fill. This would fit perfectly in Microsoft Paint – users would be able to sketch out their ideas or create art and use the generative AI technology to add to it. Microsoft already has some basic (and, if I’m being honest, kind of average) AI-powered image generation within Paint, so it would make sense to add a more interactive feature like this rather than simply repeat something it already has.
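
To be clear, nothing has been confirmed about how LiveCanvas would work under the hood. But the underlying sketch-to-image technique is well established, and you can approximate it today with open tools. Here’s a minimal Python sketch using Hugging Face’s diffusers library and the SD-Turbo model (a fast model of the kind real-time canvases rely on) – the model choice, file names, and parameters are all our assumptions, not anything Microsoft has announced:

```python
# pip install diffusers transformers accelerate torch
import torch
from diffusers import AutoPipelineForImage2Image
from PIL import Image

# SD-Turbo generates in very few steps, which is what makes "live" canvases feasible
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

sketch = Image.open("doodle.png").convert("RGB").resize((512, 512))  # your rough drawing
result = pipe(
    prompt="a watercolor painting of a lighthouse at sunset",
    image=sketch,
    strength=0.5,           # how far the model may drift from the sketch
    num_inference_steps=2,  # SD-Turbo needs steps * strength >= 1
    guidance_scale=0.0,     # SD-Turbo is trained to run without classifier-free guidance
).images[0]
result.save("generated.png")
```

Run in a loop on each brush stroke, this kind of pipeline is what gives a "live" canvas its paint-as-you-draw feel.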

We’re quite excited to see how this tool could help budding artists looking to experiment with generative AI, since it’ll be available for free in Windows. With the ability to draw in one window and edit in another, you can create the bare bones of your artwork and add finer details with the AI. It's approaching a more 'moral' application of generative AI – one that doesn't simply cut out the human creator entirely.

We don’t know much about expected release dates, or even have a rough idea of what the feature would look like outside of PhantomOfEarth’s post – and, as always, we should take leaks like this with a pinch of salt. It’s likely the feature will eventually make its way to the Windows Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way. So, we’ll have to wait and see if it comes to fruition – and get doodling.

Microsoft is giving Windows Copilot an upgrade with Power Automate, promising to banish boring tasks thanks to AI

Microsoft has revealed a new plug-in for Copilot, its artificial intelligence (AI) assistant. Named Power Automate, the plug-in will enable users to (as the name suggests) automate repetitive and tedious tasks, such as creating and manipulating entries in Excel, handling PDFs, and managing files.

This development is part of a bigger Copilot update package that will see several new capabilities being added to the digital AI assistant.

Microsoft gives the following examples of tasks this new Copilot plug-in could automate (there’s a rough sketch of what one might look like as a plain script after the list):

  • Write an email to my team wishing everyone a happy weekend.
  • List the top 5 highest mountains in the world in an Excel file.
  • Rename all PDF files in a folder to add the word final at the end.
  • Move all word documents to another folder.
  • I need to split a PDF by the first page. Can you help?
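
As an illustration of the kind of work the plug-in is automating, here’s what the PDF-renaming example might look like as a plain Python script – the folder path is hypothetical, and Power Automate actually builds these as desktop flows rather than code:

```python
from pathlib import Path

folder = Path.home() / "Documents" / "reports"  # hypothetical folder
for pdf in folder.glob("*.pdf"):
    # "report.pdf" becomes "report final.pdf"
    pdf.rename(pdf.with_name(f"{pdf.stem} final{pdf.suffix}"))
```

The pitch of the plug-in is that you describe a task like this in plain English and Copilot builds the equivalent automation for you.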

Who can get the Power Automate plug-in and how

As of now, it seems like this plug-in is only available to some users with access to Windows 11 Preview Build 26058, available to Windows Insiders in the Canary and Dev Channels of the Windows Insider Program. The Windows Insider Program is a Microsoft-run community for Windows enthusiasts and professionals where users can get early access to upcoming versions of Windows, features, and more, and provide feedback to Microsoft developers to improve these before a wider rollout.

Hopefully, the Power Automate plug-in for Copilot will prove a hit with testers – and if it is, we should see it rolled out to all Windows 11 users soon.

As per the blog post announcing the Copilot update, this is the first release of the plug-in. Power Automate is part of Microsoft’s Power Platform, a comprehensive suite of tools designed to help users make their workflows more efficient. To be able to use this plug-in, you’ll need to download Power Automate for Desktop from the Microsoft Store (or make sure you have the latest version of Power Automate).

There are multiple options for using Power Automate: a free plan suitable for personal use or smaller projects, and premium plans that offer more advanced features. From what we can tell, the ability to enable the Power Automate plug-in for Copilot will be available to all users, free and premium, though Microsoft might change this.

Once you’ve made sure you have the latest version of Power Automate downloaded, you’ll also need to be signed into Copilot for Windows with a Microsoft Account. Then you’ll need to add the plug-in to Copilot. To do this, go to the Plugins section in the Copilot app for Windows and turn on the Power Automate plug-in, which should now be visible. Once enabled, you should be able to ask it to perform a task like one of the above examples and see how Copilot copes for yourself.

Once you try the plug-in for yourself, if you have any thoughts about it, you can share them with Microsoft directly at [email protected].

Copilot in Windows

(Image credit: Microsoft)

Hopefully, a sign of more to come

The language Microsoft is using about the plug-in implies that it will see future improvements that enable it – and therefore Copilot – to carry out more tasks. Upgrades like this are steps in the right direction if they’re as effective as they sound.

This could address one of the biggest complaints people have about Copilot since it was launched. Microsoft presented it as a Swiss Army Knife-like digital assistant with all kinds of AI capabilities, and, at least for now, it’s not anywhere near that. While we admire Microsoft’s AI ambitions, the company did make big promises, and many users are growing impatient. 

I guess we’ll just have to wait and see whether Copilot lives up to Microsoft’s messaging, or whether it goes the way of Microsoft’s other digital assistants like Cortana and Clippy.

Your Meta Quest 3 is getting a hand-tracking upgrade that could unlock foot-tracking

In our Apple Vision Pro review, we commended the headset for wowing us with its dual hand-and-eye-tracking system. Meta has now launched its own dual-tracking system for the Meta Quest 3 and Meta Quest Pro, though the eye-tracking has been swapped for controller tracking, so you can use controllers and hands simultaneously – and people are already using it for foot-tracking.

Admittedly, this feature isn’t entirely new. Since hand tracking launched, it has been possible to swap between the two within apps that support both – though there was a delay when switching modes, and as soon as you put the controllers down they’d disappear from your view (making them a challenge to find again in VR).

This new ‘Multimodal’ method that tracks both at the same time has technically been around for a while too. It launched back in July 2023; however, it was in beta, which meant official Quest Store apps and App Lab software couldn’t implement it. Instead, software using Multimodal tracking had to be shared via third-party app stores like SideQuest.

Now, with Quest update v62, it has launched fully (via UploadVR), meaning VR games and apps distributed through the native Quest Store can add Multimodal tracking for Meta Quest 3 and Quest Pro users. This not only allows apps to transition instantly from one method to the other, it also means you can use controllers and your hands at the same time, opening up new ways to interact with virtual worlds.

Perhaps we’ll see an adventure game where you wield a sword in one hand and perform Doctor Strange-like spells with your free hand, or existing apps that only use one controller could add some hand-tracking features – even something as simple as the ability to make hand gestures to improve communication in multiplayer games.

People who have been testing the feature have pointed out this new system could allow tracking of multiple body parts at once. In one example, Twitter user @Lunayian attaches Quest Pro controllers to their feet so they can use their hands and feet in VR without a complex tracking rig.

Unfortunately, the Oculus Quest 2 lacks the processing power to enable simultaneous hand and controller tracking with its base controllers. However, you can unlock this feature if you buy and pair Touch Pro controllers with the headset – they’ll cost you $299.99 / £299.99 / AU$479.99 for two – as they track themselves, allowing the Quest 2 to focus on your hands.

You might want to hold off on picking up the Touch Pro controllers though, as while this feature is now live for developers to use in official Quest Store apps, it’ll take time to appear in your favorite VR and MR software. Hopefully, we won't be waiting long.

Google’s Gemini AI can now handle bigger prompts thanks to next-gen upgrade

Google’s Gemini AI has only been around for two months at the time of writing, and already the company is launching its next-generation model, dubbed Gemini 1.5.

The announcement post gets into the nitty-gritty, explaining all the AI’s improvements in detail. It’s all rather technical, but the main takeaway is that Gemini 1.5 will deliver “dramatically enhanced performance.” This was accomplished by implementing a “Mixture-of-Experts architecture” (or MoE for short), which sees multiple AI models working together in unison. Implementing this structure made Gemini easier to train, as well as faster at learning complicated tasks than before.
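
To make the idea concrete, here’s a toy Python sketch of the routing principle behind a Mixture-of-Experts layer: a gating network scores a pool of expert networks, and only the top-scoring experts actually process the input. This is a bare-bones illustration of the general technique, not a description of how Gemini itself is built:

```python
import numpy as np

def moe_layer(x, experts, gate_weights, top_k=2):
    """Send x through the top_k experts chosen by a simple gating network."""
    scores = x @ gate_weights                 # one raw score per expert
    top = np.argsort(scores)[-top_k:]         # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                  # softmax over the chosen experts only
    # Blend the chosen experts' outputs; the rest are skipped entirely
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: four "experts", each just a fixed random linear map
rng = np.random.default_rng(0)
experts = [lambda v, W=rng.normal(size=(8, 8)): v @ W for _ in range(4)]
gate_weights = rng.normal(size=(8, 4))
print(moe_layer(rng.normal(size=8), experts, gate_weights).shape)  # -> (8,)
```

The efficiency win is that only a fraction of the network runs for any given input, which is why MoE models can be both very large and comparatively cheap to train and serve.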

There are plans to roll out the upgrade to all three major versions of the AI, but the only one being released today for early testing is Gemini 1.5 Pro. 

What’s unique about it is that the model has “a context window of up to 1 million tokens”. Tokens, as they relate to generative AI, are the smallest pieces of data LLMs (large language models) use “to process and generate text.” Bigger context windows allow the AI to take in more information at once, and a million tokens is huge, far exceeding what GPT-4 Turbo can handle. OpenAI’s engine, for the sake of comparison, has a context window cap of 128,000 tokens.
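
If you want a feel for what a token budget means in practice, you can count tokens yourself with tiktoken, OpenAI’s open-source tokenizer (tokenizers differ between models, so this is only an approximation for Gemini, and the file name here is hypothetical):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base, used by GPT-4-class models
text = open("apollo11_transcript.txt").read()  # hypothetical long document
tokens = enc.encode(text)
print(f"{len(tokens):,} tokens")
# Under ~128,000 fits GPT-4 Turbo; Gemini 1.5 Pro claims room for up to 1 million
```

As a rule of thumb, a token works out to roughly four characters of English text, so a million tokens is on the order of 700,000 words.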

Gemini Pro in action

With all these numbers being thrown around, the question is: what does Gemini 1.5 Pro look like in action? Google made several videos showcasing the AI’s abilities, and admittedly it’s pretty interesting stuff, as they reveal how the upgraded model can analyze and summarize large amounts of text according to a prompt.

In one example, Google gave Gemini 1.5 Pro the 400-plus-page transcript of the Apollo 11 moon mission, showing the AI could “understand, reason about, and identify” certain details in the document. The prompter asks the AI to locate “comedic moments” during the mission, and after 30 seconds, Gemini 1.5 Pro managed to find a few jokes the astronauts cracked while in space, identifying who told each one and explaining any references they made.

These analysis skills extend to other modalities. In another demo, the dev team gave the AI a 44-minute Buster Keaton movie. They uploaded a rough sketch of a gushing water tower and asked for the timestamp of a scene involving one. Sure enough, it found the exact part ten minutes into the film. Keep in mind this was done without any explanation of the drawing itself, or any text besides the question – Gemini 1.5 Pro understood it was a water tower without extra help.

Experimental tech

The model is not available to the general public at the moment. Currently, it’s being offered as an early preview to “developers and enterprise customers” through Google’s AI Studio and Vertex AI platforms for free. The company is warning testers they may experience long latency times since it is still experimental. There are plans, however, to improve speeds down the line.

We reached out to Google asking for information on when people can expect the launch of Gemini 1.5 and Gemini 1.5 Ultra plus the wider release of these next-gen AI models. This story will be updated at a later time. Until then, check out TechRadar's roundup of the best AI content generators for 2024.

Microsoft’s Sticky Notes teases upcoming upgrade: will it impress users with sparkly new features, or is it another sticky situation for Microsoft?

The Sticky Notes app for Windows is about to get possibly its most significant update yet. The default Windows app functions similarly to how most people use post-its in real life – you can quickly jot down notes and make them visible on your desktop. It’s been four years since we’ve seen any major updates to Sticky Notes, and Microsoft is promising that it’s got big things in mind for the handy app. 

The update was announced by the official Microsoft Sticky Notes account on X (formerly Twitter), the first post from the account since April 2020. The post generated buzz from users who quickly got to speculating about what Microsoft might be cooking, with many users being quick to express concern that the new Sticky Notes will be a web-based app.

Windows fans launch into speculation 

Some users guessed that the app was getting an AI-powered injection similar to those seen in apps like Notepad and Paint, in line with Microsoft’s big push into AI-aided tools. In fact, our own Muskaan Saxena wrote about her hopes for an AI-powered Sticky Notes app earlier this year. It looks like neither this nor the notion of a web-based version is the case, however, with the @stickynotes profile replying to its first announcement post that the Sticky Notes app will not be a web app (for now, at least).

The account then followed up with a number of playful posts teasing users about the upcoming upgrade, including one that looks like a screen grab of the app and reads:

“Lots of rumors swirling about our update. Can you guess what it is?

Wrong answers only. 

We’ll go first… 

Sticky Notes AI upgrade.” 

Right now, Sticky Notes seems to enjoy a good reputation among users and Windows fans – even if it does have a relatively basic feature set. Neowin praises the app's “reliability and simplicity,” and Microsoft would do well to prioritize and preserve these qualities.

Microsoft logo

(Image credit: Shutterstock)

Microsoft's recent track record

Microsoft recently launched the new web-based Outlook app, replacing existing desktop apps like Mail, to a less-than-enthusiastic reception. Users have expressed their disappointment with the new Outlook app's missing features, and with its functioning as a powerful data harvester for Microsoft, as reported by Proton AG (a company offering online services with an emphasis on privacy). This recent Outlook news has left users skeptical about future changes coming from Microsoft.

Fans and watchers of the Sticky Notes app are evidently open to seeing what Microsoft has in store, while not hiding their concerns – and Microsoft might just pull something truly impressive out of the bag. Some users have raised the question of whether Sticky Notes actually needs new and fancy features, but perhaps it’ll be easy enough to simply ignore whatever you don’t need.

Personally, I agree that an app like Sticky Notes might be best fit for purpose when kept simple, and even if Microsoft adds features, there’s probably plenty of scope for development without needing to invoke AI. We’ll have to see just how exciting this upgrade is when it actually arrives – until then, we’ll just have to wait and hope Microsoft listens to the user feedback that’s already out there.

Should you upgrade to Google One AI Premium? Its AI features and pricing explained

Google has been busy revamping its AI offerings, renaming Bard to Gemini, pushing out a dedicated Android app, and lots more besides. There's also now a paid tier for Google's generative AI engine for the first time, which means another digital subscription for you to weigh up.

You can read our Google Gemini explainer for a broad overview of Google's AI tools. But here we'll be breaking down the Google Gemini Advanced features that come as part of the new Google One AI Premium tier.

We'll be exploring how much this new cloud tier costs, plus all the AI features and benefits it brings, so you can decide whether or not you'd like to sign up. It's been added as one of the Google One plans, so you get some digital storage in the cloud included, too. Here's how Google One AI Premium is shaping up so far…

Google One AI Premium: price and availability

The Google One AI Premium plan is available to buy now and will cost you $19.99 / £18.99 / AU$32.99 a month. Unlike some other Google One plans, you can't pay annually to get a discount on the overall price, but you can cancel whenever you like.

At the time of writing, Google is offering free two-month trials of Google One AI Premium, so you won't have to pay anything for the first two months. You can sign up and compare plans on the Google One site.

Google One AI Premium: features and benefits

First of all, you get 2TB of storage to use across your Google services: Gmail, Google Drive, and Google Photos. If you've been hitting the limits of the free storage plan – a measly 15GB – then that's another reason to upgrade.

You'll notice a variety of other Google One plans are available, offering storage from 2TB to 30TB, but it's only the Google One AI Premium plan that comes with all of the Gemini Advanced features.

Besides the actual storage space, all Google One plans include priority support, 10% back in the Google Store, extra Google Photos editing features (including Magic Eraser), a dark web monitoring service that'll look for any leaks of your personal information, and use of the Google One VPN.

Google Gemini Advanced on the web

Google Gemini Advanced on the web (Image credit: Google)

It's the AI features that you're here for though, and the key part of Google One AI Premium is that you get access to Gemini Advanced: that means the “most capable” version of Google's Gemini model, known as Ultra 1.0. You can think of it a bit like paying for ChatGPT Plus compared to sticking on the free ChatGPT plan.

Google describes Gemini Ultra 1.0 as offering “state-of-the-art performance” that's capable of handling “highly complex tasks” – tasks that can involve text, images, and code. Longer conversations are possible with Gemini Advanced, and it understands context better too. If you want the most powerful AI that Google has to offer, this is it.

Google Gemini app

A premium subscription will supercharge the Gemini app (Image credit: Google)

“The largest model Ultra 1.0 is the first to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects — including math, physics, history, law, medicine and ethics — to test knowledge and problem-solving abilities,” writes Google CEO Sundar Pichai.

The dedicated Google Gemini app for Android, and the Gemini features built into the Google app for iOS, are available to everyone, whether they pay for a subscription or not – and it's the same with the web interface. However, if you're on the premium plan, you'll get the superior Ultra 1.0 model in all these places.

By the way, a standard 2TB Google One plan – with everything from the photo editing tricks to the VPN, but without the AI – will cost you $9.99 / £7.99 / AU$19.99 a month, so you're effectively paying $10 / £11 / AU$13 for Gemini Advanced.

A laptop on an orange background showing Gmail with Google Gemini

An example of Google Gemini in Gmail (Image credit: Google)

Gemini integration with Google's productivity apps – including Gmail, Google Docs, Google Meet, and Google Slides – is going to be “available soon”, Google says, and when it does become available, you'll get it as part of a Google One AI Premium plan. It'll give you help in composing your emails, designing your slideshows, and so on.

This is a rebranding of the Duet AI features that Google has previously rolled out for users of its apps, and it's now known as Gemini for Workspace. Whether you're an individual or a business user though, you'll be able to get these integrated AI tools if you sign up for the Google One AI Premium plan.

So there you have it: beyond the standard 2TB Google One plan, the main takeaway is that you get access to the latest and greatest Gemini AI features from Google, and the company is promising that there will be plenty more on the way in the future, too.

Google One AI Premium early verdict

On one hand, Google's free two-month trial of the One AI Premium Plan (which contains Gemini Advanced) feels like a no-brainer for those who want to tinker with some of the most powerful AI tools available right now. As long as you're fairly disciplined about canceling unwanted free trials, of course.

But it's also still very early days for Gemini Advanced. We haven't yet been able to put it through its paces or compare it to the likes of ChatGPT Plus. Its integration with Google's productivity apps is also only “available soon”, so it's not yet clear when that will happen.

The Google Gemini logo on a laptop screen that's on an orange background

(Image credit: Google)

If you want to deep dive into the performance of Google's latest AI models – including Gemini Advanced – you can read the company's Gemini benchmarking report. Some lucky testers, like AI professor Ethan Mollick, have also been tinkering with Gemini Advanced for some time after getting early access.

The early impressions seem to be that Gemini Advanced is shaping up to be a GPT-4 class AI contender that's capable of competing with ChatGPT Plus for demanding tasks like coding and advanced problem-solving. It also promises to integrate nicely with Google's apps. How well it does that in reality is something we'll have to wait a little while to find out, but that free trial is there for early adopters who want to dive straight in.
