Six major ChatGPT updates OpenAI unveiled at its Spring Update – and why we can’t stop talking about them

OpenAI just held its eagerly-anticipated spring update event, making a series of exciting announcements and demonstrating the eye- and ear-popping capabilities of its newest GPT AI models. There were changes to model availability for all users, and at the center of the hype and attention: GPT-4o. 

Coming just 24 hours before Google I/O, the launch raises the stakes for Google's Gemini. If GPT-4o is as impressive as it looked, Google's anticipated Gemini update had better be mind-blowing. 

What's all the fuss about? Let's dig into all the details of what OpenAI announced. 

1. The announcement and demonstration of GPT-4o, and that it will be available to all users for free

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

The biggest announcement of the stream was the unveiling of GPT-4o (the 'o' standing for 'omni'), which combines audio, visual, and text processing in real time. Eventually, this version of OpenAI's GPT technology will be made available to all users for free, with usage limits.

For now, though, it's being rolled out to ChatGPT Plus users, who will get up to five times the messaging limits of free users. Team and Enterprise users will also get higher limits and access to it sooner. 

GPT-4o will have GPT-4's intelligence, but it'll be faster and more responsive in daily use. Plus, you'll be able to provide it with or ask it to generate any combination of text, image, and audio.

The stream saw Mira Murati, Chief Technology Officer at OpenAI, and two researchers, Mark Chen and Barret Zoph, demonstrate GPT-4o's real-time responsiveness in conversation while using its voice functionality. 

The demo began with a conversation about Chen's mental state, with GPT-4o listening and responding to his breathing. It then told Barret a bedtime story, dialing up the drama in its voice on request – it was even asked to talk like a robot.

It continued with a demonstration of Barret “showing” GPT-4o a mathematical problem, with the model guiding him through solving it by providing hints and encouragement. Chen asked why this specific mathematical concept was useful, and the model answered at length. 

A look at the updated mobile app interface for ChatGPT. (Image credit: OpenAI)

They followed this up by showing GPT-4o some code, which it explained in plain English, before it provided feedback on the plot the code generated. The model talked about notable events, the axis labels, and the range of inputs. This was meant to show OpenAI's continued commitment to improving how its GPT models work with codebases and handle mathematics.

The penultimate demonstration was an impressive display of GPT-4o's linguistic abilities, as it simultaneously translated two languages – English and Italian – out loud. 

Lastly, OpenAI provided a brief demo of GPT-4o's ability to identify emotions from a selfie sent by Barret, noting that he looked happy and cheerful.

If the AI model works as demonstrated, you'll be able to speak to it more naturally than many existing generative AI voice models and other digital assistants. You'll be able to interrupt it instead of having a turn-based conversation, and it'll continue to process and respond – similar to how we speak to each other naturally. Also, the lag between query and response, previously about two to three seconds, has been dramatically reduced. 

ChatGPT equipped with GPT-4o will roll out over the coming weeks, free to try. This comes a few weeks after OpenAI made ChatGPT available to try without signing up for an account. 

2. Free users will have access to the GPT store, the memory function, the browse function, and advanced data analysis

OpenAI unveils the GPT Store at its Spring Update event. (Image credit: OpenAI)

GPTs are custom chatbots created by OpenAI and ChatGPT Plus users to help enable more specific conversations and tasks. Now, many more users can access them in the GPT Store.

Additionally, free users will be able to use ChatGPT's memory functionality, which makes it a more useful and helpful tool by giving it a sense of continuity. Also being added to the no-cost plan are ChatGPT's vision capabilities, which let you converse with the bot about uploaded items like images and documents, and the browse function, which lets ChatGPT search the web for up-to-date answers.

ChatGPT's abilities have improved in quality and speed in 50 languages, supporting OpenAI’s aim to bring its powers to as many people as possible. 

3. GPT-4o will be available in the API for developers


GPT-4o will be available in the API for developers. (Image credit: OpenAI)

OpenAI's latest model will be available for developers to incorporate into their AI apps as a text and vision model. Support for GPT-4o's video and audio abilities will launch soon, offered initially to a small group of trusted partners via the API.
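In practice, using GPT-4o through the API means naming the model in an ordinary chat completions request. As a rough sketch (the endpoint and field names follow OpenAI's published chat completions format; the prompt and image URL here are placeholders), a combined text-and-image request body could be assembled like this:

```python
import json

# Sketch of the JSON body for a POST to https://api.openai.com/v1/chat/completions.
# The "gpt-4o" model name and the multimodal "messages" content-part format are
# per OpenAI's public API docs; the prompt and image URL are placeholders.
def build_gpt4o_request(prompt: str, image_url: str) -> str:
    body = {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }
    return json.dumps(body)

print(build_gpt4o_request("Describe this plot", "https://example.com/plot.png"))
```

The same body would be sent with an `Authorization: Bearer <API key>` header; official SDKs wrap this in a single method call.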

4. The new ChatGPT desktop app 

A look at the new ChatGPT desktop app running on a Mac. (Image credit: OpenAI)

OpenAI is releasing a desktop app for macOS to advance its mission to make its products as easy and frictionless to use as possible, wherever you are and whichever model you're using, including the new GPT-4o. You'll be able to assign keyboard shortcuts to carry out tasks even more quickly. 

According to OpenAI, the desktop app is available to ChatGPT Plus users now and will be available to more users in the coming weeks. It sports a similar design to the updated interface in the mobile app as well.

5. A refreshed ChatGPT user interface

ChatGPT is getting a more natural and intuitive user interface, refreshed to make interacting with the model easier and less jarring. OpenAI wants to get to the point where people barely focus on the AI itself, and for ChatGPT to feel friendlier. This means a new home screen, message layout, and other changes. 

6. OpenAI's not done yet

(Image credit: OpenAI)

The mission is bold, with OpenAI looking to demystify technology while creating some of the most complex technology that most people can access. Murati wrapped up by stating that we will soon be updated on what OpenAI is preparing to show us next and thanking Nvidia for providing the most advanced GPUs to make the demonstration possible. 

OpenAI is determined to shape our interaction with devices, closely studying how humans interact with each other and trying to apply its learnings to its products. The latency of processing all of the different nuances of interaction is part of what dictates how we behave with products like ChatGPT, and OpenAI has been working hard to reduce this. As Murati puts it, its capabilities will continue to evolve, and it’ll get even better at helping you with exactly what you’re doing or asking about at exactly the right moment. 

You Might Also Like

TechRadar – All the latest technology news

Read More

Windows 11’s Copilot AI just took its first step towards being an indispensable assistant for Android – but Google Gemini hasn’t got anything to worry about yet

Microsoft’s Copilot AI could soon help Windows 11 users deal with texting on their Android smartphone (and much more besides in the future).

Windows Latest noticed that there’s a new plug-in for Copilot (the recently introduced add-ons that bring extra functionality to the AI assistant), which is reportedly rolling out to more people this week. It’s called the ‘Phone’ plug-in – which is succinct and very much to the point.

As you might guess, the plug-in works by leveraging the Phone Link app that connects your mobile to your Windows 11 PC and offers all sorts of nifty features therein.

So, you need to have the Phone Link app up and running before you can install the Copilot Phone plug-in. Once that’s done, Windows Latest explains that the abilities you’ll gain include being able to use Copilot to read and send text messages on your Android device (via the PC, of course), or look up contact information.

Right now, the plug-in doesn’t work properly, mind you, but doubtless Microsoft will be ironing out any problems. When Windows Latest tried to initiate a phone call, the plug-in didn’t facilitate this, but it did provide the correct contact info, so the call could be dialed manually.

The fact that this functionality is very basic looking right now means Google will hardly be losing any sleep – and moreover, this isn’t a direct rival for the Gemini AI app anyway, as it works to facilitate managing your Android device on your PC desktop.

Expect far greater powers to come in the future

Microsoft has previously teased the kind of powers Copilot will eventually have when it comes to hooking up your Windows 11 PC and Android phone together. For example, the AI will be able to sift through texts on your phone and extract relevant information (like the time of a dinner reservation, if you’ve made arrangements via text).

Eventually, this plug-in could be really handy, but right now, it’s still in a very early working state as noted.

While it’s for Android only for the time being, the Phone plug-in for Copilot should be coming to iOS as well, as Microsoft caters for iPhones with Phone Link (albeit in a more limited fashion). Still, this isn’t confirmed, but we can’t imagine Microsoft will leave iPhone owners completely out in the cold when it comes to AI features such as this.


Logic Pro 2 is a reminder that Apple’s AI ambitions aren’t just about chatbots

While the focus of Apple’s May 7 special event was mostly hardware — four new iPads, a new Apple Pencil, and a new Magic Keyboard — there were mentions of AI with the M2 and M4 chips as well as new versions of Final Cut Pro and Logic Pro for the tablets. 

The latter is all about new AI-infused or powered features that let you create a drum beat or a piano riff or even add a warmer, more distorted feel to a recorded element. Even neater, Logic Pro for iPad 2 can now take a single recording and split it into individual tracks based on the instruments in a matter of seconds. 

It’s a look behind the curtain at the kind of AI features Apple sees as having the biggest appeal and utility. Notably, unlike some rollouts from Google or OpenAI, it’s not a chatbot or an image generator. With Logic Pro, you're getting features that are genuinely helpful and further expand what you can do within the app.

A trio of AI-powered additions for Logic Pro for iPad

Stem Splitter can separate a single track into four individual ones split up by instrument.  (Image credit: Apple)

Arguably the most helpful feature for musicians will be Stem Splitter, which aims to solve the problem of separating out elements within a given track. Say you’re working through a track or giving an impromptu performance at a cafe; you might just hit record in Voice Memos on an iPhone or using a single microphone.

The result is one track containing all the instruments mixed together. Logic Pro 2 can now import that track, analyze it, and split it into four tracks: vocals, drums, bass, and other instruments. It won’t change the sound, but it essentially puts each element on a separate track, allowing you to easily modify or edit it. You can even apply plugins – something Logic is known for – on both iPad and Mac.

The iPad Pro with M4 will likely be mighty speedy when tackling this thanks to its 16-core neural processing unit, but it will work on any iPad with Apple Silicon through a mixture of on-device AI and deep learning. For musicians big or small, it’s poised to be a simple, intuitive way to convert voice memos into workable and mixable tracks.

AI-powered instruments to complete a track

A look at the Bass Session Player within Logic Pro for iPad 2. (Image credit: Apple)

Building on Stem Splitter is a big expansion of Session Players. Logic Pro has long offered Drummer – both on Mac and iPad – as a way to easily add drums to a track via a virtual player that can be customized by style and even complexity. Logic Pro for iPad 2 adds a piano and a bass player to the mix, both highly adjustable session players for any given track. With piano in particular, you can customize the left or right hand’s playing style, pick between four types of piano, and use a plethora of other sliding tools. It's even smart enough to recognize where it is in a track, be it a chorus or a bridge. On an iPad Pro, it took only a few seconds to come up with a decent-sized track.

If you only sing, or desperately need a bass line for your track, Logic Pro for iPad 2 aims to solve this with an output that plays along with, and complements, any existing track.

Rounding out this AI expansion for Logic Pro on the iPad is a Chromaglow effect, which takes a common, expensive piece of hardware reserved for studios and places it on the iPad to add a bit more space, color, and even warmth to the track. Like other Logic plugins, you can pick between a few presets and further adjust them.

Interestingly enough, alongside these updates, Apple didn’t show off any new Apple Pencil integrations for Logic Pro for iPad 2. I’d have to imagine that we might see a customized experience with the palette tool at some point.

It’s clear that Apple’s approach to AI, like its other software, services, and hardware, is centered around crafting a meaningful experience for whoever uses it. In this case, for musicians, it’s solving pain points and opening doors for creativity further.

Stem Splitter, new session players, and Chromaglow feel right at home within Logic Pro, and I expect to see similar enhancements to other Apple apps announced at WWDC. Just imagine an easier way to edit photos or videos baked into the Photos app or a way to streamline or condense a presentation within Keynote.

Pricing and Availability

All of these features are bundled into Logic Pro for iPad 2, which is set to launch on May 13, 2024. If you’re already subscribed at $4.99 a month or $49 a year, you’ll get the update for free, and there is no price increase if you’re new to the app. Additionally, first-time Logic Pro for iPad users can get a one-month free trial.


Microsoft improves File Explorer in Windows 11 testing, but appears to have second thoughts about some Copilot ideas

Windows 11 just received a new preview build and it makes a number of important changes to the central pillar of the operating system’s interface, File Explorer – and there’s an interesting announcement about Copilot here, too.

As you may be aware, File Explorer is what you’re using when opening folders on your desktop, and Windows 11 got web browser-style tabs in these folders courtesy of the first major update for the OS (at the end of 2022).

In the new build 22635 in the Beta channel, Microsoft has introduced the ability to easily duplicate a tab in File Explorer.

All you need to do is right-click on an existing tab, and there’s a new option to duplicate it – click that and a second copy of the tab will be opened. It’s a neat shortcut if you want to dive deeper into other folders inside a particular folder, while keeping that original folder open.

On top of this, the preview build ushers in multiple fixes for this part of the interface, including one for a memory leak when working with ZIP folders in a File Explorer window. A fix has also been implemented for an issue where the spacing between icons in File Explorer becomes very wide.

There’s also a cure for a bug where a search wouldn’t work the first time you tried it, and it’d return no results. Microsoft also notes that it: “Fixed a few issues impacting File Explorer reliability.”

There’s not much else happening in build 22635 – check out the blog post for the full list of other tweaks – but Microsoft has taken a notable step back with Copilot.

The company notes that over the past few months in Windows 11 preview builds, it has tried out a few new ideas with the AI assistant, observing that: “Some of these experiences include the ability for Copilot in Windows to act like a normal application window and the taskbar icon animating to indicate that Copilot can help when you copy text or images. We have decided to pause the rollouts of these experiences to further refine them based on user feedback.”

Analysis: Some careful thought is required for Copilot visibility

It’s interesting to see that feedback has resulted in a halt on those Copilot experiments, though obviously Microsoft is careful not to say exactly why these changes have been rescinded (for now).

We were particularly skeptical about having Copilot effectively waving its hands at you from the taskbar, with that animation declaring it can help with something, so we aren’t too surprised Microsoft is having a careful think about how to proceed here.

If there is any behavior along those sorts of lines, it’ll have to be subtle, and users will need the ability to switch it off if they don’t want animations on the icon (something that’s also happening with widgets on the taskbar). We’ll be keeping a close eye on Microsoft’s moves in this respect.

The work on File Explorer is good to see, and should make it more stable and reliable overall. Duplicate tabs are a useful shortcut to have brought in, as well, and were only recently spotted hidden in test builds, so Microsoft has moved pretty swiftly to officially introduce this change.


Bad news, Windows 11 users: ads are coming to the Start menu, but there’s something you can do about it

Microsoft seems intent on pushing its luck with its users, as it’s just released an optional Windows 11 update (KB5036980) which adds yet more adverts to the Start Menu – a move that hasn’t gone down at all well with many people.

The update is available for users running Windows 11 version 23H2 and 22H2 in Windows Update, and it’s also available to download directly from its Update Catalog.

If you’d like to install the update using Windows Update, follow these steps:

1. Go to Settings > Windows Update.

2. Click ‘Check for updates.’

3. After your system detects the availability of the update, click ‘Download & Install.’

The patch should appear with the full name “2024-04 Cumulative Update for Windows 11 Version 23H2 for x64-based Systems (KB5036980).”

For the moment, this is an optional update that will advance Windows 11 23H2 to Build 22631.3527 and Windows 11 22H2 to Build 22621.3527. This release is the last patch in Microsoft’s April 2024 update cycle, and if you forgo it, you will get what’s included as part of the mandatory update on May 2024’s ‘Patch Tuesday’ – the monthly event where Microsoft releases a variety of software updates for its products.

A man looking thoughtfully at a computer in an office

(Image credit: Shutterstock/dotshock)

The most talked about part of the update

This optional update has already proved controversial because it brings ads to the Start Menu – seemingly for all users. Windows Latest writes that Windows 11 users can expect adverts to begin appearing at the tail end of May. 

A screenshot of the optional update shared by Windows Latest shows the Start Menu featuring a new ad for a third-party app, the Opera browser, neatly tucked into the Recommended section. There’s a little disclaimer underneath that says “Promoted” and an Opera tagline, “Browse safely.” Apparently, a similar ad for another service, the 1Password password manager, was also spotted.

You might already be feeling uneasy about this, but there is some reassuring news. If you dislike seeing the ads, you can turn them off by doing the following: 

1. Go to Settings > Personalization > Start.

2. Turn off “Show recommendations for tips, app promotions, and more” by switching the toggle off.

Microsoft logo outside building

(Image credit: gguy / Shutterstock)

Questioning Microsoft's strategy

This optional update also adds app recommendations to the Start menu, and this section will include ‘promoted’ apps that are essentially more adverts. This ‘Recommended’ section is supposed to show the best apps from the Microsoft Store that might enhance users’ experience. 

The optional update will also include a fix so the taskbar widget icon no longer appears pixelated, plus more options for lock screen management, giving users greater control over lock screen widgets in particular.

I’m not too fond of this move from Microsoft, but I guess it’s not as egregious as it could be. That’s not me trying to encourage Microsoft to push its luck further – I think this move could already generate a lot of ill will among users – but at least you can turn the ads off.

Microsoft is also testing putting Xbox Game Pass ads in the Settings app, and some observers have called the approach billboard-like. Features like the Start menu and the Settings app are key parts of Windows 11, and having to see ads in important places like that can feel intrusive and disruptive. I personally hope Microsoft considers reversing its decision on this, as I don’t like that Windows 11 is becoming just one more aspect of my life where I can’t escape advertisements – and I’m sure I’m not alone. 



Copilot is everywhere in Windows 11 and it’s about to get harder to ignore – but is Microsoft in danger of wearing out the AI assistant’s welcome?

Windows 11 is going to see a lot more of Copilot in the future – that’s pretty obviously the line Microsoft is taking with its desktop-based assistant – and there’s fresh evidence of the AI creeping into more corners of the OS.

Firstly, we have a sighting of a new wallpaper, which came yesterday, when a couple of inbound laptops with the promising Snapdragon X Elite CPU were leaked. Both of those Lenovo notebooks had a Copilot-themed wallpaper on the desktop, so it’s a safe assumption that Microsoft has an official new background for the AI in the pipeline.

As Windows Latest observes, this is actually a traditional ‘bloom’ wallpaper, except Microsoft has redone the image in the Copilot colors (mirroring the Copilot button in the taskbar).

The tech site also points out other ways in which Copilot is creeping into Windows 11 and Microsoft Edge. For example, in the Edge browser, as highlighted by leaker Leopeva64, there’s now a bar of options pertaining to the AI when you open the Settings panel.

This bar contains suggestions for how you might use Copilot, allowing you to get advice on security settings for example, or managing your passwords in the browser. These suggestions change depending on what section of Edge’s settings you’re in, by the way, making them more relevant to what you might be looking to do.

Note that this idea is just in testing right now, and in the Canary channel to boot (the earliest test avenue).

Another ability brought in for Copilot in Edge (again, in the Canary channel) is an expanded Ask Copilot context menu. This means that when you select a section of text in a web page, there are new options for directly interacting with Copilot in this menu.

As Windows Latest explains, these choices are: Explain, Summarize, Expand, and Ask anything in Chat.

The last option acts like the current incarnation of Ask Copilot – it just fires up the AI’s panel with a query on the selected text.

With the new options, however, Explain prompts Copilot to do just that – offer an explanation of the text – and Summarize provides a summary, as you’d expect. In a similar vein, Expand goes the other way, furnishing you with extra facts or information about the selected text.

Again with Edge, Leopeva64 also spotted that AI is going to be integrated into the browser’s ‘Magnify Image’ option, with a button spotted that offers to ‘AI Enhance’ the image after it’s been blown up. This is in very early testing, though, and the button doesn’t yet do anything at all.

Another recent addition Windows Latest flagged up is ‘Circle to Copilot’ in Edge in Windows 11 (and iOS), allowing you to literally draw a circle around something to activate a Copilot query about the highlighted item.

All this comes on top of a recent move in the Beta channel of Windows 11 previews, trying out a new way of highlighting that Copilot can help with something – by animating the taskbar button for the AI when this is the case. New options have also been added to the menu that appears when you hover over the Copilot button, too, expanding that further.

Analysis: Making Copilot a more visible presence

All of this is still to come, we should note – these are changes in testing for Windows 11 or its Edge browser, and in the case of the wallpaper, a glimpse of what’s very likely to come.

Indeed, that Copilot background will likely be the default wallpaper for AI PCs starting with Snapdragon X Elite-powered laptops that launch in June. (Not forgetting Microsoft’s own Surface Pro 10 and Surface Laptop 6, the consumer spins on which will land then, and may have a custom version of the Elite SoC inside).

Overall, though, it’s clear that Microsoft is pushing forward with expanding Copilot’s capabilities, and sussing out ways in which the AI can be made more visible on the desktop – whether that’s an animation for the taskbar button (effectively declaring “It’s-a-me, Copilot, I can help with that”), or a fancy desktop wallpaper acting as a permanent reminder of the AI, if you opt for the color scheme (which does look quite funky, to be fair).

We’d be surprised if most of these tested changes didn’t come to fruition, frankly, and as noted, there’s a theme of Microsoft increasingly pushing Copilot which comes as no surprise.

The big rumored addition on the horizon is, of course, AI Explorer – but that feature (supposedly debuting in the Windows 11 24H2 update) may have an unexpected twist in its initial incarnation that’s a bit of a shocker. (Spoiler alert: If you don’t have an ARM CPU like the aforementioned Snapdragon, then you can forget it – Intel and AMD-powered PCs might be left out in the cold).


Meta AR glasses: everything we know about the AI-powered AR smart glasses

After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech).

While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package.

We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it.

Meta AR glasses: Price

We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either.

Current smart glasses like the Ray-Ban Meta Smart Glasses or the Xreal Air 2 AR smart glasses will set you back between $300 and $500 / £300 and £500 / AU$450 and AU$800; Meta’s teased specs, however, sound more advanced than what we have currently.

Lance Ulanoff showing off Google Glass

Meta’s glasses could cost as much as Google Glass (Image credit: Future)

As such, the Meta AR glasses might cost nearer $1,500 (around £1,200 / AU$2,300) – which is what the Google Glass smart glasses launched at.

A higher price seems more likely given the AR glasses’ novelty, and the fact that Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices.

We’ll have to wait and see what gets leaked and officially revealed in the future.

Meta AR glasses: Release date

Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027.

That’s according to a leaked Meta internal roadmap shared by The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027.

RayBan Meta Smart Glasses close up with the camera flashing

(Image credit: Meta)

In February 2024, Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company also takes the opportunity to show off stuff coming further down the pipeline. So its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kind of teasers.

Obviously, leaks should be taken with a pinch of salt. Meta could bring the release of its specs forward, or push it back, depending on a multitude of technological factors – we won’t know until Meta officially announces more details. But the fact that it has teased the specs suggests their release is at least a matter of when, not if.

Meta AR glasses: Specs and features

We haven't heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships.

Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect Meta will bring OLED screens to its AR glasses too. OLED displays appear in other AR smart glasses, so it would make sense if Meta followed suit.

Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027 it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The AR glasses could let you bust ghosts wherever you go (Image credit: Meta)

As for features, Meta’s already teased the two standouts: AR and AI abilities.

What this means in actual terms is yet to be seen but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play.

AI-wise, Meta is giving us a sneak peek of what's coming via its current smart glasses. That is, you can speak to its Meta AI to ask it a variety of questions and for advice, just as you can with other generative AI, but in a more conversational way, since you use your voice.

It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. It lets the specs snap a picture of what’s in front of you to inform your question – you can ask it to translate a sign you can see, suggest a recipe using the ingredients in your fridge, or name a plant so you can find out how best to care for it.

The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027 when the AR specs are expected to arrive.

Meta AR glasses: What we want to see

A slick Ray-Ban-like design 

RayBan Meta Smart Glasses

The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta)

While Meta’s smart specs aren't amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight, super comfy to wear all day, and the charging case is not only practical, it's gorgeous.

While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore.

If the partnership does end, we'd like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point.

Swappable lenses 

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

We want to change our lenses Meta! (Image credit: Meta)

While we will rave about the design of Meta's smart glasses, we'll admit there's one flaw we hope future models (like the AR glasses) improve on: they need easily swappable lenses.

While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather.

As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses) they need to be a gadget we can wear all year round, not just when the sun's out.

Speakers you can (quietly) rave to 

JBL Soundgear Sense

These open ear headphones are amazing, Meta take notes (Image credit: Future)

Hardware-wise, the main upgrade we want to see in Meta's AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they leak a fair amount of noise, the bass is practically nonexistent, and the overall sonic performance is put to shame by even basic over-ear headphones.

We know it's a struggle to get the balance right with open-ear designs. But we've been spoiled by open-ear options like the JBL SoundGear Sense – which have an astounding ability to deliver great sound while still letting you hear the real world clearly (we often forget we're wearing them) – so we've come to expect a lot, and are disappointed when gadgets don't deliver.

The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities.

You might also like

TechRadar – All the latest technology news

Read More

Confused about Google’s Find My Device? Here are 7 things you need to know

It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise: the update was originally announced back in May 2023, but was soon delayed with no new launch date in sight. Then, out of nowhere, Google released the software on April 8 without major fanfare. As a result, you may feel lost – but we can help you find your way.

Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.

1. It’s a big upgrade for Google’s old Find My Device network 

Google's Find My Device feature

(Image credit: Google)

The previous network was very limited in what it could do, only able to detect the odd Android smartphone or Wear OS smartwatch. That limitation is now gone, as Find My Device can sniff out other devices – most notably Bluetooth location trackers. 

Gadgets also don't need to be connected to the internet or have location services turned on, since the software can detect them as long as they're within Bluetooth range. However, Find My Device won't tell you exactly where the devices are; you'll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.

Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.

2. Google was waiting for Apple to add support to iPhones 

iPhone 15 from the front

(Image credit: Future)

The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone maker dragged its feet when it came to adding unknown tracker alerts to its devices.

The wait may soon be over, as the iOS 17.5 beta contains lines of code suggesting that the iPhone will soon get these anti-stalking measures. Soon, iOS devices might encourage users to disable unwanted Bluetooth trackers that are uncertified for Apple's Find My network. It's unknown when this feature will roll out, as the features in the beta don't actually do anything when enabled. 

Given the presence of this anti-stalking code within iOS 17.5, Apple's release may be imminent. Apple may have given Google the green light to roll out the Find My Device upgrade ahead of time to prepare for its own software launch.

3. It will roll out globally


(Image credit: Future)

Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.

Android devices do need to meet a couple of requirements to support the network. Luckily, they're not super strict: all you need is a smartphone running Android 9 with Bluetooth capabilities.

If you own either a Pixel 8 or Pixel 8 Pro, you'll get an exclusive feature: the ability to find your phone through the network even if it's powered down. Google reps said these models have special hardware that allows them to keep feeding power to their Bluetooth chip when they're off. Google is working with other manufacturers to bring this feature to other premium Android devices.

4. You’ll receive unwanted tracker alerts

Apple AirTags

(Image credit: Apple)

Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have utilized them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.

For nearly a year, the OS could only seek out AirTags, but now, with the upgrade, Android phones can locate Bluetooth trackers from other third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll help ensure your privacy and safety.

You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information. 

5. Chipolo and Pebblebee are launching new trackers for it soon

Chipolo's new trackers

(Image credit: Chipolo)

Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.

On May 27, we'll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You'll be able to find the location of whatever item they're attached to via the Find My Device app. The pair will also sport speakers that ring out a loud noise to let you know where they are. What's more, Chipolo's products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.

Pebblebee is doing something similar with its Tag, Card, and Clip trackers. They're small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. All three are available for pre-order right now, although no shipping date has been given. 

6. It’ll work nicely with your Nest products

Google Nest Wifi

(Image credit: Google )

For smart home users, you'll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will cycle through images of all the Nest hardware in your home as the network attempts to find the missing item. Be aware that the tech won't give you an exact location.

A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn't show up.

Below the message will be a set of tools to help you locate it. You can either play a sound from the tracker’s speakers, share the device, or mark it as lost.

7. Headphones are invited to the tracking party too

Someone wearing the Sony WH-1000XM5 headphones against a green backdrop

(Image credit: Gerald Lynch/TechRadar/Future)

Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of headphones. The list of supported hardware is not large, as it can only locate three specific models: the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could arrive at a later time.

That's quite a lot to take in, but it's all important information to know – and it all works together to keep you safe. 

Be sure to check out TechRadar's list of the best Android phones for 2024.


Microsoft’s lock screen cards for Windows 11 are about to arrive – and a much-needed addition will follow

Microsoft recently tested a new feature for the lock screen in the form of info cards in both Windows 11 and Windows 10, which are now imminent, but this functionality is missing an important piece of the puzzle – something the software giant is going to remedy, thankfully.

The widget-style lock screen cards give you a snapshot of the current weather, or stocks (finance), local traffic, and so on, but the problem was you could either switch them all on, or all off – with no fine-tuned control.

So, if you wanted weather and sports scores, but not traffic updates and stocks, you were stuck with the latter two.

According to a report from Windows Latest, Microsoft has told the tech site that you'll be able to customize which cards appear on the lock screen in the future. However, no timeframe was mentioned for when this might happen.

Analysis: Get on with it (please)

Do this already, Microsoft. We've made the observation before that it seemed pretty odd to introduce lock screen cards as an all-or-nothing affair, because many folks won't want a load of them – and will regard that as clutter – but might be happy with one or two tucked away on the lock screen.

So, this was an obvious – and very necessary – move in our book. And hopefully it won't take long to usher in this change, as we can't see that it'll be all that complex to implement a choice here. Maybe the feature will arrive with the 24H2 update, at the latest, we'd hope.

In the meantime, all Windows 11 users will get the new lock screen cards later today in April's cumulative update – or they almost certainly will, unless Microsoft delays the rollout due to problems encountered with the March preview update (we haven't heard about any issues in testing). The same is presumably true for the Windows 10 update also coming today.

Speaking of the Windows 10 incarnation of these lock screen cards, something else we'd like to see is Microsoft working on the layout and presentation, so it looks as neat as the Windows 11 design. The reality may be that Windows 10 is no longer that high a priority, though (when you consider that, for a time, Microsoft froze all feature development on the older OS before having a rethink).


Meta Quest Pro 2: everything we know about the Apple Vision Pro competitor

Meta's Quest 3 may be less than a year old, but the company appears to be working on a few follow-ups. Leaks and rumors point to the existence of a Meta Quest 3 Lite – a cheaper version of the Meta Quest 3 – and a Meta Quest Pro 2 – a follow-up to the high-end Meta Quest Pro.

The original Meta Quest Pro doesn't seem to have been all that popular – evidenced by the fact that its price was permanently cut by a third less than six months after launch – but the Apple Vision Pro seems to have fueled a renaissance of high-end standalone VR hardware. This means we're getting a Samsung XR headset (developed in partnership with Google), and most likely a Meta Quest Pro 2 of some kind.

While one leak suggested the Meta Quest Pro 2 had been delayed – after Meta cancelled a project that was reportedly set to be the next Quest Pro – there's more than a little evidence that the device is on the way. Here's all of that evidence, as well as everything you need to know about the Meta Quest Pro 2 – including some of our insight, and the features we'd most like to see it get.

Meta Quest Pro 2: Price

Because the Meta Quest Pro 2 hasn't been announced, we don't know exactly how much it'll cost, but we expect it'll be at least as pricey as the original, which launched at $1,499.99 / £1,499.99 / AU$2,449.99.

The Meta Quest Pro being worn by a person in an active stance

(Image credit: Meta)

The Meta Quest Pro was permanently discounted to $999.99 / £999.99 / AU$1,729.99 five months after it launched, but we suspect this was Meta attempting to give the Quest Pro a much-needed sales boost rather than an indication of the headset's actual cost – so we expect the Quest Pro 2 will cost considerably more than this.

What's more, given that the device is expected to be more of an Apple Vision Pro competitor – which costs $3,500 (around £2,800 / AU$5,350) – with powerful specs, LG-made OLED panels, and possibly next-gen mixed reality capabilities, there's a good chance it could cost more than its predecessor.

As such, we're expecting it to come in nearer $2,000 / £2,000 / AU$3,000. Over time, as more leaks about the hardware emerge, we should get a better idea of its price – though, as always, we won't know for certain how much it'll cost until Meta says something officially.

Meta Quest Pro 2: Release date

The Meta Quest 3 on a notebook surrounded by pens and school supplies on a desk

The Meta Quest 3 (Image credit: Meta)

Meta hasn't announced the Quest Pro 2 yet – or even teased it. Given its usual release schedule, this means the earliest we're likely to see a new Pro model is October 2025; that's because Meta would tease the device at this year's Meta Connect in September/October 2024, and then launch it at the following year's event, as it did with the original Quest Pro and the Quest 3.

But there are a few reasons we could see it launch sooner or later than that. On the later side of things, there's the rumored Meta Quest 3 Lite – a cheaper version of the Meta Quest 3. Meta may want to push this affordable model out the gate sooner rather than later, meaning it might take a release slot that could otherwise have gone to the Quest Pro 2.

Alternatively, Meta may want to push a high-end model out ASAP so as not to let the Apple Vision Pro and rivals like the Samsung XR headset corner the high-end VR market. If that's the case, it could forgo its usual tease-then-release strategy and simply release the headset later this year – or tease it at Connect 2024 and launch it in early 2025, rather than in late 2025 as it usually would.

This speculation all assumes a Meta Quest Pro 2 is even on the way – though Meta has strongly suggested that another Pro model will come in the future; we'll just have to wait and see what's up its sleeve.

Meta Quest Pro 2: Specs

Based on LG and Meta's announcement of their official partnership to bring OLED displays to Meta VR headsets, it's likely that the Meta Quest Pro 2 will feature OLED screens. While these kinds of displays are typically pricey, the Quest Pro 2 is expected to be a high-end model (with a high price tag), and boasting OLED panels would put it on par with other high-end XR products like the Apple Vision Pro.

Key Snapdragon XR2 Plus Gen 2 specs, including support for 4.3K displays, 8x better AI performance, and 2.5x better GPU performance

(Image credit: Qualcomm)

It also seems likely the Meta Quest Pro 2 will boast a Snapdragon XR2 Plus Gen 2 chipset – the successor to the Gen 1 used by the Quest Pro. If it launches later than we expect, it could instead use a currently unannounced Gen 3 model.

While rumors haven't teased any other specs, we also assume the device will feature full-color mixed reality like Meta's Quest 3 and Quest Pro – though ideally the passthrough will be higher quality than on either of those devices (or at least better than the Quest Pro's rather poor mixed reality).

Beyond this, we predict the device will have specs at least as good as its predecessor's – meaning we expect the base Quest Pro 2 to come with 12GB of RAM, 256GB of storage, and a two-hour minimum battery life.

Meta Quest Pro 2: What we want to see

We’ve already highlighted in depth what we want to see from the Meta Quest Pro 2 – namely it should ditch eye-tracking and replace it with four different features. But we’ll recap some of those points here, and make a few new ones of things we want to see from the Quest Pro 2.

Vastly better mixed-reality passthrough, more entertainment apps, and 4K OLED displays would go a long way toward making the Meta Quest Pro 2 feel like a true Vision Pro competitor – so we hope to see them all on the Quest Pro 2. 

Eye-tracking could also help, but Meta really needs to prove it's worthwhile. So far, every instance of the tech has felt like an expensive tech demo for a feature that's neat, but not all that useful.

The Meta Quest Pro being worn by Hamish Hector, his cheeks are puffed up

What we want from the next Quest Pro (Image credit: Meta)

Ignoring specs and design for a second, our most important hope is that the Quest Pro 2 isn't as prohibitively expensive as the Apple Vision Pro. While the Vision Pro is great, $3,500 is too much even for a high-end VR headset when you consider the realities of how – and how often – the device will be used. Ideally, the Quest Pro 2 would cost at most $2,000 / £2,000 / AU$3,000, though until we know more about its specs we won't know how realistic that request is.

Lastly, we hope the device is light, perhaps with a removable battery pack like the one seen in the HTC Vive XR Elite. This would allow someone who wants to work at their desk or sit back and watch a film in VR to wear a much lighter device for an extended period (provided they're near a power source). Alternatively, they could plug the battery in and enjoy a typical standalone VR experience – to us, this would be a win-win.
