Windows 11’s AI-powered feature to make games run more smoothly is for Copilot+ PCs only, we’re afraid

Windows 11 is getting a trick to help the best PC games run more smoothly, although this previously rumored feature comes with a catch – namely that it will only be available to those who have a Copilot+ PC with a Snapdragon X Elite processor.

The feature in question, which was leaked in preview builds of Windows 11 earlier this year, is called Auto Super Resolution (or Auto SR), and the idea is that it automatically upscales the resolution of a game (or indeed app) in real-time.

An upscaling feature like this effectively means the game – and it seems gaming is very much the focus (we’ll come back to that) – is run at a certain (lower) resolution, with the image upscaled to a higher resolution.

This means that something running at, say, 720p, can be upscaled to 1080p or Full HD resolution, and look nearly as good as native 1080p – but it can be rendered faster (because it’s really still 720p). If this sounds familiar, it’s because there are similar solutions already out there, such as Nvidia DLSS, AMD FSR, and Intel XeSS to name a few.
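
The performance win is straightforward pixel arithmetic. As a quick back-of-the-envelope sketch (in Python, purely for illustration), 1080p contains 2.25 times as many pixels as 720p, which is roughly the rendering work the GPU saves per frame:

    def pixels(width: int, height: int) -> int:
        return width * height

    p720 = pixels(1280, 720)     # 921,600 pixels actually rendered per frame
    p1080 = pixels(1920, 1080)   # 2,073,600 pixels in the output image
    print(p1080 / p720)          # 2.25 – the upscaler fills in the difference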

As outlined by Microsoft in its fresh details about Copilot+ PCs (highlighted by VideoCardz), the catch is that Auto SR is exclusive to these laptops. In fact, you need to be running the Qualcomm Snapdragon X Elite, so the lesser Plus version of this CPU is ruled out (for now anyway).

The other caveat to bear in mind here is that to begin with this is just for a “curated set of games,” so it’ll have a rather limited scope initially.


Analysis: The start of a long upscaling journey

When it was just a leak, there was some debate about whether Auto SR might be a feature for upscaling anything – games or apps – but Microsoft specifically talks about PC games here, and so that’s the intended use in the main. We also expected it to be some kind of all-encompassing tech in terms of game support, and that clearly isn’t the case.

Eventually, though, we’d think Auto SR will have a much broader rollout, and maybe that’ll happen before too long. After all, AI is being pushed heavily as helping gamers too – as a kind of gaming Copilot – so this is another string to that bow, and an important one we can imagine Microsoft working hard on.

Of course, the real fly in the ointment is the requirement for a Snapdragon X Elite chip, which rules out the vast majority of PCs. This is likely due to the demanding nature of the task, and the feature being built around the presence of a beefy NPU (Neural Processing Unit) to accelerate the AI workloads involved. Only Qualcomm’s new Snapdragon X has a peppy enough NPU to deal with this, or so we can assume – but this won’t be the case for long.

Newer laptop chips from Intel, such as Lunar Lake (and Arrow Lake), and AMD’s Strix Point are inbound for later this year, and will deliver the goods in terms of the NPU and qualifying as the engine for a Copilot+ PC – and therefore being able to run Auto SR.

Naturally, we still need to see how well Microsoft implements this feature, and how upscaling games leveraging a powerful NPU works out. But as mentioned, the company has so much riding on AI, and the gaming side of the equation appears to be important enough, that we’d expect Microsoft will be trying its best to impress.


Microsoft reveals AI-powered ‘Recall’ feature to transform Windows 11’s searchability, while confirming hardware requirements

Microsoft’s annual developer conference, Build, has only just kicked off but we’ve already learned lots of exciting things, including the company showing off a new AI-powered ‘Recall’ feature to be integrated into Copilot+ PCs with Windows 11.

Copilot+ is a new software platform that was introduced yesterday, aiming to infuse Windows 11 with new AI features, ushering in a raft of new devices with more advanced AI functionality.

You’ve doubtless already heard of AI PCs, but the new breed of portables, which are powered by Qualcomm’s Snapdragon X chips with an integrated Neural Processing Unit (NPU), were officially debuted yesterday. Windows 11 Recall will be exclusive to PCs that have Snapdragon X processors as the current generation of Intel and AMD mobile CPUs don’t have a powerful enough NPU to deal with the feature. (It needs an NPU capable of 40 TOPS, or trillions of operations per second). 

This isn’t the only hardware requirement that the Recall feature will necessitate, with the full spec requirements being as follows:

  • Snapdragon X Elite or X Plus processor
  • NPU capable of 40 TOPS
  • 256GB storage
  • 16GB RAM
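
To put those numbers in one place, here’s a hypothetical Python eligibility check – the function and field names are our own illustration, not anything Microsoft ships:

    # Hypothetical check against Microsoft's published Recall requirements.
    REQUIRED_NPU_TOPS = 40
    REQUIRED_STORAGE_GB = 256
    REQUIRED_RAM_GB = 16
    SUPPORTED_CPUS = {"Snapdragon X Elite", "Snapdragon X Plus"}

    def meets_recall_requirements(cpu: str, npu_tops: int,
                                  storage_gb: int, ram_gb: int) -> bool:
        return (cpu in SUPPORTED_CPUS
                and npu_tops >= REQUIRED_NPU_TOPS
                and storage_gb >= REQUIRED_STORAGE_GB
                and ram_gb >= REQUIRED_RAM_GB)

    print(meets_recall_requirements("Snapdragon X Elite", 45, 512, 16))  # True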

While these new Qualcomm chips are the only mobile silicon that can drive the Recall feature (and other AI capabilities in Copilot+ PCs) right now, future generations of Intel and AMD processors will be on board (Intel’s Lunar Lake for example, or AMD’s Strix Point chips).

Windows Latest notes that the above hardware requirements are not only needed to ensure a quality experience – with enough performance to drive snappy responses with these AI features such as Recall – but also for data security reasons.

Microsoft unveils new Surface Laptop and Surface Pro on a stage

(Image credit: Future / John Loeffler)

So, how does Recall work? 

In the past we’ve seen reports of a rumored feature, often referred to as ‘AI Explorer,’ that would enable you to search through your past activity on your PC. It looks like this has manifested as the Recall feature, and it’ll be privy to all the activity on your PC including what apps you use, how you use those apps, and what you do in them (for example, conversations in WhatsApp). Recall will record all of this activity going forward, saving snapshots of it in your PC’s local storage. 

Additionally, the Settings app will have a dedicated update history section for Recall, and a toggle for new Privacy and Security settings. You’ll be able to update Recall for Windows 11 – and other AI features besides – using Windows Update.

If you’re feeling wary about allowing Recall to access everything, and concerned about having control over what it records and stores, Windows Latest reports that you’ll be able to delete snapshots manually from Recall’s storage, and set Recall to exclude certain apps and websites from its recording activity. In your device’s Settings, you’ll also be able to adjust the time ranges over which Recall stores snapshots, or indeed pause Recall altogether by clicking on its icon in your Taskbar. 

In practice, Recall is designed to help you go back in time and find elements of your past activity. So for example, if you previously had a conversation with a colleague on a certain topic, but couldn’t remember the details, you could ask Recall to go and find it within Windows 11. Recall would then comb over your past conversations with the colleague, searching across all of your apps, open tabs within apps, and more besides.
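
Conceptually, that makes Recall a timestamped snapshot store with search layered on top. Here’s a toy Python sketch of the idea – local-only storage and naive keyword matching, purely our own illustration rather than Microsoft’s implementation, which leans on the NPU and natural-language search:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Snapshot:
        timestamp: float
        app: str
        text: str  # e.g. a captured conversation or on-screen text

    @dataclass
    class RecallStore:
        snapshots: list[Snapshot] = field(default_factory=list)

        def record(self, app: str, text: str) -> None:
            # Snapshots are saved to local storage only; nothing leaves the device.
            self.snapshots.append(Snapshot(time.time(), app, text))

        def search(self, query: str) -> list[Snapshot]:
            # Naive keyword match; the real feature understands natural language.
            q = query.lower()
            return [s for s in self.snapshots if q in s.text.lower()]

    store = RecallStore()
    store.record("WhatsApp", "Chat with a colleague about the Q3 budget")
    print(store.search("budget")[0].app)  # WhatsApp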

Recall will also be able to help you find files you’ve lost, and to search your browser history, and so forth. You’ll be able to ask for Recall’s assistance using natural language, the way we converse with one another in real life, instead of having to use precise commands. 

All of this will run natively on your PC and won’t have to tap the cloud for computing power, meaning your data will be more secure, as everything can be kept locally, and nothing is sent to an external data center. It’s all happening right there on your Copilot+ PC with the help of that powerful NPU.

Microsoft presenting Surface Laptop and Surface Pro devices.

(Image credit: Microsoft)

When can you try Recall for yourself? 

The hubbub and excitement around Recall is just one of many things revealed at Microsoft Build 2024 already, but you’ll have to wait until the Windows 11 24H2 update to try the feature (and don’t forget, you’ll need a PC that meets the hardware requirements). The 24H2 update is expected to arrive in September or October, or thereabouts.

If Recall and other AI features deliver on all that’s promised (or even most of it), we think many people will be impressed, and it could convince them to try to adapt to the new way of computing that Microsoft is trying to usher in.

Right now, Copilot isn’t regarded as particularly impressive, but in some ways that’s because the hardware needed to facilitate Microsoft’s plans for its AI assistant hasn’t been available – until now. We’re excited to get our hands on all these new AI features, as we’re the kind of people who flood our PCs with media – and we’d imagine Recall could be very handy for us indeed.


Windows 11 is getting a new look according to a leak – but it might be exclusive to AI-powered PCs

It looks like Windows 11 could get a new official default wallpaper, according to leaked images that have emerged just before Microsoft Build 2024, the company’s annual conference for developers – where it’s expected that we’ll see some big debuts showing off the fruits of collaboration between Microsoft, Qualcomm, and other partners. 

Microsoft has been pretty tight-lipped about what it plans to show off, and hasn’t even officially announced the new wallpaper, although it’s already available for download in high resolution. 

This was uncovered by German tech blog WinFuture, whose sources leaked information about Samsung Galaxy Book4 Edge Pro laptops with ARM chips. The leak included multiple photographs of the laptop from a variety of angles, as well as the new wallpaper, which joins the series of Windows 11 signature Bloom wallpapers (Neowin has a collection of others you can view and download). Neowin speculates that the new AI-focused PCs will ship with the new background. 


Accompanying the leak, X user @cadenzza_ shared a high-resolution version of the brand new Bloom wallpaper variation (apparently shared in a private Windows Insider Telegram group originally) that you can download by saving the image below or from @cadenzza_’s post, and set on your device.

The full image of the new Windows 11 colorful Bloom background wallpaper

(Image credit: Microsoft/X(Twitter) user @cadenzza_)

Microsoft's lips are sealed and it's got our attention

It’s interesting how closely Microsoft has been guarding what it’s about to share, with this static wallpaper being one of the few things we can confirm at all. Neowin has proposed that Microsoft might be crafting new desktop background effects for Windows 11, perhaps making use of the next-generation devices’ AI capabilities to create effects that simulate depth, and possibly make the background react to how you move your cursor.

We’ll have to see if this is the case at some point in the next few days as Microsoft Build goes on. We expect the announcement of consumer versions of the Surface Pro 10 and Surface Laptop 6 laptops with Qualcomm Snapdragon X processors, and whatever AI innovations Microsoft wants to bring to our attention. Another new hardware introduction we expect is a Copilot keyboard button, which has been discussed for a while now. Other Copilot-related news could have to do with OpenAI’s recent debut of GPT-4o, and possibly a souped-up Windows Copilot AI assistant.


Google teases new AI-powered Google Lens trick in feisty ChatGPT counter-punch

It's another big week in artificial intelligence in a year that's been full of them, and Google has teased a new AI feature coming to mobile devices just hours ahead of its Google I/O 2024 event – where we're expecting some major announcements.

A social media post from Google shows someone asking their phone about what's being shown through the camera. In this case, it's people setting up the Google I/O stage, which the phone correctly identifies.

User and phone then go on to have a real-time chat about Google I/O 2024, complete with a transcription of the conversation on screen. We don't get any more information than that, but it's clearly teasing some of the upcoming reveals.

As far as we can tell, it looks like a mix of existing Google Lens and Google Gemini technologies, but with everything running instantly. Lens and Gemini can already analyze images, but studying real-time video feeds would be something new.

The AI people


It's all very reminiscent of the multimodal features – mixing audio, text, and images – that OpenAI showed off with its own ChatGPT bot yesterday. ChatGPT now has a new AI model called GPT-4 Omni (GPT-4o), which makes all of this natural interaction even easier.

We've also seen the same kind of technology demoed on the Rabbit R1 AI device. The idea is that these AIs become less like boxes that you type text into, and more like synthetic people who can see, recognize, and talk.

Based on this teaser, it looks likely that this is the way the Google Gemini AI model and bot is going. While we can't identify the smartphone in the video, it may be that these new features come to Pixel phones (like the new Google Pixel 8a) first.

All will be revealed later today, May 14: everything gets underway at 10am PT / 1pm ET / 6pm BST, which is May 15 at 3am AEST. We've put together a guide to how to watch Google I/O 2024 online, and we'll be reporting live from the event too.


Apple is forging a path towards more ethical generative AI – something sorely needed in today’s AI-powered world

Copyright is something of a minefield right now when it comes to AI, and there’s a new report claiming that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only ones to have been both legally and ethically trained. It’s claimed that Apple is trying to uphold privacy and legality standards by adopting innovative training methods. 

Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) using copyrighted works, typically not disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works. 

The current justification some of these companies give for training their LLMs so widely on copyrighted material is that, not dissimilar to humans, these models need a substantial amount of information (called training data) to learn and generate coherent and convincing responses – and as far as these companies are concerned, copyrighted materials are fair game.

Many critics of generative AI consider it copyright infringement if tech companies use works in the training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn’t put tech companies off doing exactly that, and it’s assumed to be the case for most AI tools, garnering a growing pool of resentment towards the companies in the generative AI space.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

There has even been a growing number of legal challenges mounted in these tech companies’ direction. OpenAI and Microsoft were sued by the New York Times for copyright infringement back in December 2023, with the publisher accusing the two companies of training their LLMs on millions of New York Times articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. In July 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, Alphabet, and others, calling on the leaders of the tech industry to protect writers by properly crediting and compensating authors for their works when using them to train generative AI models.

In April of this year, The Register reported that Amazon was hit with a lawsuit by an ex-employee alleging she faced mistreatment, discrimination, and harassment – and in the process, she testified about her experience with issues of copyright infringement. The employee alleges that she was told to deliberately ignore and violate copyright law to improve Amazon’s products and make them more competitive, and that her supervisor told her that “everyone else is doing it” when it came to copyright violations. Apple Insider echoes this claim, stating that this seems to be an accepted industry standard.

As we’ve seen with many other novel technologies, the legislation and ethical frameworks always arrive after an initial delay, but it looks like this is becoming a more problematic aspect of generative AI models that the companies responsible for them will have to respond to.

A man editing a photo on a Mac Mini

(Image credit: Apple)

The Apple approach to ethical AI training (that we know of so far)

It looks like at least one major tech player might be trying to take a more careful and considered route to avoid as many legal (and moral!) challenges as possible – and somewhat surprisingly, it’s Apple. According to Apple Insider, Apple has been diligently pursuing licenses for major news publications’ works when looking for AI training material. Back in December, Apple sought to license the archives of several major publishers to use as training material for its own LLM, known internally as Ajax.

It’s speculated that Ajax will handle basic on-device functionality for future Apple products, with Apple potentially licensing software like Google’s Gemini for more advanced features, such as those requiring an internet connection. Apple Insider writes that this would allow Apple to avoid certain copyright infringement liabilities, as Apple wouldn’t be responsible for copyright infringement by, say, Google Gemini.

A paper published in March detailed how Apple intends to train its in-house LLM: on a carefully chosen selection of images, image-text pairs, and text-based input. In its methods, Apple prioritized better image captioning and multi-step reasoning while also paying attention to preserving privacy. The last of these is made all the more achievable by the Ajax LLM being entirely on-device, and therefore not requiring an internet connection. There is a trade-off, though: this also means that Ajax won’t be able to check for copyrighted content and plagiarism itself, as it won’t be able to connect to the online databases that store copyrighted material.

There is one other caveat that Apple Insider reveals, per sources familiar with Apple’s AI testing environments: there don’t currently seem to be many, if any, restrictions on users utilizing copyrighted material themselves as the input for on-device test environments. It’s also worth noting that Apple isn’t technically the only company taking a rights-first approach: AI art tool Adobe Firefly is also claimed to be completely copyright-compliant, so hopefully more AI startups will be wise enough to follow Apple and Adobe’s lead.

I personally welcome this approach from Apple, as I think human creativity is one of the most incredible capabilities we have, and it should be rewarded and celebrated – not fed to an AI. We’ll have to wait to learn more about what Apple’s rules regarding copyright and AI training look like, but I agree with Apple Insider’s assessment that this definitely sounds like an improvement – especially since some AIs have been documented regurgitating copyrighted material word-for-word. We can look forward to learning more about Apple’s generative AI efforts very soon, as they’re expected to be a key driver of its developer-focused software conference, WWDC 2024.


Google might have a new AI-powered password-generating trick up its sleeve – but can Gemini keep your secrets safe?

If you’ve been using Google Chrome for the past few years, you may have noticed that whenever you’ve had to think up a new password, or change your existing one, for a site or app, a little “Suggest strong password” dialog box would pop up – and it looks like it could soon offer AI-powered password suggestions. 

A keen-eyed software development observer has spotted that Google might be gearing up to infuse this feature with the capabilities of Gemini, its latest large language model (LLM).

The discovery was made by @Leopeva64 on X. They found references to Gemini in patches on Gerrit, a web-based code review system developed by Google and used in the development of Google products like Android.

These findings appear to be backed up by screenshots that show glimpses of how Gemini could be incorporated into Chrome to give you even better password suggestions when you’re looking to create a new password or change from one you’ve previously set.


Gemini guesswork

One line of code that caught my attention states that “deleting all passwords will turn this feature off.” I wonder whether this does what it says on the tin – shutting the feature off if a user deletes all of their passwords – or whether it only applies to passwords generated by the “Suggest strong password” feature.

The final screenshot that @Leopeva64 provides is also intriguing as it seems to show the prompt that Google engineers have included to get Gemini to generate a suitable password. 

This is a really interesting move by Google, and it could play out well for Chrome users who use the strong password suggestion feature. Still, I’m a little wary of the potential risks associated with this method of password generation, similar to the risks you find with many such methods. LLMs are susceptible to information leaks caused by prompt injection attacks – hacks designed to trick AI models into giving out information that their creators, or other individuals and organizations, might want to keep private, like someone’s login information.

A woman working on a laptop in a shared working space sitting next to a man working at a computer

(Image credit: Shutterstock/Gorodenkoff)

An important security consideration 

Now, that sounds scary, but as far as we know, this hasn’t happened yet with any widely deployed LLM, including Gemini. It’s a theoretical fear, and there are standard password security practices that tech organizations like Google employ to prevent data breaches.

These include encryption technologies, which encode data so that only authorized parties can access it at multiple stages of the password generation and storage process, and hashing, a one-way data conversion that’s intended to make reverse-engineering the original data hard to do.
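
For context, here’s what those two standard practices look like outside of any AI: a minimal Python sketch (our own illustration, not Google’s implementation) that generates a strong random password locally with the secrets module, then stores only a salted, one-way PBKDF2 hash of it:

    import hashlib
    import os
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        # Cryptographically secure randomness – no LLM (or network) involved.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # One-way conversion: store the salt and digest, never the password itself.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    password = generate_password()
    salt, digest = hash_password(password)
    print(password, digest.hex())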

You could also use another LLM like ChatGPT to generate a strong password manually, although I feel like Google knows more about how to do this safely, and I’d only advise experimenting with that if you’re a security-conscious software professional.

It’s not a bad idea as a proposition, and it’s a use of AI that could actually be very beneficial for users, but Google will have to put an equal (if not greater) amount of effort into making sure Gemini is bolted down and as impenetrable to outside attacks as can be. If Google implements this and it somehow causes a huge data breach, that will likely damage people’s trust in LLMs, and could impact the reputations of the tech companies – including Google – that are championing them.


Meta AR glasses: everything we know about the AI-powered AR smart glasses

After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech).

While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package.

We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it.

Meta AR glasses: Price

We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either.

Current smart glasses, like the Ray-Ban Meta Smart Glasses or the Xreal Air 2 AR smart glasses, will set you back between $300 and $500 / £300 and £500 / AU$450 and AU$800; Meta’s teased specs, however, sound more advanced than what we have currently.

Lance Ulanoff showing off Google Glass

Meta’s glasses could cost as much as Google Glass (Image credit: Future)

As such, the Meta AR glasses might cost nearer $1,500 (around £1,200 / AU$2,300) – which is what the Google Glass smart glasses launched at.

A higher price seems more likely given the AR glasses’ novelty, and the fact Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices.

We’ll have to wait and see what gets leaked and officially revealed in the future.

Meta AR glasses: Release date

Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027.

That’s according to a leaked Meta internal roadmap shared by The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027.

RayBan Meta Smart Glasses close up with the camera flashing

(Image credit: Meta)

In February 2024, Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company takes the opportunity to show off stuff coming further down the pipeline too. So its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kind of teasers.

Obviously, leaks should be taken with a pinch of salt. Meta could have brought the release of its specs forward, or pushed it back, depending on a multitude of technological factors – we won’t know until Meta officially announces more details. But the fact that it has teased the specs suggests their release is at least a matter of when, not if.

Meta AR glasses: Specs and features

We haven't heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships.

Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect they’ll bring OLED screens to its AR glasses too. OLED displays appear in other AR smart glasses so it would make sense if Meta followed suit.

Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027 it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The AR glasses could let you bust ghosts wherever you go (Image credit: Meta)

As for features, Meta’s already teased the two standouts: AR and AI abilities.

What this means in actual terms is yet to be seen but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play.

AI-wise, Meta is giving us a sneak peek of what’s coming via its current smart glasses. That is, you can speak to Meta AI to ask it a variety of questions and for advice, just as you can with other generative AIs, but in a more conversational way, as you’re using your voice.

It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. This allows the specs to snap a picture of what’s in front of you to inform your question, allowing you to ask it to translate a sign you can see, for a recipe using ingredients in your fridge, or what the name of a plant is so you can find out how best to care for it.

The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027 when the AR specs are expected to arrive.

Meta AR glasses: What we want to see

A slick Ray-Ban-like design 

RayBan Meta Smart Glasses

The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta)

While Meta’s smart specs aren't amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight, super comfy to wear all day, and the charging case is not only practical, it's gorgeous.

While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore.

If the partnership does end, we'd like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point.

Swappable lenses 

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

We want to change our lenses Meta! (Image credit: Meta)

While we will rave about Meta’s smart glasses design, we’ll admit there’s one flaw we hope future models (like the AR glasses) improve on: they need easily swappable lenses.

While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather.

As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses) they need to be a gadget we can wear all year round, not just when the sun's out.

Speakers you can (quietly) rave to

JBL Soundgear Sense

These open ear headphones are amazing, Meta take notes (Image credit: Future)

Hardware-wise, the main upgrade we want to see in Meta’s AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they can leak a fair amount of noise, the bass is practically nonexistent, and the overall sonic performance is put to shame by even basic over-ear headphones.

We know it can be a struggle to get the balance right with open-ear designs. But when we’ve been spoiled by open-ear options like the JBL Soundgear Sense – which have an astounding ability to deliver great sound while letting you hear the real world clearly (we often forget we’re wearing them) – we’ve come to expect a lot, and are disappointed when gadgets don’t deliver.

The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities.


Samsung Galaxy Ring could help cook up AI-powered meal plans to boost your diet

As we get closer to the full launch of the Samsung Galaxy Ring, we're slowly learning more about its many talents – and some fresh rumors suggest these could include planning meals to improve your diet.

According to the Korean site Chosun Biz (via GSMArena), Samsung plans to integrate the Galaxy Ring with its new Samsung Food app, which launched in August 2023.

Samsung calls this app an “AI-powered food and recipe platform”, as it can whip up tailored meal plans and even give you step-by-step guides to making specific dishes. The exact integration with the Galaxy Ring isn't clear, but according to the Korean site, the wearable will help make dietary suggestions based on your calorie consumption and body mass index (BMI).
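
BMI itself is simple arithmetic – weight in kilograms divided by height in meters squared. The Python sketch below shows that formula, plus a purely hypothetical rule of thumb for nudging a calorie target; Samsung hasn’t detailed how its app actually combines the two:

    def bmi(weight_kg: float, height_m: float) -> float:
        # Body mass index: weight (kg) divided by height (m) squared.
        return weight_kg / height_m ** 2

    def daily_calorie_target(bmi_value: float, maintenance_kcal: float) -> float:
        # Hypothetical rule: trim intake if BMI is above the 'healthy' 18.5-24.9 band.
        return maintenance_kcal - 300 if bmi_value >= 25 else maintenance_kcal

    print(round(bmi(70, 1.75), 1))  # 22.9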

The ultimate aim is apparently to integrate this system with smart appliances (made by Samsung, of course) like refrigerators and ovens. While they aren’t yet widely available, appliances like the Samsung Bespoke 4-Door Flex Refrigerator and Bespoke AI Oven include cameras, and can suggest or cook recipes based on your dietary needs.

It sounds like the Galaxy Ring, and presumably smartwatches like the incoming Galaxy Watch 7 series, are the missing links in a system that can monitor your health and feed that info into the Samsung Food app, which you can download now for Android and iOS.

The Ring's role in this process will presumably be more limited than smartwatches, whose screens can help you log meals and more. But the rumors hint at how big Samsung's ambitions are for its long-awaited ring, which will be a strong new challenger in our best smart rings guide when it lands (most likely in July).

Hungry for data

A phone on a grey background showing the Samsung Food app

(Image credit: Samsung)

During our early hands-on with the Galaxy Ring, it was clear that Samsung is mostly focusing on its sleep-tracking potential. It goes beyond Samsung's smartwatches here, offering unique insights including night movement, resting heart rate during sleep, and sleep latency (the time it takes to fall asleep).

But Samsung has also talked up the Galaxy Ring's broader health potential more recently. It'll apparently be able to generate a My Vitality Score in Samsung's Health app (by crunching together data like your activity and heart rate) and eventually integrate with appliances like smart fridges.

This means it's no surprise to hear that the Galaxy Ring could also play nice with the Samsung Food app. That said, the ring's hardware limitations mean this will likely be a minor feature initially, as its tracking is more focused on sleep and exercise. 

We're actually more excited about the Ring's potential to control our smart home than integrate with appliances like smart ovens, but more features are never a bad thing – as long as you're happy to give up significant amounts of health data to Samsung.


Meta’s Ray-Ban smart glasses are becoming AI-powered tour guides

While Meta’s most recognizable hardware is its Quest VR headsets, its smart glasses created in collaboration with Ray-Ban are proving to be popular thanks to their sleek design and unique AI tools – tools that are getting an upgrade to turn them into a wearable tourist guide.

In a post on Threads – Meta’s Twitter-like Instagram spinoff – Meta CTO Andrew Bosworth showed off a new Look and Ask feature that can recognize landmarks and tell you facts about them. Bosworth demonstrated it using examples from San Francisco such as the Golden Gate Bridge, the Painted Ladies, and Coit Tower.

As with other Look and Ask prompts, you give a command like “Look and tell me a cool fact about this bridge.” The Ray-Ban Meta Smart Glasses then use their in-built camera to scan the scene in front of you, and cross-reference the image with info in the Meta AI’s knowledge database (which includes access to the Bing search engine). 

The specs then respond with the cool fact you requested – in this case explaining that the Golden Gate Bridge (which it recognized in the photo it took) is painted “International Orange” so that it would be more visible in foggy conditions.
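
Under the hood, what Bosworth describes is a simple capture-and-query loop: take a photo, recognize what’s in it, then look the landmark up. The toy Python sketch below is purely our own illustration of that pipeline – the function names and the hard-coded ‘knowledge base’ are stand-ins, not Meta’s actual API:

    from dataclasses import dataclass

    @dataclass
    class Answer:
        landmark: str
        fact: str

    # Toy stand-in for the AI's knowledge base (Meta AI reportedly taps Bing for real lookups).
    FACTS = {"golden gate bridge": 'Painted "International Orange" for visibility in fog.'}

    def capture_photo() -> str:
        # Hypothetical: in the real glasses this is a frame from the in-built camera.
        return "golden_gate.jpg"

    def recognize_landmark(photo: str) -> str:
        # Hypothetical: stands in for the vision model's landmark recognition.
        return "golden gate bridge"

    def look_and_ask(command: str) -> Answer:
        photo = capture_photo()                   # 1. scan the scene in front of you
        landmark = recognize_landmark(photo)      # 2. identify what the camera sees
        return Answer(landmark, FACTS[landmark])  # 3. cross-reference the knowledge base

    print(look_and_ask("Look and tell me a cool fact about this bridge."))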

Screenshots from Threads showing the Meta Ray-Ban Smart Glasses being used to give the user information about San Francisco landmarks

(Image credit: Andrew Bosworth / Threads)

Bosworth added in a follow-up message that other improvements are being rolled out, including new voice commands so you can share your latest Meta AI interaction on WhatsApp and Messenger. 

Down the line, Bosworth says you’ll also be able to change the speed of Meta AI readouts in the voice settings menu to have them go faster or slower.

Still not for everyone 

One huge caveat is that – much like the glasses’ other Look and Ask AI features – this new landmark recognition feature is still only in beta. As such, it might not always be the most accurate – so take its tourist guidance with a pinch of salt.

Orange RayBan Meta Smart Glasses

(Image credit: Meta)

The good news is Meta has at least opened up its waitlist to join the beta so more of us can try these experimental features. Go to the official page, input your glasses serial number, and wait to get contacted – though this option is only available if you’re based in the US.

In his post Bosworth did say that the team is working to “make this available to more people,” but neither he nor Meta have given a precise timeline of when the impressive AI features will be more widely available.


Oppo’s new AI-powered AR smart glasses give us a glimpse of the next tech revolution


  • Oppo has shown off its Air Glass 3 AR glasses at MWC 2024
  • They’re powered by its AndesGPT AI model and can answer questions
  • They’re just a prototype, but the tech might not be far from launching

While there’s a slight weirdness to the Meta Ray-Ban Smart Glasses – they are a wearable camera, after all – the onboard AI is pretty neat, even if some of its best features are still in beta. So it’s unsurprising that other companies are looking to launch their own AI-powered specs, with Oppo being the latest in unveiling its new Air Glass 3 at MWC 2024.

In a demo video, Oppo shows how the specs have seemingly revolutionized someone's working day. When they boot up, the Air Glass 3's 1,000-nit displays show the user a breakdown of their schedule, and while making a coffee ahead of a meeting they get a message saying that it's started early.

While in the meeting, the specs pick up on a question that’s been asked, and Oppo’s AndesGPT AI model (which runs on a connected smartphone) is able to provide some possible answers. Later, it uses the design details that have been discussed to create an image of a possible prototype, which the wearer then brings to life.

After a good day’s work they can kick back to some of their favorite tunes that play through the glasses’ in-built speakers. All of this is crammed into a 50g design. 

Now, the big caveat here is the Air Glass 3 AR glasses are just a prototype. What’s more, neither of the previous Air Glass models were released outside of China – so there’s a higher than likely chance the Air Glass 3 won’t be either.

But what Oppo is showing off isn’t far from being mimicked by its rivals, and a lot of it is pretty much possible in tech that you can go out and buy today – including those Meta Ray-Ban Smart Glasses.

The future is now

The Ray-Ban Meta Smart Glasses already have an AI that can answer questions like a voice-controlled ChatGPT.

They can also scan the environment around you using the camera to get context for questions – for example, “what meal can I make with these ingredients?” – via their 'Look and Ask' feature. These tools are currently in beta, but the tech is working and the AI features will hopefully be more widely available soon.

They can also alert you to texts and calls that you’re getting and play music, just like the Oppo Air Glass 3 concept.

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

The Ray-Ban Meta glasses ooze style and have neat AI tools (Image credit: Meta)

Then there’s the likes of the Xreal Air 2. While their AR display is a little more distracting than the screen found on the Oppo Air Glass 3, they are a consumer product that isn’t mind-blowingly expensive to buy – just $399 / £399 for the base model.

If you combine these two glasses then you’re already very close to Oppo’s concept; you’d just need to clean up the design a little, and probably splash out a little more as I expect lenses with built-in displays won’t come cheap.

The only thing I can’t see happening soon is the AI creating a working prototype product design for you. It might be able to provide some inspiration for a designer to work off, but reliably creating a fully functional model seems more than a little beyond existing AI image generation tools' capabilities.

While the Oppo Air Glass 3 certainly look like a promising glimpse of the future, we'll have to see what they're actually capable of if and when they launch outside China.
