Windows 11’s next big update is here – these are the top 5 features introduced with Moment 5

Windows 11 just received its latest major upgrade, Moment 5, which is part of the cumulative update for April that has just been released.

So, what are you getting with this update? We’ve picked out the five best features incoming with Moment 5 – which is formally known as patch KB5036893 – and after going over those, we’ll briefly discuss the other goodies you’ll get besides these highlights.

Voice Access shortcuts

(Image credit: Microsoft)

1. Acing accessibility – Voice Access and Narrator improvements

Microsoft has been consistently doing good work driving forward with accessibility features in Windows 11, and Moment 5 does well in this department. Voice Access is where a lot of the changes have happened, giving users the ability to use this feature across multiple displays. Using the mouse grid, it’s now possible to, for example, drag and drop a file from one monitor to another.

Another major introduction is the ability to create custom voice commands, so you can have a command to paste a set section of text into a document, for example. There’s a lot of stuff taking the finer points of Voice Access to another level, and some changes for Narrator, too, with the addition of a bunch of new natural voices for the screen reading tool (and more besides).

2. Snap Layouts powered up with AI

Not everyone uses Snap Layouts, but they’re actually a pretty nifty idea for multitasking across a range of apps on the desktop, allowing you to swiftly snap those windows into place in an arrangement that makes sense.

With Moment 5, Microsoft has brought in AI-driven suggestions for premade layouts, a handy move. If you don’t use Snap Layouts, now’s the time to give it a whirl.

Windows Photos App

(Image credit: Windows)

3. Photos app gets magic eraser

Windows 11’s default Photos app is being gifted a notable new AI-powered feature with this update, namely generative erase. This allows you to highlight an area that you want to remove in an image. 

Say there’s a photo bomber in the background of a snap – simply brush over them, and the AI will remove the person, then intelligently fill in the background to match the rest of the photo. Of course, AI tricks can be unpredictable at times, but this is a pretty handy feature to at least give a go – if you don’t like the end result, just undo the change.

4. Nearby Share is speedier and works better

If you’re not familiar with it, Nearby Share is a feature that allows you to wirelessly share files or website links with other nearby devices. With Moment 5, Microsoft has made it so Wi-Fi and Bluetooth – which the feature uses – are automatically turned on if you switch on Nearby Share, to ensure you don’t run into problems. Furthermore, files now transfer at faster speeds (when using public as well as private wireless networks).

Windows 11 laptop showing Copilot

(Image credit: Microsoft)

5. Copilot goodies

Not everyone is keen on Copilot, or uses the AI assistant, but those who do are in for a treat with Moment 5. Microsoft’s latest update introduces plug-ins for third-party services – a small collection to begin with, such as OpenTable, which can be used to get Copilot to make a dinner reservation for you.

Copilot’s library of commands pertaining to Windows 11 settings has also been expanded, as previously seen in testing. This includes commands relating to accessibility options, and various settings and device info options (and the ability for the AI to take out the desktop trash, too – also known as emptying the Recycle Bin).

Other new Moment 5 features

Microsoft has also changed Windows Share so that it now supports sharing via WhatsApp, and tweaked the Cast feature – which lets you cast your screen to another display, such as a TV or tablet – so it’s more discoverable when it might be sensible to use it.

Those who use the widgets board in Windows 11 will also be pleased to hear this is receiving some attention too, with users getting the ability to organize widgets on the panel into categories.

Finally, it’s worth noting that you can now use Copilot without being signed into a Microsoft account – but only 10 times. After that, you’ll have to sign in, but this at least gives those with a local account the chance to try out the AI.

Also, bear in mind that while those in Europe will get extra functionality that extends to stripping out Bing and Edge from Windows 11, among other bits and pieces, those in the US or other regions don’t get these options.

As ever, you can grab the latest cumulative update for Windows 11 – containing all these Moment 5 features – by checking for updates in Windows Update.

Via Bleeping Computer

You might also like…

TechRadar – All the latest technology news

Read More

How much will it cost to keep Windows 10 alive next year? You’ll have to wait to find out

Microsoft is keeping its cards close to its chest regarding how much consumers will need to pay if they want to keep Windows 10 support alive when it officially runs out in October 2025.

Windows Latest noticed that Microsoft penned a blog post detailing the options and costs for businesses looking to extend security updates into 2026 and potentially beyond.

That post has nothing to do with consumers, however – though everyday users of Windows 10 will also have the option to pay for extended security updates should they want to keep using the OS after October 2025.

Microsoft has clarified that point in an update to the post, stating that: “The details and pricing structure outlined in this post apply to commercial organizations only.”

So when will we find out about the cost for consumers? We don’t know is the short answer – you’ll have to wait. Microsoft wrote: “Details will be shared at a later date for consumers on our consumer end of support page.”

Note that even with paying for extended support, this is just security patches you’ll be getting, and Microsoft won’t be developing or applying any new features to Windows 10.

Analysis: Should you pay for extended Windows 10 support?

Windows 10 Logo on Laptop

(Image credit: Shutterstock – Wachiwit)

To be fair to Microsoft, we are still a year and a half away from support expiring for Windows 10, so it’s not exactly a surprise that pricing options aren’t worked out fully yet. Although if Microsoft has managed to count the relevant beans and do the math for business customers, hopefully consumers won’t be left in the dark for too much longer. It's a little frustrating to see pricing for some customers, and not for others.

As to the wider issue of whether you want to pay for extended support for Windows 10, well, there are some folks in the unhappy position of not being able to upgrade to Windows 11 due to the hardware requirements. If you’re in that boat, then it might be worth exploring the options available to make your PC compatible and then migrate to Windows 11 – depending on what that entails.

If it’s a matter of adding a TPM (trusted platform module), that wouldn't be very expensive compared to the ongoing cost of subscribing (on a monthly or perhaps yearly basis) to post-support security updates for Windows 10. You could even pay a computer repair shop to help with the upgrade, as that’ll likely still work out cheaper than a support subscription in the longer run.

On the other hand, if you'll likely need to upgrade much of your PC to be able to install Windows 11, that would be more challenging (both financially and practically). For example, you may have an older unsupported CPU, and replacing it would likely require a new motherboard and possibly new RAM as well. That being the case, staying on Windows 10 could make sense until you can afford a new Windows 11 PC – or indeed a Windows 12 device by that time, no doubt.

The other alternative is to shift away from Microsoft completely to one of the best Linux distros, which won’t cost you a penny – and you can always choose a distro that’s a fair bit like Windows in its interface. Although bear in mind that you’ll still face a lot of limitations using Linux rather than Windows.


Meta teases its next big hardware release: its first AR glasses, and we’re excited

Meta’s Reality Labs division – the team behind its VR hardware and software efforts – has turned 10 years old, and to celebrate the company has released a blog post outlining its decade-long history. However, while a trip down memory lane is fun, the most interesting part came right at the end, as Meta teased its next major new hardware release: its first-ever pair of AR glasses.

According to the blog post, these specs would merge the currently distinct product pathways Meta’s Reality Labs has developed – specifically, melding its AR and VR hardware (such as the Meta Quest 3) with the form factor and AI capabilities of its Ray-Ban Meta Smart Glasses to, as Meta puts it, “deliver the best of both worlds.”

Importantly for all you Quest fans out there, Meta adds that its AR glasses wouldn’t replace its mixed-reality headsets. Instead, it sees them being the smartphones to the headsets’ laptop/desktop computers – suggesting that the glasses will offer solid performance in a sleek form factor, but with less oomph than you’d get from a headset.

Before we get too excited, though, Meta hasn’t said when these AR specs will be released – and unfortunately they might still be a few years away.

When might we see Meta’s AR glasses?

A report from The Verge back in March 2023 shared an apparent Meta Reality Labs roadmap that suggested the company wanted to release a pair of smart glasses with a display in 2025, followed by a pair of 'proper' AR smart glasses in 2027.

The Meta Quest 3 dangling down as a user looks towards a sunny window while holding it

We’re ready for Meta’s next big hardware release (Image credit: Meta)

However, while we may have to wait some time to put these things on our heads, we might get a look at them in the next year or so.

A later report that dropped in February this year, this time via Business Insider, cited unnamed sources who said a pair of true AR glasses would be demoed at this year’s Meta Connect conference. Dubbed 'Orion' by those who claim to be in the know, the specs would combine Meta’s XR (a catchall for VR, AR, and MR) and AI efforts – which is exactly what Meta described in its recent blog post.

As always, we should take rumors with a pinch of salt, but given that this latest teaser came via Meta itself it’s somewhat safe to assume that Meta AR glasses are a matter of when, not if. And boy are we excited.

We want Meta AR glasses, and we want ‘em now 

Currently Meta has two main hardware lines: its VR headsets and its smart glasses. And while it’s rumored to be working on new entries to both – such as a budget Meta Quest 3 Lite, a high-end Meta Quest Pro 2, and the aforementioned third-generation Ray-Ban glasses with a screen – these AR glasses would be its first big new hardware line since it launched the Ray-Ban Stories in 2021.

And the picture Meta has painted of its AR glasses is sublime.

Firstly, while Meta’s current Ray-Ban smart glasses aren’t yet the smartest, a lot of major AI upgrades are currently in beta – and should be launching properly soon.

Ray-Ban meta glasses up close

The Ray-Ban Meta Smart Glasses are set to get way better with AI (Image credit: Future / Philip Berne)

Its Look and Ask feature combines the intelligence of ChatGPT – or in this instance Meta’s in-house Meta AI – with the image-analysis abilities of an app like Google Lens. This apparently lets you identify animals, discover facts about landmarks, and plan a meal based on the ingredients you have – it all sounds very sci-fi, and actually useful, unlike some AI applications.

Then take those AI abilities and combine them with Meta’s first-class Quest platform, which is home to some of the best software and developers working in the XR space.

While many apps likely couldn’t be ported to the new system due to hardware restrictions – as the glasses might not offer controllers, will probably be AR-only, and might be too small to offer as powerful a chipset or as much RAM as its Quest hardware – we hope that plenty will make their way over. And Meta’s existing partners would plausibly develop all-new AR software to take advantage of the new system.

Based on the many Quest 3 games and apps we’ve tried, even if just a few of the best make their way to the specs, they’d help make Meta’s new product feel instantly useful – a factor that’s a must for any new gadget.

Lastly, we’d hopefully see Meta’s glasses adopt the single-best Ray-Ban Meta Smart Glasses feature: their design. These things are gorgeous, comfortable, and their charging case is the perfect combination of fashion and function. 

A closeup of the RayBan Meta Smart Glasses

We couldn’t ask for better-looking smart specs than these (Image credit: Meta)

Give us everything we have already design-wise, and throw in interchangeable lenses so we aren’t stuck with sunglasses all year round – which in the UK where I'm based are only usable for about two weeks a year – and the AR glasses could be perfect.

We’ll just have to wait and see what Meta shows off, either at this year’s Meta Connect or in the future – and as soon as they're ready for prime time, we’ll certainly be ready to test them.


ChatGPT just took a big step towards becoming the next Google with its new account-free version

The most widely available (and free) version of ChatGPT, which runs on GPT-3.5, can now be used without creating and logging into a personal account. That means you can have conversations with the AI chatbot without them being tied to personal details like your email. However, OpenAI, the organization behind ChatGPT, limits what users can do without registering for an account: unregistered users are restricted in the kinds of questions they can ask and in their access to advanced features.

This means there are still some benefits to making and using a ChatGPT account, especially if you’re a regular user. OpenAI writes in an official blog post that this change is intended to make it easy for people to try out ChatGPT and get a taste of what modern AI can do, without going through the sign-up process. 

In its announcement post on April 1, 2024, OpenAI explained that it’s rolling out the change gradually, so if you want to try it for yourself and can’t yet, don’t panic. When speaking to PCMag, an OpenAI spokesperson explained that this change is in the spirit of OpenAI’s overall mission to make it easier for people “to experience ChatGPT and the benefits of AI.”

Woman sitting by window, legs outstretched, with laptop

(Image credit: Shutterstock/number-one)

To create an OpenAI account or not to create an OpenAI account

If you don’t want your entries into the AI chatbot to be tied to the details you’d have to disclose when setting up an account, such as your birthday, phone number, and email address, then this is a great development. That said, lots of people create dummy accounts to use apps and web services, so the requirement was never that hard to circumvent – you’d just need multiple emails and phone numbers to ‘burn’ for the purpose.

OpenAI does include a disclaimer stating that, by default, it stores your inputs and may use them to improve ChatGPT, whether you’re signed in or not – which I suspected was the case. It also states that you can turn this off via ChatGPT’s settings, and this can be done whether you have an account or not.

If you do choose to make an account, you get some useful benefits, including being able to see your previous conversations with the chatbot, share links to specific conversations you’ve had, make use of the newly introduced voice conversation features and custom instructions, and upgrade to ChatGPT Plus – the premium subscription tier that lets users access GPT-4, OpenAI’s latest large language model (LLM).

If you decide not to create an account and forgo these features, you can expect to see the same chat interface that users with accounts use. OpenAI will also be putting in additional content safeguards for users who aren’t logged in, detailing that it’s put in measures to block prompts and generated responses in more categories and topics. Its announcement post didn’t include any examples of the types of topics or categories that will get this treatment, however.

Man holding a phone displaying ChatGPT, a prototype artificial intelligence chatbot developed by OpenAI

(Image credit: Shutterstock/R Photography Background)

An invitation to users, a power play to rivals?

I think this is an interesting change that will possibly tempt more people to try ChatGPT, and when they try it for the first time, it can seem pretty impressive. It allows OpenAI to give users a glimpse of its capabilities, which I imagine will convince some people to make accounts and access its additional features. 

This will continue to expand ChatGPT’s pool of users, some of whom may go on to become paying ChatGPT Plus subscribers. Perhaps this is a strategy that will pay off for OpenAI, and it might institute a sort of pass-it-down approach through the tiers as it introduces new generations of its models.

This easier accessibility could mean the kind of user growth that sees ChatGPT become as commonplace as Google products in the near future. One of Google Search’s appeals, for example, is that you can just fire up your browser and make a query in an instant. It’s a user-centric way of doing things, and if OpenAI can make using ChatGPT that easy, then things could get seriously interesting.



Windows 11’s next major feature drop is available now – for those brave enough to grab a preview update

Windows 11’s next feature update, known as ‘Moment 5’, is now rolling out, though it’s still an optional update at this point.

The preview update KB5035942 became available yesterday, so pretty much everyone on Windows 11 (23H2 and 22H2) should see it now – if they check for it.

As mentioned, this is an optional installation, so it will only show up if you manually fire up a check in the Windows Update panel, whereupon you can then choose to download KB5035942.

Bear in mind that as it’s still in testing, there could be wrinkles in the preview update. But if you want those new Moment 5 features and can’t wait, well, they’re up for grabs now.

Currently, there are no known issues with KB5035942, but that’s not a guarantee you won’t encounter technical hitches, of course – it’s just that they might not have been flagged up yet.

At the time of writing, there are no reported issues on the Reddit thread announcing the update at any rate, which is a good early sign – there’s just a warning that this one is a hefty download. Given that it’s a major feature update, that’s to be expected, of course.

Analysis: Lock and load – or wait for next month?

What new features are provided by Moment 5? There’s an extensive list of the fresh additions in Microsoft’s support document for the March 2024 preview update, but let’s touch on some of the highlights here.

They include new functionality for the lock screen in Windows 11 in the form of cards that pipe through info on weather, stocks, traffic and more – a somewhat controversial addition as some regard it as bloat. Mind you, if you don’t like the idea, you don’t have to enable the lock screen cards, and we should note that this is rolling out gradually within those adopting Moment 5 right now – so you may not see it yet anyway.

The Voice Access feature has also received a good deal of attention here, including nifty new shortcuts for custom commands (like pasting a boilerplate piece of text), and the ability to use voice controls over multiple monitors for the first time. Narrator has a raft of new features too, and that includes being able to use voice commands with the screen reading tool, so you can verbally ask it to “speak faster” for example.

For those not signed into a Microsoft account, it’s also worth noting that Copilot now lets you run 10 queries, so you can give the AI assistant a quick trial without being logged in. (Copilot is now rolling out to more users, incidentally, so if you haven’t seen it yet, you might do very soon).

So, should you bag all these features now? Well, you need to balance your desire for new toys to play with against the possibility of faulty bits in testing. Generally speaking, the safest course of action is to wait for this to become a finished cumulative update in April, and install Moment 5 then. Still, if you can’t wait for any particular piece of functionality – or important bug fix, as there are some glitches resolved here, too – then you might want to go early on this one.

Via Neowin


The next Apple Pencil could work on the Vision Pro for spatial sketching

A rumor claims the Apple Vision Pro headset will one day support a future model of the Apple Pencil. This news comes from MacRumors, who got their information from an anonymous “source familiar with the matter,” so we'll take it with a grain of salt. Details on the update are scarce as you can imagine, but if it is indeed real, it could quite literally turn the world into your personal canvas.

The report states the upcoming Apple Pencil could be used on flat surfaces, like a desk, “complete with pressure and tilt sensitivity” to accurately display your artistic vision in one of the headset’s illustration apps. Support for a stylus would require a software upgrade, “but it is unclear which version” of visionOS will see the patch. MacRumors points out that the first beta for visionOS 1.2 could come out this week with Apple Pencil support. However, nothing can be said with total confidence; we can only surmise that testing is currently ongoing internally.

No word on when the update will roll out, if at all, and it’s entirely possible this will never see the light of day. However, MacRumors seems to believe we could see something during the expected reveal of visionOS 2 at WWDC 2024 this June.

It is worth mentioning that an Apple Pencil refresh is expected to arrive alongside new iPad models very soon. Whether or not this refresh and a Vision Pro update are one and the same remains to be seen.

Analysis: Picking up the digital pen

Assuming this is all true (and fingers crossed that it is), an Apple Pencil on the Vision Pro would do wonders for achieving precise control. The hands-free control scheme is one of the main selling points for the headset. You don’t need special controllers to navigate the user interface. Thanks to an array of cameras and sensors, owners can simply use their eyes and hands to command the software. This method of navigation is fine for most things, but when it comes to drawing, it turns into a nightmare.

TechRadar’s Editor At Large Lance Ulanoff dealt with this first hand when he tried to illustrate on the Vision Pro. He ended up calling the whole experience “insanely frustrating and difficult.” The main problem is that the gaze controls clash with the hand gestures. If your eyes move between a reference image and the digital canvas, the art piece falls apart because the headset prioritizes what you’re looking at. Then there are other problems, like the numerous bugs affecting the current slate of art apps.

The hope is that the future Apple Pencil will help keep the canvas steady. That way, there isn’t this weird back and forth between the two control methods.

If you're looking to pick up illustration as a hobby, check out TechRadar's list of the best free drawing software for 2024.


Windows 11’s next big update is almost ready to roll – but most people won’t get it for a long time yet

Windows 11’s next major update is coming close to completion, and in fact it’s rumored that it’ll hit its final stage of development very shortly – though its launch for all users will still be a good way down the line (we’ll come back to that).

As well-known Microsoft leaker Zac Bowden shared on X (formerly Twitter), Windows 11 24H2 is on track to hit RTM (release to manufacturing) in April.


What this means is that the 24H2 update is ready to go to PC manufacturers so that they can work on installing it on their devices. In other words, Windows 11 24H2 is all but done at this point, save for final testing and changes that might need to be applied if PC makers run into any last-minute stumbling blocks.

Bowden mentions ‘ge_release’, which refers to Germanium, a new platform that Windows 11 is built on with 24H2. While this won’t make any difference to the visible parts of the OS, under the hood, Germanium will offer tighter security and better overall performance.

With RTM for 24H2 happening in April, in theory, the plan is that it’ll take two months to finalize the new Windows 11 Germanium build, and it will be installed on ARM-based AI PCs when they start shipping in June.

Analysis: Clarifying the 24H2 release timeline

Note that as Bowden outlines on X, this does not mean Windows 11 24H2 (Germanium) will be released for everyone in June.

It will only be out on ARM-based laptops running Snapdragon X Elite chips (or variants) initially – like the consumer versions of the Surface Pro 10 or Surface Laptop 6. That’s why only the business models were unveiled recently: they have Intel CPUs that don’t need Germanium. The Germanium platform is actually required for these new ARM chips – which have been stoking a great deal of excitement – so Microsoft is pushing it out ahead of time so as not to hold up those notebooks any longer than necessary.

As Bowden makes clear in a later tweet, Windows 11 24H2 won’t actually be ‘done’ until August, so the leaker suspects Microsoft wants to limit where Germanium is present until then.

What we can surmise from this is that while Windows 11 24H2 will be out on those mentioned AI PCs as early as June (if everything stays on schedule), not all of 24H2’s full library of features will be enabled – presumably.

Whatever the case, the full rollout of Windows 11 24H2 to all users won’t happen until after it’s fully done in August, meaning a September or October rollout to all Windows 11 users. This is the timeframe Microsoft is working to based on rumors that go back to the start of this year, in fact.

The long and short of it is that while Windows 11 24H2 may be ready for RTM next month and on the cusp of finalization technically, it won’t fully arrive until September (at the earliest). And the rollout will be phased as ever, so you might not get it on your particular Windows 11 device until several months after, which is all standard practice.


Windows 11’s next big AI feature could turn your video chats into a cartoon

Windows 11 users could get some smart abilities that allow for adding AI-powered effects to their video chats, including the possibility of transporting themselves into a cartoon world.

Windows Latest spotted the effects being flagged up on X (formerly Twitter) by regular leaker XenoPanther, who discovered clues to their existence by digging around in a Windows 11 preview build.


These are Windows Studio effects, which is a set of features implemented by Microsoft in Windows 11 that use AI – requiring an NPU in the PC – to achieve various tricks. Currently, one of those is making it look like you’re making eye contact with the person on the other end of the video call. (In other words, making it seem like you’re looking at the camera, when you’re actually looking at the screen).

The new capabilities appear to be the choice to make the video feed look like an animated cartoon, a watercolor painting, or an illustrated drawing (like a pencil or felt tip artwork – we’re assuming something like the video for that eighties classic ‘Take on Me’ by A-ha).

If you’re wondering what Windows Studio is capable of as it stands, as well as the aforementioned eye contact feature – which is very useful in terms of facilitating a more natural interaction in video chats or meetings – it can also apply background effects. That includes blurring the background in case there’s something you don’t want other chat participants to see (like the fact you haven’t tidied up your study in about three years).

The other feature is automatic framing which keeps you centered, with the image zoomed and cropped appropriately, as (or if) you move around.

Analysis: That’s all, folks!

Another Microsoft leaker, Zac Bowden, replied to the above tweet to confirm these are the ‘enhanced’ Windows Studio effects that he’s talked about recently, and that they look ‘super cool’ apparently. They certainly sound nifty, albeit on the more off-the-wall side of the equation than existing Windows Studio functionality – they’re fun aspects rather than serious presentation-related AI powers.

This is something we might see in testing soon, then – that seems likely, particularly as two leakers have chimed in here. We might even see these effects arrive in Windows 11 24H2 later this year.

Of course, there’s no guarantee of that, but it also makes sense given that Microsoft is fleshing out pretty much everything under the sun with extra AI capabilities, wherever they can be crammed in – with a particular focus on creativity at the moment (and the likes of the Paint app).

The future is very much the AI PC, complete with NPU acceleration, as far as Microsoft is concerned.


Oppo’s new AI-powered AR smart glasses give us a glimpse of the next tech revolution

  • Oppo has shown off its Air Glass 3 AR glasses at MWC 2024
  • They’re powered by its AndesGPT AI model and can answer questions
  • They’re just a prototype, but the tech might not be far from launching

While there’s a slight weirdness to the Meta Ray-Ban Smart Glasses – they are a wearable camera, after all – the onboard AI is pretty neat, even if some of its best features are still in beta. So it’s unsurprising that other companies are looking to launch their own AI-powered specs, with Oppo being the latest to unveil its new Air Glass 3 at MWC 2024.

In a demo video, Oppo shows how the specs have seemingly revolutionized someone's working day. When they boot up, the Air Glass 3's 1,000-nit displays show the user a breakdown of their schedule, and while making a coffee ahead of a meeting they get a message saying that it's started early.

While in the meeting the specs pick up on a question that’s been asked, and Oppo's AndesGPT AI model (which runs on a connected smartphone) is able to provide some possible answers. Later it uses the design details that have been discussed to create an image of a possible prototype design which the wearer then brings to life.

After a good day’s work they can kick back to some of their favorite tunes that play through the glasses’ in-built speakers. All of this is crammed into a 50g design. 

Now, the big caveat here is that the Air Glass 3 AR glasses are just a prototype. What’s more, neither of the previous Air Glass models was released outside of China – so there’s a more than likely chance the Air Glass 3 won’t be either.

But what Oppo is showing off isn’t far from being mimicked by its rivals, and a lot of it is pretty much possible in tech that you can go out and buy today – including those Meta Ray-Ban Smart Glasses.

The future is now

The Ray-Ban Meta Smart Glasses already have an AI that can answer questions like a voice-controlled ChatGPT.

They can also scan the environment around you using the camera, to get context for questions – for example, “what meal can I make with these ingredients?” – via their ‘Look and Ask’ feature. These tools are currently in beta, but the tech works, and the AI features will hopefully be more widely available soon.

They can also alert you to texts and calls that you’re getting and play music, just like the Oppo Air Glass 3 concept.


The Ray-Ban Meta glasses ooze style and have neat AI tools (Image credit: Meta)

Then there’s the likes of the Xreal Air 2. While their AR display is a little more distracting than the screen found on the Oppo Air Glass 3, they are a consumer product that isn’t mind-blowingly expensive – just $399 / £399 for the base model.

If you combine these two pairs of glasses then you’re already very close to Oppo’s concept; you’d just need to clean up the design a little, and probably splash out a bit more, as I expect lenses with built-in displays won’t come cheap.

The only thing I can’t see happening soon is the AI creating a working prototype product design for you. It might be able to provide some inspiration for a designer to work off, but reliably creating a fully functional model seems more than a little beyond existing AI image generation tools' capabilities.

While the Oppo Air Glass 3 certainly look like a promising glimpse of the future, we'll have to see what they're actually capable of if and when they launch outside China.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new open-source models, Google has also released a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, which have both been pre-trained to filter out sensitive or personal information. Both versions of the model have also been tuned with reinforcement learning from human feedback, to significantly reduce the chances of any chatbots based on Gemma spitting out harmful content.

A step in the right direction

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.
