Meta’s recent Quest 3 update includes a secret AI upgrade for mixed reality

Meta’s VR headsets recently received update v64, which, according to Meta, added several software improvements – such as better-quality mixed-reality passthrough on the Meta Quest 3 (though I didn’t see a massive difference after installing the update on my headset).

It’s now been discovered (first by Twitter user @Squashi9) that the update also included another upgrade for Meta’s hardware, with Space Scan, the Quest 3’s room scanning feature, getting a major buff thanks to AI.

The Quest 3’s Space Scan is different to its regular boundary scan, which sets up your safe play space for VR. Instead, Space Scan maps out your room for mixed-reality experiences, marking out walls, floors, and ceilings so that experiences are correctly calibrated.

You also have the option to add and label furniture, but until update v64 rolled out you had to do this part manually. Now, when you do a room scan your Quest 3 will automatically highlight and label furniture – and based on my tests it works almost flawlessly.

Annoyingly, the headset wouldn’t let me take screenshots of the process, so you’ll have to trust me when I say that every piece of furniture was not only picked up by the scan and correctly marked out, but also labelled accurately – it even picked up on my windows and doors, which I wasn’t expecting.

The only mistake I spotted was that a chair in my living room was designated a 'couch', though this seems to be down to Meta’s limited set of labels rather than Space Scan’s ability to detect what type of object each item of furniture is.


This feature isn’t a complete surprise, as Reality Labs showed a version of it off on Threads in March. What is surprising, however, is how quickly it’s been rolled out after being unveiled – though I’m not complaining, considering how well it works and how easy it makes scanning your room. 

So what? 

Furniture data has real uses in MR and VR apps. Tables can be used by apps like Horizon Workrooms as designated desks, while sitting down on or getting up from a designated couch can switch your VR experience between seated and standing modes.

Meanwhile, some apps can use the detected doors, windows, walls, and furniture such as a bookshelf to adjust how mixed-reality experiences interact with your space.

With Meta making it less tedious to add these data points, app developers have more of a reason to take furniture into account when designing VR and MR experiences, which should lead to them feeling more immersive.

This also gives Meta a leg up over the Apple Vision Pro, as Apple’s headset can’t yet create a room scan that’s as detailed as the one produced on Meta’s hardware – though until software starts to take real advantage of this feature it’s not that big a deal.

We’ll have to wait and see what comes of this improvement, but if you’ve already made a space scan or two on your Quest 3 you might want to redo them, as the new scans should be a lot more accurate.


Meta’s Smart Glasses will get a sci-fi upgrade soon, but they’re still not smart enough

There's a certain allure to smart glasses that bulky mixed-reality headsets lack. Meta's Ray-Ban Smart Glasses (formerly Stories), for instance, are a perfect illustration of how you can build smarts into a wearable without making the wearer look ridiculous. The question is, can you still end up being ridiculous while wearing them?

Ray-Ban Meta Smart Glasses' big upcoming Meta AI update will let you talk to your stylish frames, querying them about the food you're consuming, the buildings you're facing, and the animals you encounter. The update is set to transform the wearable from just another pair of voice-enabled glasses into an always-on-your-face assistant.

The update isn't public yet, and it will only apply to the Ray-Ban Meta Smart Glasses, not to the older Ray-Ban Stories, which lack Qualcomm's new AR1 Gen 1 chip. This week, however, Meta gave a couple of tech reporters at The New York Times early access to the Meta AI integration, and they came away somewhat impressed.

I must admit, I found the walkthrough more intriguing than I expected.

Even though they didn't tear the glasses apart or get into the nitty-gritty tech details I crave, their real-world experience paints Meta AI as a fascinating and possibly useful work in progress.

Answers and questions

In the story, the authors use the Ray-Ban smart glasses to ask Meta AI to identify a variety of animals, objects, and landmarks, with varying success. In the confines of their homes, they spoke at full volume and asked Meta AI, “What am I looking at?” They also enabled transcription so readers could see what they asked and the responses Meta AI provided.

It was, in their experience, quite good at identifying their dogs' breed. However, when they took the smart glasses to the zoo, Meta AI struggled to identify far-away animals. In fact, Meta AI got a lot wrong. To be fair, this is a beta, and I wouldn't expect the large language model (Llama 2) to get everything right. At least it's not hallucinating (“that's a unicorn!”), just getting things wrong.

The story features a lot of photos taken with the Ray-Ban Meta Smart Glasses, along with the queries and Meta AI's responses. Of course, that presentation doesn't capture what was really happening. As the authors note, they were speaking to Meta AI wherever they went and then hearing the responses spoken back to them. That's all well and good when you're at home, but just weird when you're alone at a zoo talking to yourself.

The creep factor

This, for me, remains the fundamental flaw in many of these wearables. Whether you wear Ray-Ban Smart Glasses or Amazon Echo Frames, you'll still look as if you're talking to yourself. For a decent experience, you may need to engage in a lengthy “conversation” with Meta AI to get the information you need. Again, if you're doing this at home, letting Meta AI help you through a detailed recipe, that's fine. Using Meta AI as a tour guide when you're in the middle of, say, your local Whole Foods might label you as a bit of an oddball.

We do talk to our phones and even our smartwatches, but I think that when people see you holding your phone or smartwatch near your face, they understand what's going on.

The New York Times' authors noted how they found themselves whispering to their smart glasses, but they still got looks.

I don't know a way around this issue and wonder if this will be the primary reason people swear off what is arguably a very good-looking pair of glasses (or sunglasses) even if they could offer the passive smart technology we need.

So, I'm of two minds. I don't want to be seen as a weirdo talking to my glasses, but I can appreciate having intelligence there and ready to go; no need to pull my phone out, raise my wrist, or even tap a smart lapel pin. I just say, “Hey Meta” and the smart glasses wake up, ready to help.

Perhaps the tipping point here will be when Meta can integrate very subtle AR screens into the frames that add some much-needed visual guidance. Plus, the access to visuals might cut down on the conversation, and I would appreciate that.


Meta’s Ray-Ban smart glasses are becoming AI-powered tour guides

While Meta’s most recognizable hardware is its Quest VR headsets, its smart glasses created in collaboration with Ray-Ban are proving to be popular thanks to their sleek design and unique AI tools – tools that are getting an upgrade to turn them into a wearable tourist guide.

In a post on Threads – Meta’s Twitter-like Instagram spinoff – Meta CTO Andrew Bosworth showed off a new Look and Ask feature that can recognize landmarks and tell you facts about them. Bosworth demonstrated it using examples from San Francisco such as the Golden Gate Bridge, the Painted Ladies, and Coit Tower.

As with other Look and Ask prompts, you give a command like “Look and tell me a cool fact about this bridge.” The Ray-Ban Meta Smart Glasses then use their built-in camera to capture the scene in front of you, and cross-reference the image with info in Meta AI’s knowledge database (which includes access to the Bing search engine).

The specs then respond with the cool fact you requested – in this case explaining that the Golden Gate Bridge (which it recognized in the photo it took) is painted “International Orange” so that it’s more visible in foggy conditions.

Screenshots from Threads showing the Meta Ray-Ban Smart Glasses being used to give the user information about San Francisco landmarks

(Image credit: Andrew Bosworth / Threads)

Bosworth added in a follow-up message that other improvements are being rolled out, including new voice commands so you can share your latest Meta AI interaction on WhatsApp and Messenger. 

Down the line, Bosworth says you’ll also be able to change the speed of Meta AI readouts in the voice settings menu to have them go faster or slower.

Still not for everyone 

One huge caveat is that – much like the glasses’ other Look and Ask AI features – this new landmark recognition feature is still only in beta. As such, it might not always be the most accurate – so take its tourist guidance with a pinch of salt.

Orange Ray-Ban Meta Smart Glasses

(Image credit: Meta)

The good news is Meta has at least opened up its waitlist to join the beta so more of us can try these experimental features. Go to the official page, input your glasses serial number, and wait to get contacted – though this option is only available if you’re based in the US.

In his post Bosworth did say that the team is working to “make this available to more people,” but neither he nor Meta has given a precise timeline for when the impressive AI features will be more widely available.


Meta’s Quest Pro can now track your tongue, but it’s not the update the headset needs most

The Meta Quest Pro has received an unexpected update – its face tracking can now follow your tongue movements.

One of the main hardware upgrades featured in the Meta Quest Pro is face and eye tracking. Thanks to inbuilt cameras, the headset can track your facial features and translate your real-life movements onto a virtual avatar. Because it closely mimics your mouth flaps, nose twitches, and eye movements, the digital model can feel almost as alive as a real human at times – at least in my experience with the tech.

Unfortunately, this immersion can break down at points, as the tracking isn’t always perfect – and one fault many users noticed is that it can’t track your tongue. So if you wanted to tease one of your friends by sticking it out at them, or lick a virtual ice cream cone, you couldn’t – until now.

Meta has released a new version of the face-tracking extension in its v60 SDK, which finally includes support for tongue tracking. Interestingly, this support hasn’t yet been added to Meta avatars – so you might not see tongue tracking in apps like Horizon Worlds – but developers have already started adding it to third-party apps.


This includes developer Korejan (via UploadVR), who develops a VR Face Tracking module for ALXR – an alternative to apps like Virtual Desktop and Quest Link, which help you connect your headset to a PC. Korejan posted a clip of how it works on X (formerly Twitter).

All the gear and nothing to do with it 

Meta upgrading its tech’s capabilities is never a bad thing, and we should see tongue tracking rolling out to more apps soonish – especially once Meta’s own avatar SDK gets support for the feature. But this isn’t the upgrade face tracking really needs. Instead, face tracking needs to get into more people’s hands, and there needs to be more software that uses it.

Before the Meta Quest 3 was released I seldom used mixed reality – the only times I did were as part of any reviews or tests I did for my job. That’s changed a lot in the past few months, and I’d go as far as to say that mixed reality is sometimes my preferred way to play if there’s a choice between VR and MR.

One reason is that the Quest 3 offers significantly higher-quality passthrough than the Quest Pro – it’s still not perfect, but the colors are more accurate, and the feed isn’t ruined by graininess. The other, far more important reason is that the platform is now brimming with software that offers mixed-reality support, rather than only a few niche apps featuring mixed reality as an aside to the main VR experience.

A Meta Quest 3 user throwing a giant die onto a virtual medieval tabletop game board full of castles, wizards and knights

Mixed reality is great, and more people can use it thanks to the Quest 3 (Image credit: Meta)

Even though face and eye tracking have been available on Meta hardware for just as long as mixed reality has, they don’t enjoy the same level of support. That’s despite all the talk before the Quest Pro was released of how much realism these tools can add, and how much more efficiently apps could run using foveated rendering – a technique where VR software fully renders only the part of the scene your eyes are actually looking at.

The big problem isn’t that face tracking isn’t good enough – if it can track your tongue, it’s definitely impressive – it’s (probably) the Quest Pro’s poor sales. Meta hasn’t said how well or badly the Pro has performed financially, but you don’t permanently cut the price of a product by a third just four months after launch if it’s selling like hotcakes – it fell from $1,500 / £1,500 / AU$2,450 to $999.99 / £999.99 / AU$1,729.99. And if not many people have this headset and its tracking tools, why would developers waste resources on creating apps that use them when they could work on something more people could take advantage of?

For face tracking to take off like mixed reality has, it needs to be brought to Meta’s budget Quest line so that more people can access it and developers are incentivized to create software that uses it. Until then, no matter how impressive it gets, face tracking will remain a fringe tool.


Meta’s new VR headset design looks like a next-gen Apple Vision Pro

Meta has teased a super impressive XR headset that looks to combine the Meta Quest Pro, Apple Vision Pro and a few new exclusive features. The only downside? Anything resembling what Meta has shown off is most likely years from release.

During a talk at the University of Arizona College of Optical Sciences, Meta’s director of display systems research, Douglas Lanman, showed a render of Mirror Lake – an advanced prototype that is “practical to build now” based on the tech Meta has developed. This XR headset (XR being a catch-all term for VR, AR and MR) combines design elements and features used by the Meta Quest Pro and Apple Vision Pro – such as the Quest Pro’s open-sided design and the Vision Pro’s EyeSight display – with new tools such as HoloCake lenses and electronic varifocal, to make something better than anything on the market.

We’ve talked about electronic varifocal on TechRadar before – when Meta’s Butterscotch Varifocal prototype won an award – so we won’t go too in-depth here. Simply put, using a mixture of eye tracking and a display system that can move closer to or further away from the headset wearer’s face, electronic varifocal aims to mimic the way we focus on objects that are near or far away in the real world. It's an approach Meta says delivers a “more natural, realistic, and comfortable experience”.


HoloCake lenses – the name is a portmanteau of ‘holographic’ and ‘pancake’ – help to enable this varifocal system while trimming down the size of the headset.

Pancake lenses are used by the Meta Quest 3, Quest Pro, and other modern headsets including the Pico 4 and Apple Vision Pro, and thanks to some clever optic trickery they can be a lot slimmer than lenses previously used by headsets like the Quest 2.

To further slim the optics down, HoloCake lenses use a thin, flat holographic lens instead of the curved one relied on by a pancake system – holographic as in reflective foil, not as in a 3D hologram you might see in a sci-fi flick.

The only downside is that you need to use lasers, instead of a regular LED backlight. This can add cost, size, heat and safety hurdles. That said, needing to rely on lasers could be seen as an upgrade since these can usually produce a wider and more vivid range of colors than standard LEDs.

A diagram showing the difference between pancake, HoloCake and regular VR lens optics

Diagrams of different lens optics including HoloCake lenses (Image credit: Meta)

When can we get one? Not for a while 

Unfortunately, Mirror Lake won’t be coming anytime soon. Lanman described the headset as something “[Meta] could build with significant time”, implying that development hasn’t started yet – and even if it has, we might be years away from seeing it in action.

On this point Mark Zuckerberg, Meta’s CEO, added that the technology Mirror Lake relies on could be seen in products “in the second half of the decade”, pointing to a release in 2026 and beyond (maybe late 2025 if we’re lucky).

This would match up with when we predict Meta’s next XR headset – a Meta Quest Pro 2 or Meta Quest 4, say – will probably launch. Meta usually likes to tease its headsets a year in advance at its Meta Connect events (doing so with both the Meta Quest Pro and Quest 3), and Meta Connect 2023 passed without a sneak peek at what's to come – so if it sticks to this trend, the earliest we’ll see a new device is September or October 2025.

Apple Vision Pro showing a wearer's eye through a display on the front of the headset via EyeSight

Someone wearing the Apple Vision Pro VR headset (Image credit: Apple)

Waiting a few years would also give the Meta Quest 3 time in the spotlight before the next big thing comes to overshadow it, and of course let Meta see how the Apple Vision Pro fares. Apple’s XR headset is taking the exact opposite approach to Meta’s Quest 2 and Quest 3, with Apple offering very high-end tech at a very unaffordable price ($3,499, or around £2,800 / AU$5,300).

If Apple’s gamble pays off, Meta might want to mix up its strategy by releasing an equally high-end and costly Meta Quest Pro 2 that offers a more significant upgrade over the Quest 3 than the first Meta Quest Pro offered compared to the Quest 2. If the Vision Pro flops, Meta won’t want to follow its lead.


Threads is dead – can AI chatbots save Meta’s Twitter clone?

Meta is set to launch numerous artificial intelligence chatbots with different ‘personalities’ by September this year, in a bid to revive faltering interest in the social media giant’s products.

According to the Financial Times, these chatbots have been in the works for some time, and are designed to hold more human-like conversations with users in an attempt to boost social media engagement.

Giving various chatbots different temperaments and personalities echoes the ‘social’ AI chatbot approach Snapchat took with ‘My AI’ earlier this year, which created some mild buzz but quickly faded into irrelevance.

According to the report, Meta is even exploring a chatbot that speaks like Abraham Lincoln, as well as one that will dish out travel advice in the verbal style of a surfer. These new tools are poised to provide new search functions and offer recommendations, similar to the ways in which the popular AI chatbot ChatGPT is used.

It’s possible – likely, even – that this new string of AI chatbots is an attempt to remain relevant, as the company may be focused on maintaining attention since Threads lost more than half its user base only a couple of weeks after launching in early July. Meta’s long-running ‘metaverse’ project also appears to have failed to garner enough interest, with the company switching focus to AI as its primary area of investment back in March.

Regardless, we’ll soon be treated to even more AI-boosted chatbots. Oh, joy. 


Meta’s new music AI could change how you craft soundtracks and tunes

The hills are alive with the sound of AI-generated music as Meta launches its new large language model (LLM): the aptly named MusicGen.

Developed by the company’s internal Audiocraft team, MusicGen is like a musical version of ChatGPT. You enter a brief text description of the type of music you want to hear, click Generate, and in a short amount of time the AI creates a 12-second-long track according to your instructions. For example, you can tell MusicGen to generate a “lofi slow BPM electro chill [song] with organic samples” and, sure enough, the audio sounds like something you’d hear on YouTube’s Lofi Girl radio.
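If you’d rather run the model locally than use the web demo, that generation flow maps onto a few lines of Python. Below is a minimal sketch, assuming Meta’s open-source audiocraft package and its smallest public checkpoint – the checkpoint choice, prompt, and output filenames are our illustrative assumptions, not something Meta prescribes.

```python
# Minimal text-to-music sketch using Meta's open-source audiocraft package.
# Assumes: pip install audiocraft, and the 'facebook/musicgen-small' checkpoint.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=12)  # roughly the 12-second clips described above

prompts = ["lofi slow BPM electro chill with organic samples"]
wav = model.generate(prompts)  # returns a batch of audio tensors, one per prompt

for idx, one_wav in enumerate(wav):
    # Writes e.g. track_0.wav at the model's native sample rate, loudness-normalized
    audio_write(f"track_{idx}", one_wav.cpu(), model.sample_rate, strategy="loudness")
```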

It is possible to “steer” MusicGen by uploading your own song so the AI has a better sense of structure. One of the developers for the LLM, Felix Kreuk, posted some samples of what this sounds like on his Twitter profile. As an example, MusicGen can take Johann Sebastian Bach’s famous Toccata and Fugue in D Minor and add drum beats and synths straight out of the 1980s to produce a more upbeat version of the piece.
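That steering trick is exposed through a separate, melody-conditioned checkpoint. The sketch below assumes the ‘facebook/musicgen-melody’ model and a local reference file – bach_toccata.wav is a hypothetical placeholder, not a file from the article.

```python
# Sketch of "steering" MusicGen with a reference track via the melody-conditioned model.
# Assumes: audiocraft and torchaudio are installed, and a local reference audio file.
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-melody")
model.set_generation_params(duration=12)

melody, sr = torchaudio.load("bach_toccata.wav")  # hypothetical reference melody

# The text prompt sets the style; the reference audio guides the melodic structure
wav = model.generate_with_chroma(
    ["80s synth-pop with drum machine and big reverb"],
    melody[None],  # add a batch dimension: [1, channels, samples]
    sr,
)
audio_write("toccata_remix", wav[0].cpu(), model.sample_rate, strategy="loudness")
```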


Availability

MusicGen is currently available for everybody to try out via a public demo on Hugging Face. Do be aware that, unlike Google's own AI music generator MusicLM, Meta's model can't do vocals, only instrumentals. That's probably for the best, as MusicLM's vocals sound a lot like Simlish – no one would be able to understand a single thing anyway.

To the musicians out there: you don’t have to worry about losing your careers. The AI is decent at making simple, short melodies, but not much else. In our opinion, the quality isn’t on the same level as something made with human ingenuity. Some of the songs can get pretty repetitive, as MusicGen cycles through the same progressions multiple times. This tool can be useful for creating plain background audio for videos or presentations, but nothing truly engaging. The next pop hit won’t be AI-generated – at least not yet.

Act fast

If you are interested in trying out MusicGen, we recommend acting fast. First of all, the Hugging Face demo is unstable. We had a ton of AI-generated songs ready to share, but the web page crashed while we were working on this piece, severing our access to the tracks. We suspect the dead links were caused by a sudden spike in user traffic. Hopefully, by the time you read this, Hugging Face will be working properly.

The second reason is a more litigious one. On the official GitHub page, Meta states its team trained the model on 10,000 “high-quality [licensed] music tracks” plus royalty-free songs from Shutterstock and Pond5. Ever since the generative AI craze took off earlier this year, artists have begun to sue developers and platforms alike over the “illegal use of copyrighted works,” and Meta might soon find itself in the crosshairs of some cross musicians. Even if it doesn’t get sued over using licensed music to train the LLM, record companies aren’t afraid to flex their industry muscle to shut down this type of content.

If you're looking for details on how to use AI to generate images, be sure to check out TechRadar’s list of the best AI art generators for 2023. 


Meta’s ChatGPT rival could make language barriers a thing of the past

The rise of AI tools like ChatGPT and Google Bard has presented the perfect opportunity to make significant leaps in multilingual speech projects, advancing language technology and promoting worldwide linguistic diversity.

Meta has taken up the challenge, unveiling its latest AI language model – which can identify more than 4,000 spoken languages and recognize and generate speech in over 1,100 of them.

The Massively Multilingual Speech (MMS) project means that Meta’s new AI is no mere ChatGPT replica. The model uses unconventional data sources to overcome speech barriers and allow individuals to communicate in their native languages without going through an exhaustive translation process.

Most excitingly, Meta has made MMS open-source, inviting researchers to learn from and expand upon the foundation it provides. This move suggests the company is deeply invested in dominating the AI language translation space, but also encourages collaboration in the field.
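To give a sense of what that open-source release looks like in practice, here’s a minimal sketch of running speech recognition with one of the published MMS checkpoints through Hugging Face Transformers. The model ID, the per-language adapter call, and the audio file are our assumptions for illustration, not details taken from Meta’s announcement.

```python
# Minimal MMS speech-recognition sketch via Hugging Face Transformers.
# Assumes: pip install transformers torch soundfile, and a 16 kHz mono clip on disk.
import torch
import soundfile as sf
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"  # assumed multilingual ASR checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# MMS uses small per-language adapters; switch both tokenizer and model to French here
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

speech, sample_rate = sf.read("clip_16khz.wav")  # hypothetical input file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(predicted_ids))  # best-guess transcription in the target language
```

Swapping “fra” for another language code is, in principle, all it takes to move between languages – which is the whole point of the project.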

Bringing more languages into the conversation 

Normally, speech recognition and text-to-speech AI programs need extensive training on a large number of audio datasets, combined with meticulous transcription labels. Many endangered languages found outside industrialised nations lack huge datasets like this, which puts these languages at risk of vanishing or being excluded from translation tools.

According to Gizmochina, Meta took an interesting approach to this issue and dipped into religious texts, which have been translated into a huge number of languages. These translations give Meta a ‘raw’ and largely untapped look at lesser-known languages for its research.

The release of MMS as an open-source resource and research project demonstrates that Meta is devoting a lot of time and effort to addressing the lack of linguistic diversity in the tech field, which frequently caters only to the most widely spoken languages.

It’s an exciting development in the AI world – and one that could bring us a lot closer to having the sort of ‘universal translators’ that currently only exist in science fiction. Imagine an earpiece that, through the power of AI, could not only translate foreign speech for you in real time but also filter out the original language so you only hear your native tongue being spoken.

As more researchers work with Meta’s MMS and more languages are included, we could see a world where assistive technology and text-to-speech allow us to speak to people regardless of their native language, sharing information much more quickly. As someone trying to teach themselves a language, I’m super excited about this development – it’ll make real-life conversational practice a lot easier, and help me get to grips with informal and colloquial words and phrases only native speakers would know.
