Meta’s Quest Pro can now track your tongue, but it’s not the update the headset needs most

The Meta Quest Pro has received an unexpected update – its face tracking can now follow your tongue movements.

One of the main hardware upgrades featured in the Meta Quest Pro is face and eye tracking. Thanks to inbuilt cameras, the headset can track your facial features and translate your real-life movements onto a virtual avatar. Because it closely mimics your mouth movements, nose twitches, and eye movements, the digital model can feel almost as alive as a real human at times – at least in my experience with the tech.

Unfortunately, this immersion can break down at points, as the tracking isn’t always perfect – with one fault many users noticed being that it can’t track your tongue. So if you wanted to tease a friend by sticking it out at them, or lick a virtual ice cream cone, you couldn’t – until now.

Meta has released a new version of the face-tracking extension in its v60 SDK, which finally includes support for tongue tracking. Interestingly, this support hasn’t yet been added to Meta avatars – so you might not see tongue tracking in apps like Horizon Worlds – but developers have already started adding it to third-party apps.


This includes Korejan (via UploadVR), the developer of a VR Face Tracking module for ALXR – an alternative to apps like Virtual Desktop and Quest Link that help you connect your headset to a PC. Korejan posted a clip of how it works on X (formerly Twitter).

All the gear and nothing to do with it 

Meta upgrading its tech’s capabilities is never a bad thing, and we should see tongue tracking rolling out to more apps soonish – especially once Meta’s own avatar SDK gets support for the feature. But this isn’t the upgrade face tracking needs most. Instead, the tech needs to get into more people’s hands, and there needs to be more software that uses it.

Before the Meta Quest 3 was released I seldom used mixed reality – the only times I did were as part of reviews or tests for my job. That’s changed a lot in the past few months, and I’d go as far as to say that mixed reality is sometimes my preferred way to play when there’s a choice between VR and MR.

One reason is that the Quest 3 offers significantly higher quality passthrough than the Quest Pro – it’s still not perfect, but the colors are more accurate, and the feed isn’t ruined by graininess. The other, far more important reason is that the platform is now brimming with software that offers mixed reality support, rather than only a few niche apps featuring mixed reality as an aside to the main VR experience.


Mixed reality is great, and more people can use it thanks to the Quest 3 (Image credit: Meta)

Even though face tracking and eye tracking have been available on Meta hardware for as long as mixed reality has, they don’t enjoy the same support. That’s despite all the talk before the Quest Pro was released of how much realism these tools can add, and how much more efficiently apps could run using foveated rendering – a technique where VR software fully renders only the part of the scene your eyes are looking at.
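The idea behind foveated rendering can be sketched as a simple heuristic: assign each screen tile a shading rate based on how far it sits from the tracked gaze point. The toy function below is only illustrative – the radius and falloff numbers are made up for the example, and this is not how the Quest’s actual renderer works.

```python
import math

def shading_rate(tile_center, gaze, full_res_radius=0.10, falloff=0.35):
    """Toy foveated-rendering heuristic (illustrative values, not Meta's):
    tiles near the gaze point render at full rate, and the rate drops
    off linearly with distance in normalized screen coordinates."""
    dist = math.dist(tile_center, gaze)
    if dist <= full_res_radius:
        return 1.0  # full resolution at the fovea
    # linear falloff down to a quarter-rate floor in the periphery
    rate = 1.0 - (dist - full_res_radius) / falloff
    return max(0.25, rate)

# The tile under the user's gaze renders at full rate...
print(shading_rate((0.5, 0.5), (0.5, 0.5)))   # 1.0
# ...while a far-peripheral tile renders at the quarter-rate floor.
print(shading_rate((0.95, 0.1), (0.5, 0.5)))  # 0.25
```

The saving comes from the periphery: most of the frame can be shaded at a fraction of full resolution without the wearer noticing, because human vision is only sharp at the fovea.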

The big problem isn’t that face tracking isn’t good enough – if it can track your tongue, it’s clearly impressive. The issue is (probably) the Quest Pro’s poor sales. Meta hasn’t said how well or badly the Pro has performed financially, but you don’t permanently cut the price of a product by a third just four months after launch – from $1,500 / £1,500 / AU$2,450 to $999.99 / £999.99 / AU$1,729.99 – if it’s selling like hotcakes. And if not many people own this headset and its tracking tools, why would developers spend resources on apps that use them when they could work on something more people could take advantage of?

For face tracking to take off like mixed reality has it needs to be brought to Meta’s budget Quest line so that more people can access it, and developers are incentivized to create software that can use it. Until then, no matter how impressive it gets, face tracking will remain a fringe tool.


TechRadar – All the latest technology news


Meta’s new VR headset design looks like a next-gen Apple Vision Pro

Meta has teased a super impressive XR headset that looks to combine the Meta Quest Pro, Apple Vision Pro and a few new exclusive features. The only downside? Anything resembling what Meta has shown off is most likely years from release.

During a talk at the University of Arizona College of Optical Sciences, Meta’s director of display systems research, Douglas Lanman, showed a render of Mirror Lake – an advanced prototype that is “practical to build now” based on the tech Meta has developed. This XR headset (XR being a catchall term for VR, AR and MR) combines design elements and features used by the Meta Quest Pro and Apple Vision Pro – such as the Quest Pro’s open side design and the Vision Pro’s EyeSight – with new tools such as HoloCake lenses and electronic varifocal, to make something better than anything on the market.

We’ve talked about electronic varifocal on TechRadar before – when Meta’s Butterscotch Varifocal prototype won an award – so we won’t go too in-depth here. Simply put, using a mixture of eye tracking and a display system that can move closer to or further away from the headset wearer’s face, electronic varifocal aims to mimic the way we focus on objects that are near or far away in the real world. It’s an approach Meta says delivers a “more natural, realistic, and comfortable experience”.
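The eye-tracking half of that system rests on simple geometry: when both eyes rotate inward to converge on a near object, the convergence angle between the two gaze rays, combined with the wearer’s interpupillary distance (IPD), gives an estimate of the focal distance the display should move toward. A minimal sketch of that estimate – the function name and numbers are ours for illustration, not Meta’s:

```python
import math

def vergence_distance_m(ipd_m, convergence_deg):
    """Estimate the distance (in meters) of the point both eyes converge
    on, given the interpupillary distance and the total convergence angle
    between the two gaze rays. Assumes simple symmetric-gaze geometry:
    half the IPD over the tangent of half the convergence angle."""
    half_angle = math.radians(convergence_deg / 2)
    return (ipd_m / 2) / math.tan(half_angle)

# Eyes converging at ~3.7 degrees with a 64mm IPD -> roughly 1m away,
# so a varifocal display would drive its focal plane toward ~1m.
print(round(vergence_distance_m(0.064, 3.67), 2))  # 1.0
```

The smaller the convergence angle, the further away the focal point: nearly parallel gaze rays mean the wearer is looking at something distant, while a wide angle means a close-up object.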


HoloCake lenses – the name is a portmanteau of holographic and pancake – help enable this varifocal system while trimming down the size of the headset.

Pancake lenses are used by the Meta Quest 3, Quest Pro, and other modern headsets including the Pico 4 and Apple Vision Pro, and thanks to some clever optic trickery they can be a lot slimmer than lenses previously used by headsets like the Quest 2.

To further slim the optics down, HoloCake lenses use a thin, flat holographic lens instead of the curved one relied on by a pancake system – holographic as in reflective foil, not as in a 3D hologram you might see in a sci-fi flick.

The only downside is that you need to use lasers, instead of a regular LED backlight. This can add cost, size, heat and safety hurdles. That said, needing to rely on lasers could be seen as an upgrade since these can usually produce a wider and more vivid range of colors than standard LEDs.

A diagram showing the difference between pancake, holocake and regular VR lens optics

Diagrams of different lens optics including HoloCake lenses (Image credit: Meta)

When can we get one? Not for a while 

Unfortunately, Mirror Lake won’t be coming anytime soon. Lanman described the headset as something “[Meta] could build with significant time”, implying that development hasn’t started yet – and even if it has, we might be years away from seeing it in action.

On this point Mark Zuckerberg, Meta’s CEO, added that the technology Mirror Lake relies on could be seen in products “in the second half of the decade”, pointing to a release in 2026 and beyond (maybe late 2025 if we’re lucky).

This would match up with when we predict Meta’s next XR headset – like a Meta Quest Pro 2 or Meta Quest 4 – will probably launch. Meta usually likes to tease its headsets a year in advance at its Meta Connect events (it did so with both the Meta Quest Pro and Quest 3), but Meta Connect 2023 passed without a sneak peek at what’s to come – so if Meta sticks to this trend, the earliest we’ll see a new device is September or October 2025.


Someone wearing the Apple Vision Pro VR headset (Image credit: Apple)

Waiting a few years would also give the Meta Quest 3 time in the spotlight before the next big thing comes to overshadow it, and of course let Meta see how the Apple Vision Pro fares. Apple’s XR headset is taking the exact opposite approach to Meta’s Quest 2 and Quest 3, offering very high-end tech at an eye-wateringly high price ($3,499, or around £2,800 / AU$5,300).

If Apple’s gamble pays off, Meta might want to mix up its strategy by releasing an equally high-end and costly Meta Quest Pro 2 that offers a more significant upgrade over the Quest 3 than the first Meta Quest Pro offered compared to the Quest 2. If the Vision Pro flops, Meta won’t want to follow its lead.


Threads is dead – can AI chatbots save Meta’s Twitter clone?

Meta is set to launch numerous artificial intelligence chatbots hosting different ‘personalities’ by September this year, in a bid to revive faltering interest in the social media giant’s products.

According to the Financial Times, these chatbots have been in the works for some time, with the aim of having more human conversations with users in an attempt to boost social media engagement.

The attempt to give various chatbots different temperaments and personalities echoes Snapchat’s ‘My AI’, a similarly ‘social’ AI chatbot launched earlier this year, which created some mild buzz but quickly faded into irrelevance.

According to the report, Meta is even exploring a chatbot that speaks like Abraham Lincoln, as well as one that will dish out travel advice in the verbal style of a surfer. These new tools are poised to provide new search functions and offer recommendations, similar to the ways in which the popular AI chatbot ChatGPT is used.

It’s possible – likely, even – that this new string of AI chatbots is an attempt to remain relevant, as the company may be focused on maintaining attention since Threads lost more than half its user base only a couple of weeks after launching in early July. Meta’s long-running ‘metaverse’ project also appears to have failed to garner enough interest, with the company switching focus to AI as its primary area of investment back in March.

Regardless, we’ll soon be treated to even more AI-boosted chatbots. Oh, joy. 


Meta’s new music AI could change how you craft soundtracks and tunes

The hills are alive with the sound of AI-generated music as Meta launches its new large language model (LLM): the aptly named MusicGen.

Developed by the company’s internal Audiocraft team, MusicGen is like a musical version of ChatGPT. You enter a brief text description of the type of music you want to hear, click Generate, and in a short amount of time the AI creates a 12-second track according to your instructions. For example, you can tell MusicGen to generate a “lofi slow BPM electro chill [song] with organic samples” and, sure enough, the audio sounds like something you’d hear on YouTube’s Lofi Girl radio.

It is possible to “steer” MusicGen by uploading your own song so the AI has a better sense of structure. One of the model’s developers, Felix Kreuk, posted some samples of what this sounds like on his Twitter profile. As an example, MusicGen can take Johann Sebastian Bach’s famous Toccata and Fugue in D Minor, then add some drum beats and synths straight out of the 1980s to produce a more upbeat version of the piece.


Availability

MusicGen is currently available for everybody to try out via a demo on Hugging Face. Do be aware that, unlike Google’s own AI music generator MusicLM, Meta’s model can’t do vocals, only instrumentals. This is probably for the best, as MusicLM vocals sound a lot like Simlish – no one would be able to understand a single thing.

To the musicians out there: you don’t have to worry about losing your careers. The AI is decent at making simple, short melodies, but not much else. In our opinion, the quality isn’t on the same level as something made with human ingenuity, and some of the songs can get pretty repetitive as MusicGen cycles through the same progressions multiple times. This tool can be useful for creating plain background audio for videos or presentations, but nothing truly engaging. The next pop hit won’t be AI-generated – at least not yet.

Act fast

If you are interested in trying out MusicGen, we recommend acting fast. First of all, the Hugging Face demo is unstable. We had a ton of AI-generated songs ready to share, but the web page crashed while we were working on this piece, severing our connection to the tracks. We suspect the dead links were caused by sudden high user traffic. Hopefully, by the time you read this, the demo is working properly.

The second reason is a more litigious one. On the official GitHub page, Meta states its team used 10,000 “high-quality [licensed] music tracks” plus royalty-free songs from Shutterstock and Pond5. Ever since the generative AI craze took off earlier this year, artists have begun to sue developers and platforms alike over “illegal use of copyrighted works.” Meta might soon find itself in the crosshairs of some crossed musicians. Even if it doesn’t get sued over using licensed music to train the LLM, record companies aren’t afraid to flex their industry muscle to shut down this type of content. 

If you're looking for details on how to use AI to generate images, be sure to check out TechRadar’s list of the best AI art generators for 2023. 


Meta’s ChatGPT rival could make language barriers a thing of the past

The rise of AI tools like ChatGPT and Google Bard has presented the perfect opportunity to make significant leaps in multilingual speech projects, advancing language technology and promoting worldwide linguistic diversity.

Meta has taken up the challenge, unveiling its latest AI speech model – which can identify more than 4,000 spoken languages, and recognize and generate speech in over 1,100 of them.

The Massively Multilingual Speech (MMS) project means that Meta’s new AI is no mere ChatGPT replica. The model uses unconventional data sources to overcome speech barriers and allow individuals to communicate in their native languages without going through an exhaustive translation process.

Most excitingly, Meta has made MMS open-source, inviting researchers to learn from and expand upon the foundation it provides. This move suggests the company isn’t just intent on leading the AI language translation space, but also wants to encourage collaboration in the field.

Bringing more languages into the conversation 

Normally, speech recognition and text-to-speech AI programs need extensive training on large audio datasets, combined with meticulous transcription labels. Many endangered languages found outside industrialized nations lack datasets like this, which puts them at risk of vanishing or being excluded from translation tools.

According to Gizmochina, Meta took an interesting approach to this issue and dipped into religious texts. These texts provide diverse linguistic renditions that allow Meta to get a ‘raw’ and untapped look at lesser-known languages for text-based research.

The release of MMS as an open-source resource and research project demonstrates that Meta is devoting a lot of time and effort towards the lack of linguistic diversity in the tech field, which is frequently limited to the most widely-spoken languages.

It’s an exciting development in the AI world – and one that could bring us a lot closer to having the sort of ‘universal translators’ that currently only exist in science fiction. Imagine an earpiece that, through the power of AI, could not only translate foreign speech for you in real time but also filter out the original language so you only hear your native tongue being spoken.

As more researchers work with Meta’s MMS and more languages are included, we could see a world where assistive technology and text-to-speech allow us to speak to people regardless of their native language, sharing information much more quickly. As someone trying to teach myself a language, I’m super excited about this development – it’ll make real-life conversational practice a lot easier, and help me get to grips with the informal and colloquial words and phrases only native speakers would know.
