Apple tells developers NOT to use “virtual reality” when talking about Vision Pro

The Vision Pro goes on sale next month, and we’ve just learned that Apple has asked developers building apps for visionOS (the operating system that runs on the headset) not to refer to their apps as “AR” or “VR” experiences.

We first heard about Apple’s newest innovation in June 2023, when it was marketed as a spatial computer that blends digital content with the user’s physical surroundings. It’s also equipped with some serious silicon – Apple’s M2 chip paired with a new R1 co-processor – and runs visionOS, which Apple calls the “world’s first spatial operating system”.

At first glance, the Vision Pro certainly looks similar to existing virtual reality (VR) and augmented reality (AR) headsets, so it’s interesting that Apple is at pains to ensure it isn’t mistaken for one. The de facto ban on AR and VR references (as well as extended reality (XR) and mixed reality (MR)) was spotted in the guidelines accompanying the new update to Xcode (Apple’s suite of developer tools) that followed the announcement that the Vision Pro will be in stores in early February.


Apple lays down the law

This recommendation is laid out pretty explicitly on a new Apple Developer page that walks developers through preparing an app for submission to the App Store.

Apple also insists that developers use the “visionOS” branding with a lowercase “v” (much as it brands its flagship operating system for desktops and laptops, macOS) and refer to the device by its full name, “Apple Vision Pro.” These requirements aren’t as unexpected as Apple’s more notable instruction to avoid VR and AR, however. According to Apple, visionOS apps shouldn’t be described as VR, AR, XR, or MR apps, but as “spatial computing apps”.

It’s an interesting move for a number of reasons. Coining a new term risks confusing people, since users have to build familiarity with it and actually use it for it to stick, but it also lets Apple differentiate itself from the pack of AR/VR devices out there.

It’s also a pivot from messaging that until now has leaned on existing terms like augmented reality and virtual reality. Most of Apple’s current marketing refers to the Vision Pro as a “spatial computing” platform, but at the Worldwide Developers Conference (WWDC) in 2023, Apple’s annual event for Apple platform developers, CEO Tim Cook introduced the Vision Pro as an “entirely new AR platform.” Materially, this is mainly a marketing and branding move as Apple becomes more confident in its customers’ understanding of what the Vision Pro actually is. 9to5Mac reports that Apple engineers referred to visionOS as xrOS in the run-up to the device’s official announcement.


Apple charts its own course

This pointed effort to stand apart from its competitors is an understandable move from Apple, considering that other tech giants have already attempted to dominate this space.

Meta, Facebook and Instagram’s parent company, is one of the most notable examples – you might have a not-so-distant memory of a certain “metaverse”. The metaverse met a reception most would call lukewarm, even at its peak, and Apple is making a bold attempt to carve out its own association in people’s minds, with Apple’s marketing chief Greg Joswiak dismissing the word “metaverse” as one he’ll “never use”, according to 9to5Mac.

I enjoy watching Apple make bolder moves into existing markets, because that’s often when new industry standards emerge – which is always exciting, no matter whether you want to call it AR, VR, or spatial computing.


New Apple Vision Pro video gives us a taste of escaping to its virtual worlds

The promise of Apple’s Vision Pro headset – or any of the best virtual reality headsets, for that matter – is that it can transport you to another world, at least for a while. Now we’ve been given a preview of how Apple’s device will do this in a whole new way.

That’s because the M1Astra account on X (formerly known as Twitter) has begun posting videos showing the Vision Pro’s Yosemite Environment in action, complete with sparkling snow drifts, imposing mountains and beautiful clear blue skies.

It looks like a gorgeous way to relax and shut out the world around you. You’ll be able to focus on the calm and tranquillity of one of the world’s most famous national parks, taking in the majestic surroundings as you move and tilt your head.

This is far from the only location included in the Vision Pro’s Environments feature – users will be able to experience settings ranging from a sun-dappled beach and a crisp autumnal scene to the dusty plains of the Moon.

Immersive environments


The Environments feature is designed not only to let you tune out the real world, but also to add a level of calm and focus to your workspace. That’s because the scenes it depicts can be used as the backdrop for a large virtual movie screen, or as a background for your apps, video calls and more.

But as shown in one video posted by M1Astra, you'll also be able to walk around in the environment. As the poster strolled through the area, sun glistened off the snow and clouds trailed across the sky, adding life and movement to the virtual world.

To activate an environment, you’ll just need to turn the Vision Pro’s Digital Crown. This toggles what you see between passthrough augmented reality and immersive virtual reality. That sounds like it should be quick and easy, but we’ll know more when we get to test out the device after it launches.

Speaking of which, Apple’s Vision Pro is still months away from hitting store shelves (the latest estimates are for a March 2024 release date), which means there’s plenty of time for more information about the Environments feature to leak out. What’s clear already, though, is that it could be a great thing to try once the headset is out in the wild.


Your Oculus Quest 2 just got better hand-tracking to boost your virtual boxing skills

Meta has released the v56 update for the Meta Quest Pro and the Oculus Quest 2, which introduces a bunch of upgrades for two of the best VR headsets out there.

With the new Quest update, Meta is rolling out Hand Tracking 2.2, which it says aims to bring responsiveness more in line with what users experience with controllers. According to Meta, Hand Tracking 2.2 will reduce the latency experienced by a typical user by 40%, with a 75% latency reduction for fast hand movements.

Meta recommends that you download the Move Fast demo app from App Lab to get a feel for what these improvements mean in practice. It looks like a simple fitness trainer in which you punch, chop and block incoming blocks while looking out over a lake dotted with cherry blossom trees. Meta has said we can expect more hand-tracking improvements when the Meta Quest 3 launches later this year. It remains to be seen whether these upgrades can keep up with the hand tracking Apple is expected to deliver with its Apple Vision Pro headset.

Another important improvement is coming just for Meta Quest Pro owners. One of the premium headset’s biggest upgrades over the Quest 2 is its display, which offers local dimming. This allows the screen to achieve deeper black levels and improved contrast, which can help a lot with immersion – dark scenes actually look dark while remaining visible. However, local dimming isn’t available in every app, so with v56 Meta is launching a local dimming experimental setting (which can be found in the Experimental menu in your headset’s Settings).

The feature is off by default, but if you turn it on you should see the benefits of local dimming in a load more titles – that is, unless a developer chooses to opt out. Just note that as with other experimental settings, you may find it isn’t quite perfect or causes some problems.

Quest 2 users aren't missing out on visual upgrades entirely though, as Meta recently announced that a Quest Super Resolution upscaling tool is coming to help developers make their games look and run better.

This month Meta is also improving the accessibility of Quest titles by introducing button mapping and live captions. Live captions can be enabled in your Quest headset’s Settings, under the Hearing section of the Accessibility menu. Once they’re turned on, you’ll see live subtitles while using the Meta Quest TV app, Explore, and the in-headset Quest Store. In the same Accessibility menu, go to the Mobility section and you’ll find an option to remap your Quest controllers – you can swap any buttons you want on the handsets to create a completely custom layout.

These accessibility settings won’t revolutionize your headset overnight, but they’re a great first step. Hopefully, we’ll see Meta introduce captioning to more apps and services, and perhaps it’ll launch custom accessible controllers like Sony’s PS5 Access controller and Microsoft’s Xbox Adaptive Controller.

New ways to stay connected 

Beyond these major upgrades, Meta is rolling out a handful of smaller improvements as part of update v56.

First, when you leave your headset charging on standby between play sessions it can smartly wake up and install updates whenever it detects that your installed software is out of date. This should help to reduce instances of you going to play a game only to find that you need to wait for ages while your headset installs a patch.

Second is the new Chats and Parties feature. Whenever you start a call in VR, a chat thread is also created with all of the call members, so you can keep in contact later; you can also now start a call from a chat thread (whether it’s a one-on-one chat or a group chat).

Third, and finally, Meta is making it easier to stream your VR gameplay to Facebook, and while you play you’ll be able to see a live chat so you can keep in contact with your viewers. While the platform isn’t many people’s first choice, this hopefully opens the door to easier real-time streaming to more popular platforms like YouTube and Twitch.


Meta Builder Bot concept happily builds virtual worlds based on voice description

The Metaverse, that immersive virtual world where Meta (née Facebook) imagines we’ll work, play, and interact with friends and family, is also where we may someday build entire worlds with nothing but our voice.

During an online AI development update delivered, in part, by Meta founder and CEO Mark Zuckerberg on Wednesday (February 23), the company offered a glimpse of Builder Bot, an AI concept that lets users build entire virtual experiences using just their voice.

Standing in what looked like a stripped-down version of Facebook’s Horizon Worlds metaverse, Zuckerberg’s and a co-worker’s avatars asked a virtual bot to add an island, some furniture, clouds, a catamaran, and even a boombox that could play real music in the environment. In the demonstration, the command phrasing was natural and the 3D virtual imagery appeared instantly, though it did look a bit like the graphics you’d find in Nintendo’s Animal Crossing: New Horizons.

The development of Builder Bot is part of a larger AI initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants.

Mark Zuckerberg’s legless avatar and Builder Bot. (Image credit: Future)

Zuckerberg explained that current technology is not yet equipped to help us explore an immersive version of the internet that will ultimately live in the Metaverse. While that will require updates across a whole range of hardware and software, Meta believes AI is the key to unlocking the advances that will lead to, as Zuckerberg put it, “a new generation of assistants that will help us explore new worlds”.

“When we’re wearing [smart] glasses, it will be the first time an AI system will be able to see the world from our perspective,” he added. A key goal is for the AI Meta is developing to see as we do and, more importantly, to learn about the world the way we do.

It’s unclear if Builder Bot will ever become a true part of the burgeoning Metaverse, but its real-time language processing and its grasp of how parts of an environment should fit together are clearly informed by Meta’s wider AI work.

Mark Zuckerberg talks AI translation (Image credit: Future)

Zuckerberg outlined a handful of other related AI projects, all of which will eventually feed into a Metaverse that can be accessed and used by anyone in the world.

These include “No Language Left Behind”, which, unlike traditional translation systems that often use English as an intermediate step, can translate directly from the source language to the target language. There’s also the very Star Trek-like “Universal Speech Translator”, which would provide instantaneous speech-to-speech translation across all languages, including those that are primarily spoken and lack a standard written form.

“AI is going to deliver that in our lifetimes,” said Zuckerberg.
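To make the direct-translation idea concrete, here’s a minimal sketch using the NLLB-200 checkpoint that Meta later released publicly on Hugging Face as part of No Language Left Behind. The model name, language codes, and generation settings are illustrative assumptions on our part, not details from the event itself.

# A minimal sketch of direct (non-English-pivoted) translation.
# Assumed checkpoint: Meta's publicly released distilled NLLB-200 model.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "facebook/nllb-200-distilled-600M"
tokenizer = AutoTokenizer.from_pretrained(MODEL, src_lang="fra_Latn")  # source: French
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

inputs = tokenizer("Le chat est assis sur le tapis.", return_tensors="pt")

# Force the decoder to start generating in Swahili: the model maps
# French -> Swahili in one step, with no intermediate English text.
target_id = tokenizer.convert_tokens_to_ids("swh_Latn")
output = model.generate(**inputs, forced_bos_token_id=target_id, max_new_tokens=40)
print(tokenizer.batch_decode(output, skip_special_tokens=True)[0])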

Mark Zuckerberg talks image abstraction (Image credit: Future)

Meta is also investing heavily in self-supervised learning (SSL) to build human-like cognition into AI systems. Instead of being trained on mountains of labeled examples, the system is fed raw data and asked to predict the parts that have been hidden from it. Eventually, the AI learns how to build abstract representations.

An AI that can understand abstraction could complete an image just from a few pieces of visual information, or generate the next frame of a video it's never seen. It could also build a visually pleasing virtual world with only your words to guide it.
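To illustrate the idea, here’s a toy sketch of self-supervised masked prediction in PyTorch. The tiny architecture, mask ratio, and random stand-in data are illustrative assumptions, not Meta’s actual training setup.

# Toy masked-prediction training loop: hide most of each input and
# train the model to reconstruct the hidden parts from what remains.
import torch
import torch.nn as nn

class TinyMaskedAutoencoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, 32))   # abstract representation
        self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                                     nn.Linear(128, dim))  # reconstruct raw input

    def forward(self, x, mask):
        # The model only ever sees the unmasked fraction of the input.
        return self.decoder(self.encoder(x * mask))

model = TinyMaskedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(100):
    x = torch.rand(16, 64)                      # stand-in for raw, unlabeled data
    mask = (torch.rand_like(x) > 0.75).float()  # hide ~75% of each input
    pred = model(x, mask)
    # The training signal comes from the data itself: predict what was hidden.
    loss = ((pred - x) ** 2 * (1 - mask)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()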

For those full-on freaked out by Meta’s Metaverse ambitions, Zuckerberg said that the company is building the Metaverse for everyone, and that it’s “committed to build openly and responsibly” while protecting privacy and preventing harm.

It's unlikely anyone will take his word for it, but we look forward to watching the Metaverse's development.


Huawei could drop another foldable at its MWC 2020 virtual conference on February 24

Huawei will host a virtual press conference on February 24 from Barcelona, Spain, after the GSMA decided to cancel MWC 2020 as many companies dropped out of the event due to the spread of COVID-19 (coronavirus) in many countries across the world.

Huawei will be streaming a pre-recorded launch session in which it’s expected to unveil multiple products across the PC, laptop, wearables, audio and TV categories. However, according to a recent report by Gizmochina, Huawei is also expected to announce its next foldable phone during the event. This would be a new foldable product from the company, and we’re not sure what the phone will be called – or whether it will really be unveiled on that date.

We think this foldable could be the Huawei Mate Xs, which was announced back in October 2019. Significant updates in the Mate Xs are said to include the Kirin 990 5G chipset, an improved hinge design, and a more durable screen.

The Mate Xs is confirmed to feature a similar design to the Mate X, with the same Leica-branded camera array. It’s worth noting, though, that while the Huawei Mate X debuted in 2019, its availability is still limited to China.

With the US trade ban on Huawei currently in effect, it’s unlikely that any phone it launches would have access to Google’s suite of applications, or even the Play Store. As for the other products, we believe some of them could be powered by HarmonyOS.

Previously, the Mate Xs was scheduled to launch during MWC 2020 but as the event was scrapped, Huawei seems to be going ahead with the virtual launch anyway. We'll know more about the upcoming phone as we near the date of the conference on February 24.
