Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed that Alexa will be getting a major upgrade, as the company plans to build a new large language model (LLM) into the voice assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There is more to the Alexa update than the LLM, as the assistant will also be receiving a raft of new features. Below is a list of the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s Soundcloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like next year when the update launches. You can hear the new voice enunciate far more clearly, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail.

You can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day, and you won’t need a smartphone to configure them – it can all be done on the fly.

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more than] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.

TechRadar – All the latest technology news

Microsoft quietly reveals Windows 11’s next big update could be about to arrive

If you were wondering when Windows 11’s big upgrade for this year will turn up, the answer is soon, with Microsoft now making the final preparations to deploy the 23H2 update – with a revelation apparently imminent.

As Windows Latest tells us, Microsoft just shipped a ‘Windows Configuration Update’ which is readying the toggle to allow users to select ‘Get the latest updates as soon as they’re available’ and be first in line to receive the 23H2 update.

Note that nothing is actually happening yet, just that this is a piece of necessary groundwork (confirmed via an internal document from Microsoft, we’re told) ahead of the rollout of the Windows 11 23H2 update.

Okay, so when is the 23H2 update actually going to turn up? Well, Windows Latest has heard further chatter from sources that indicates Microsoft is going to announce the upgrade at an event later this week.

That would be the ‘special event’ Microsoft revealed a while back, taking place in New York on September 21 (Thursday). As well as the expected Surface hardware launches, we will also evidently get our first tease of the 23H2 update, at least in theory.


Analysis: Copilot on the horizon

An announcement this week makes sense to us, ahead of a broader rollout that’ll be coming soon enough.

As Windows Latest further points out, the 23H2 update will likely become available next month – at least in limited form. This means those who have ticked that toggle to get updates as soon as possible may receive it in October – at least some of those folks, in the usual phased deployment – before that wider rollout kicks off in November, and everyone gets the new features contained within the upgrade.

In theory, that means Windows Copilot, though we suspect the initial incarnation of the AI assistant is still going to be pretty limited. (And we do wonder why Microsoft isn’t going to keep on baking it until next year, but that’s a whole other argument – it seems like with AI, everything has to be done in quite the rush).

It’s also worth bearing in mind that if you’re still on the original version of Windows 11, 21H2, you’ll need to upgrade anyway – as support for that runs out on October 10, 2023. PCs on 21H2 are being force-upgraded to 22H2 right now, although you’ll pretty much be able to skip straight to 23H2 after that, should you wish.


The Meta Quest 3 feature I was most excited about might come at a price

The Meta Quest 3 launch event is less than a month away, and excitement for the new VR headset is reaching boiling point. But if this leak is correct, the feature I was most excited about might require a pricey add-on.

Ahead of the Oculus Quest 2 successor’s reveal, the online retailer UnboundXR.eu has seemingly posted prices for several Quest 3 accessories. These include a carrying case for €79.99 (around $86 / £68 / AU$134), an Elite Strap with Battery for €149.99 ($161 / £128 / AU$252), and a Silicone Face Interface for €49.99 ($54 / £42 / AU$84). They were spotted by u/Fredricko on Reddit, but the store pages have since been hidden.

The most disappointing listing to me is the Meta Quest 3 Charging Dock for €149.99 ($161 / £128 / AU$252).

A different Meta Quest 3 leak (courtesy of a US Federal Communications Commission filing) had already suggested the new gadget would be getting a charging dock – my favorite Meta Quest Pro feature. Thanks to this peripheral, I’ve never gone to wear my Quest Pro and found it out of charge – something I can’t say about my Quest 2. The dock also makes it easy to keep the headset and controllers powered up without needing to use multiple power outlets for charging – an issue with headsets such as the HTC Vive XR Elite, which requires three outlets for charging instead of one.

The Meta Quest 3 and its controllers are sat in a white plastic dock in an official looking image

The leaked Quest 3 dock. (Image credit: Meta)

Most importantly, this dock was included in the price of the Meta Quest Pro – which was $1,500 / £1,500 / AU$2,450 at launch and is now $999.99 / £999.99 / AU$1,729.99. According to Meta, the cheapest Meta Quest 3 will be $499 / £499 / AU$829 / €499, so I was already a little worried that the dock wouldn’t come with the cheaper headset – forcing us to pay a little extra for its advantages. What I didn’t expect, however, was that the dock might be roughly a third of the price of the new machine, as this leak has suggested.

While these leaks come from a semi-official source – a Reddit user claims UnboundXR has said the prices are from Meta directly – it’s still worth taking the information with a pinch of salt. They could be best-guess placeholder prices while the store builds the product pages ahead of the Quest 3 launch later this year. What’s more, the peripherals UnboundXR listed might still come packaged with the headset, with the listings here merely being for replacement parts. We won’t know how pricey the add-ons really are until the headset launches at Meta Connect 2023.

Out with the old

If the price of these XR peripherals has got you down, I’m sorry to say the bad news doesn’t stop there. According to separate leaks, the Quest 3 may not be compatible with your Quest 2 accessories – forcing you to pay for all-new add-ons if you want them.

This is with respect to the headset strap: @ZGFTECH on X (formerly Twitter) posted a picture seemingly showing a side-by-side of the Quest 3 strap and the Quest 2 Elite Strap, with the two having pretty different designs – suggesting the old strap will be incompatible with the new hardware. I’m still holding out hope, however, that my Quest Pro charging dock will be compatible with the Quest 3, though given the new dock’s wildly different design, I’m not holding my breath.

Admittedly this shouldn’t be entirely unexpected – it’s not unheard of for tech peripherals to be exclusive to specific devices. But it’s something to keep in mind if you’re looking to upgrade to Meta's latest VR hardware.


WhatsApp is about to get its first AI trick – and it could be just the start

WhatsApp is taking its first steps into the world of artificial intelligence, as a recent Android beta introduces an AI-powered sticker generation tool.

Revealed in a new report from WABetaInfo, a Create button will show up in chats whenever some app testers open the sticker tab in the text box. Tapping Create launches a mini-generative AI engine with a description bar at the top asking you to enter a prompt. Upon inputting said prompt, the tool will create a set of stickers according to your specifications that users can then share in a conversation. As an example, WABetaInfo told WhatsApp to make a sticker featuring a laughing cat sitting on top of a skateboard, and sure enough, it did exactly as instructed. 

WhatsApp sticker generator

(Image credit: WABetaInfo)

It’s unknown which LLM (large language model) is fueling WhatsApp’s sticker generator. WABetaInfo claims it uses a “secure technology offered by Meta.” Android Police, on the other hand, states “given its simplicity” it could be “using Dall-E or something similar.”

Availability

You can try out the AI tool yourself by joining the Google Play Beta Program and then installing WhatsApp beta version 2.23.17.14, although it’s also possible to get it through the 2.23.17.13 update. Be aware the sticker generator is only available to a very small group of people, so there’s a chance you won’t get it. However, WABetaInfo claims the update will be “rolling out to more users over the coming weeks,” so keep an eye out for the patch when it arrives. There’s no word on an iOS version yet.

Obviously, this is still a work in progress. WABetaInfo says if the AI outputs something that is “inappropriate or harmful, you can report it to Meta.” The report goes on to state that “AI stickers are easily recognizable” explaining recipients “may understand when [a drawing] has been generated”. The wording here is rather confusing. We believe WABetaInfo is saying AI content may have noticeable glitches or anomalies. Unfortunately, since we didn’t get access to the new feature, we can’t say for sure if generated content has any flaws.

Start of an AI future

We do believe this is just the start of Meta implementing AI across its platforms. The company is already working on sticker generators for Instagram and Messenger, but they’re seemingly still under development. So what will the future bring? It’s hard to say. It would, however, be cool to see Meta finally add its Make-A-Scene tool to WhatsApp.

It’s essentially the company’s own take on an image generator, “but with a bigger emphasis on creating artistic pieces.” We could see this being added to WhatsApp as a fun game for friends or family to play. There’s also MusicGen for crafting musical compositions, although that may be better suited for Instagram.

Either way, this WhatsApp beta feels like Meta has pushed the first domino of what could be a string of new AI-powered features coming to its apps.


Microsoft hasn’t forgotten about Windows 10, as a vital fix for game crashes finally arrives

Windows 10 gamers have got a reason to celebrate with the latest preview update for the OS, which comes with an important fix for a nasty gaming-related crash, and other cures besides.

The problem affects PC games and relates to Timeout Detection and Recovery (TDR) errors popping up, either causing a crash or, in more extreme cases, locking up the system entirely.

As you may have seen, the fix for this was applied to Windows 11 in the Moment 3 update – it was first spotted in the preview of that patch which emerged late in June.

The good news for Windows 10 users is that the fix is in KB5028244 (build 19045.3271 for Windows 10 22H2), which again is a preview patch (an optional download). This means the full (polished) fix will be available in August’s cumulative update for Windows 10, and that’s only a couple of weeks away now.

In the release notes for the patch, Microsoft observes: “This update addresses an issue that might affect your computer when you are playing a game. Timeout Detection and Recovery (TDR) errors might occur.”

On top of this, there are fixes for a bug that prevents some VPN apps from making a successful connection, and a glitch that means when a PC comes back from sleep, certain display or audio devices go missing in action.

Furthermore, there’s the resolution of a problem with Windows 10 where a full-screen search can’t be closed (and prevents any further action from being taken with the Start menu), and a raft of other tweaks and fixes.


Analysis: A welcome fix, albeit slightly late

There are some important cures here, then, as those mentioned bugs are quite a pain for those affected.

PC gamers on Windows 10 – the vast majority still – were particularly miffed when Windows 11 got a solution for the TDR crashes in June, with Microsoft leaving them in the lurch. And with no mention of Windows 10 back at the time, some gamers were even talking about this being a reason to upgrade to Windows 11 – that’s how annoyed some folks are by this one.

As one Reddit user put it: “Windows 10 TDR errors have been the bane on [sic] my life.”

At any rate, the fix is now here, and hopefully it’ll prove effective on Windows 10. Of course, right now it’s still testing as an optional update, so you’ll have to manually grab the patch via Windows Update, and there may still be problems with it. That said, those affected by TDR crashes might be so keen to get rid of them that any risk of side effects elsewhere may seem a small price to pay.

Whatever the case, as mentioned, the full fix should be coming in the cumulative update for Windows 10 next month (assuming no problems are encountered in this final testing phase).

Clearly, Windows 11 has priority as Microsoft develops and tinkers with its desktop operating systems, but it feels an odd situation where two-thirds of gamers are still on Windows 10, and are getting the short end of the stick with fixes like this.


Bing AI chatbot is about to get two much-wanted features

Bing AI is getting a couple of the most-wanted features folks have been badgering Microsoft for: image search is rolling out to everyone imminently, and dark mode shouldn’t be too far behind it.

These nuggets of info come from Mikhail Parakhin, Microsoft’s head of Advertising and Web Services, who shared the details in a couple of tweets.

We’re told that multimodal/image understanding is rolling out to everyone, meaning the ability to drop an image into the chatbot and have it identify the photo (whatever it may be – a famous building, for example).

Also known as Visual Search (or Bing Vision), Microsoft just penned a blog post on this noting that it’s “rolling out now for consumers on Bing desktop and the mobile app”.

As you can see from Parakhin’s tweet, Visual Search should be fully rolled out as of today, so you should see the feature soon, if you don’t have it already.

In the replies to the above highlighted Twitter conversation, Parakhin further tweeted about a second piece of functionality for Bing AI that folks have been clamoring for with even greater eagerness than image searching, in some cases.

That would be dark mode, and we’re told that this capability should arrive for Bing AI in a “couple of weeks”, so hopefully pretty soon indeed.


Analysis: Dark times are coming – or maybe already here?

There has been a lot of prodding and poking of Microsoft about providing a dark mode for Bing AI, so it’s great to see this arrive. Interestingly, some users are already reporting that they have dark mode – so perhaps we can expect this very soon for the chatbot, hot on the heels of the full rollout of Visual Search.

Microsoft is making fast progress with its Bing AI, with various nifty bits of functionality coming in at a good pace. Another much-requested feature that’s due to arrive in the near future is a ‘no search’ option that’ll come in handy in certain situations. (This forces an answer direct from the AI, without it scraping data from the web as part of its reply to a query).

Bing AI needs Microsoft to continue driving forward, mind you, as Bard, the rival AI from Google, might have got off to a poor start, but it’s rapidly making up ground with new features now. With Bard set to get extensions brought into the mix soon, there may be some defectors to Google’s AI – something Microsoft will clearly be desperate to avoid.

However, what Microsoft needs to be careful about, of course, is annoying folks by doing its usual badgering tricks in Windows 11 to try and get people to use the AI (and other services for that matter).


ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future

Is ChatGPT old news already? It seems impossible, with the explosion of AI popularity seeping into every aspect of our lives – whether it’s digital masterpieces forged with the best AI art generators or helping us with our online shopping.

But despite being the leader in the AI arms race – and powering Microsoft’s Bing AI – it looks like ChatGPT might be losing momentum. According to SimilarWeb, traffic to OpenAI’s ChatGPT site dropped by almost 10% compared to last month, while metrics from Sensor Tower also demonstrated that downloads of the iOS app are in decline too.

As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining on social media and OpenAI’s own forums about a dip in output quality from the chatbot.

The general consensus was that GPT-4 was able to generate outputs faster, but at a lower level of quality. Peter Yang, a product lead for Roblox, took to Twitter to decry the bot’s recent work, claiming that “the quality seems worse”. One forum user said the recent GPT-4 experience felt “like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup”.

Why is GPT-4 suddenly struggling?

Some users were even harsher, calling the bot “dumber” and “lazier” than before, with a lengthy thread on OpenAI’s forums filled with all manner of complaints. One user, ‘bitbytebit’, described it as “totally horrible now” and “braindead vs. before”.

According to users, there was a point a few weeks ago where GPT-4 became massively faster – but at a cost of performance. The AI community has speculated that this could be due to a shift in OpenAI’s design ethos behind the more powerful machine learning model – namely, breaking it up into multiple smaller models trained in specific areas, which can act in tandem to provide the same end result while being cheaper for OpenAI to run.
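If that speculation is accurate, the idea resembles a “mixture of experts” setup: a router dispatches each query to a smaller specialized model instead of one giant one. The following is a deliberately toy Python sketch of that routing concept – the expert names, keywords, and routing rule are all invented for illustration, and none of this reflects OpenAI’s actual implementation.

```python
# Purely illustrative: a toy "router" that sends each prompt to a smaller
# specialized handler, mimicking the rumored multi-model design. The experts
# and their keyword triggers are invented for demonstration only.

def code_expert(prompt):
    return f"[code model] handling: {prompt}"

def math_expert(prompt):
    return f"[math model] handling: {prompt}"

def general_expert(prompt):
    return f"[general model] handling: {prompt}"

# Each specialist is paired with the keywords that should trigger it.
EXPERTS = {
    "code": (code_expert, ("python", "function", "bug")),
    "math": (math_expert, ("integral", "equation", "solve")),
}

def route(prompt):
    """Send the prompt to the first matching specialist, else fall back."""
    lowered = prompt.lower()
    for _name, (expert, keywords) in EXPERTS.items():
        if any(keyword in lowered for keyword in keywords):
            return expert(prompt)
    return general_expert(prompt)

print(route("Fix this Python function"))     # dispatched to the code expert
print(route("What's the capital of Peru?"))  # falls back to the general model
```

The cost argument is that several small specialists are cheaper to run than one monolithic model, at the possible price of the quality dips users describe.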

OpenAI has yet to officially confirm this is the case, as there has been no mention of such a major change to the way GPT-4 works. It’s a credible explanation according to industry experts like Sharon Zhou, CEO of AI-building company Lamini, who described the multi-model idea as the “natural next step” in developing GPT-4.

AIs eating AIs

However, there’s another pressing problem with ChatGPT that some users suspect could be the cause of the recent drop in performance – an issue that the AI industry seems largely unprepared to tackle.

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online – including an unwanted torrent of AI-authored novels on Kindle Unlimited – means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.

An iPhone screen showing the OpenAI ChatGPT download page on the App Store

ChatGPT app downloads have slowed, indicating a decrease in overall public interest. (Image credit: Future)

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent – especially since there’s yet to be any meaningful demonstration of how AI models might accurately differentiate between ‘real’ information and AI-generated content.
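To make the feedback-loop mechanism concrete, here is a deliberately crude Python model of it. All the numbers (the AI-generated share of training data, the per-generation quality discount) are invented for illustration – real “model collapse” dynamics are far more complicated.

```python
# A deliberately simplified model of the feedback loop described above.
# Human-made data is pegged at quality 1.0; AI-generated data inherits the
# previous generation's quality minus a discount. All parameters are invented.

def simulate(generations, ai_fraction, decay=0.2, quality=1.0):
    """Track relative output quality as AI-generated data re-enters training."""
    history = [quality]
    for _ in range(generations):
        quality = (1 - ai_fraction) * 1.0 + ai_fraction * quality * (1 - decay)
        history.append(quality)
    return history

for fraction in (0.1, 0.5, 0.9):
    final = simulate(10, fraction)[-1]
    print(f"AI-generated share {fraction:.0%}: relative quality {final:.2f}")
```

The toy model settles at a lower equilibrium quality as the AI-generated share of training data grows, which is the gist of the cannibalism worry.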

Discussions around AI have largely focused on the risks it poses to society – for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed ‘too dangerous’ to be released. But content cannibalization is more of a risk to the future of AI itself; something that threatens to ruin the functionality of tools such as ChatGPT, which depend upon original human-made materials in order to learn and generate content.

Do you use ChatGPT or GPT-4? If you do, have you felt that there’s been a drop in quality recently, or have you simply lost interest in the chatbot? I’d love to hear from you on Twitter. With so many competitors now springing up, is it possible that OpenAI’s dominance might be coming to an end? 


Windows 11 finally gets 3D-style emoji (about 2 years too late for some folks)

Windows 11 has a new preview build and it introduces 3D emoji, plus it takes an important first step for change on the security front.

You may recall that 3D emoji were promised by Microsoft in the past – the distant past, in fact, since this was something that was supposed to launch with Windows 11 – but they’re finally here, putting paid to what was quite the controversy almost two years ago (we’ll come back to ‘emojigate’ shortly).

Build 25905 for the Canary channel gives us some smart-looking emoji that are nicely fleshed out with a 3D-like appearance.

As Microsoft notes: “These emoji use gradients to bring the design style that our customers have been asking for.”

Windows 11 3D Emoji

(Image credit: Microsoft)

Elsewhere in this preview build, security has been tightened thanks to the introduction of Rust in the Windows Kernel. Not rust as in metal-gone-bad, but Rust as in the coding language which offers advantages over C++ (the currently used programming language), notably in terms of memory safety (and defending against exploits that take this route).

At the moment, the initial steps with Rust are just a “small trial” as Microsoft describes it, but expect the Windows 11 kernel to get rustier as time goes on.

It’s also worth noting that the Microsoft Store now has an AI Hub, and not just for the Canary channel, but all testers who are running version 22306.1401.x.x or better of the store.

We discussed this in detail yesterday, but the idea is for Microsoft to highlight some top apps that make good use of AI (and more besides, eventually).

Check out the full details of everything going on in this new preview build by reading through Microsoft’s blog post on the release.


Analysis: Fiery feelings over emoji

What’s all this about ‘emojigate’ then? Well, as mentioned, Microsoft did tease 3D-like emoji before the release of Windows 11, promising that they’d arrive with the OS. However, when Windows 11 launched in October 2021, the redesigned emoji looked nothing like the promised 3D-style affairs, and were simply flat icons.

That caused quite an outpouring of rage on social media. While emoji may seem like a relatively unimportant facet of an operating system to some folks, to others, they’re a key part of the experience and communicating with friends. More to the point, people don’t like feeling duped, and indeed at the time, some threw accusations at Microsoft of ‘scamming’ them.

Over the top, yes, but that’s how folks can react when they feel they’ve been lied to in some way. Microsoft explained that the wrong graphics had been used for teasing the feature, and there had been some kind of a mix-up, but that didn’t sit well with some Windows 11 users back at the time, either.

At any rate, Brandon LeBlanc, Senior Program Manager at Microsoft, told the disgruntled users that the 3D emoji could arrive in Windows 11 at a later date – and they finally have. At least in testing, anyway, and they should be in the release version of Windows 11 later this year.


Apple Vision Pro price, release date and everything we know about the VR headset

The Apple Vision Pro is one of the biggest tech announcements of recent years – and with the dust still settling on the tech giant's first AR/VR headset, many questions remain. What's the Vision Pro's actual release date? What do we know about its specs? And how will you use it if you wear glasses?

Apple Vision Pro specs

– Mixed reality headset
– Dual M2 and R1 chip setup
– 4K resolution per eye
– No controllers, uses hand tracking and voice inputs
– External battery pack
– Two-hour battery life
– Starts at $3,499 (around £2,800 / AU$5,300)
– Runs on visionOS

We've rounded up the answers to those questions and more in this guide to everything we know (so far) about the Apple Vision Pro. You can also read our hands-on Apple Vision Pro review for a more experiential sense of what it's like to wear the headset. 

Now that visionOS, which is the headset's operating system, is in the hands of developers, a bigger picture is forming of exactly how this “spatial computer” (as Apple calls it) will work and fit into our lives.

Still, actually using the Vision Pro as a next-gen Mac, TV, FaceTime companion and more is a long way off. It'll cost $3,499 (around £2,800 / AU$5,300) when it arrives “early next year”, and that'll only be in the US initially.

Clearly, the Vision Pro is a first-generation, long-term platform that is going to take a long time to reach fruition. But the journey there is definitely going to be fun as more of its mysteries are uncovered – so here's everything we know about Apple's AR/VR headset so far.

Apple Vision Pro latest news

Apple Vision Pro: what you need to know

Vision Pro release date: Sometime “early next year” according to Apple.

Vision Pro headset price: Starts at $3,499 (around £2,800 / AU$5,300).

Vision Pro headset specs: Apple's headset uses two chipsets, an M2 and a new R1 to handle regular software and its XR capabilities respectively. It also has dual 4K displays.

Vision Pro headset design: The Vision Pro has a similar design to other VR headsets, with a front panel that covers your eyes, and an elastic strap. One change from the norm is that it has an outer display to show the wearer's eyes.

Vision Pro headset battery life: It lasts for up to two hours on a full charge using the official external battery pack.

Vision Pro headset controllers: There are no controllers – instead you'll use your eyes, hands, and voice to control its visionOS software.

Apple Vision Pro: price and release date

Apple says the Vision Pro will “start” at $3,499 (that's around £2,800 / AU$5,300). That wording suggests that more expensive options will be available, but right now we don't know what those higher-priced headsets might offer over the standard model.

As for the Vision Pro's release date, Apple has only given a vague “early next year.” That's later than we'd been expecting, with leaks suggesting it would launch in the next few months – perhaps around the same time as the iPhone 15 – but that isn't the case. As 2024 gets closer we expect Apple will give us an update on when we'll be able to strap a Vision Pro onto our heads.

Interestingly, Apple's website only mentions a US release. Apple has yet to confirm if the Vision Pro will launch in regions outside of the US, and when that'll happen.

Apple Vision Pro: design

The Apple Vision Pro shares a lot of similarities with the current crop of best VR headsets. It has a large face panel that covers your eyes, and is secured to your head with a strap made from elasticated fabric, plastic and padding.

But rather than the similarities, let's focus on the Vision Pro's unique design features.

The biggest difference VR veterans will notice is that the Vision Pro doesn't have a battery; instead, it relies on an external battery pack. This is a sort of evolution of the HTC Vive XR Elite's design, which allowed the headset to go from being a headset with a battery in its strap to a battery-less pair of glasses that relies on external power.

Vision Pro

(Image credit: Apple)

This battery pack will provide roughly two hours of use on a full charge according to Apple, and is small enough to fit in the wearer's pocket. It'll connect to the headset via a cable, which is a tad unseemly by Apple’s usual design standards, but what this choice lacks in style it should make up for in comfort. 

We found the Meta Quest Pro to be really comfy, but wearing it for extended periods of time can put a strain on your neck – just ask our writer who wore the Quest Pro for work for a whole week.

Apple Vision Pro VR headset's battery pack on a table

The Vision Pro’s battery pack (Image credit: Future / Lance Ulanoff)

If you buy a Vision Pro you'll find that your box lacks something needed for other VR headsets: controllers. That's because the Vision Pro relies solely on tracking your hand and eye movements, as well as voice inputs, to control its apps and experiences. It'll pick up these inputs using its array of 12 cameras, five sensors, and six microphones.

The last design detail of note is the Vision Pro's EyeSight display. It looks pretty odd, maybe even a bit creepy, but we're reserving judgment until we've had a chance to try it out.

Apple Vision Pro's EyeSight feature showing you the wearer's eyes.

EyeSight in action (Image credit: Apple)

When a Vision Pro wearer is using AR features and can see the real world, nearby people will see their eyes 'through' the headset's front panel (it's actually a screen showing a camera view of the eyes, but based on Apple's images you might be convinced it's a simple pane of glass). If they're fully immersed in an experience, onlookers will instead see a cloud of color to signify that they're exploring another world.

Apple Vision Pro: specs and features

As the rumors had suggested, the Apple Vision Pro headset will come with some impressive specs to justify its sky-high price.

First, the Vision Pro will use two chipsets to power its experiences. One is an M2 chip, the same one you'll find in the Apple iPad Pro (2022), and some of the best MacBooks and Macs.

This powerful processor will handle the apps and software you're running on the Vision Pro. Meanwhile, the R1 chipset will deal with the mixed reality side of things, processing the immersive elements of the Vision Pro that turn it from a glorified wearable Mac display to an immersive “spatial computer”.

Apple Vision Pro

(Image credit: Apple)

On top of these chips, the Vision Pro has crisp 4K micro-OLED displays – one per eye – that together offer roughly 23 million pixels. According to Apple the Vision Pro's display fits 64 pixels into the same space that the iPhone's screen fits one single pixel, and this could eliminate the annoying screen-door effect that affects other VR headsets.

This effect occurs when you're close enough to a screen to see the gaps between the pixels in the array; the higher the pixel density, the closer you can get before the screen-door effect becomes noticeable.
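To put that density claim in perspective, here's a rough back-of-the-envelope calculation. The figures are our assumptions for illustration – a typical recent iPhone panel is around 460 pixels per inch, and Apple hasn't published the Vision Pro's exact pixel density:

```python
import math

# Assumed figures for illustration only; Apple hasn't published exact
# panel dimensions or densities for the Vision Pro.
IPHONE_PPI = 460               # typical Super Retina XDR density
PIXELS_PER_IPHONE_PIXEL = 64   # Apple's claim for the Vision Pro display

# 64 pixels in the same *area* as one iPhone pixel means sqrt(64) = 8x
# the density along each axis, since area scales with the square of
# linear density.
linear_factor = math.sqrt(PIXELS_PER_IPHONE_PIXEL)
vision_pro_ppi = IPHONE_PPI * linear_factor

print(linear_factor)   # 8.0
print(vision_pro_ppi)  # 3680.0
```

In other words, if the claim holds, the panel would be packing several thousand pixels per inch – which is why the gaps between pixels should stay invisible even with lenses magnifying the screen right in front of your eyes.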

These components will allow you to run an array of Apple software through Apple's new visionOS platform (not xrOS as was rumored). This includes immersive photos and videos, custom-made Disney Plus experiences, and productivity apps like Keynote.

You'll also be able to play over 100 Apple Arcade titles on a virtual screen that's like your own private movie theatre.

Apple Vision OS app screen

(Image credit: Apple)

You'll be able to connect your Vision Pro headset to a Mac via Bluetooth. When using this feature you'll be able to access your Mac apps and see your screen on a large immersive display, and it'll sit alongside other Vision Pro apps you're using. Apple says this setup will help you be more productive than you've ever been.

With the power of the M2 chip, Apple's headset should be able to run most Mac apps natively – Final Cut Pro and Logic Pro recently arrived on M2 iPads. For now, however, Apple hasn't revealed if these and other apps will be available natively on the Vision Pro, or if you'll need a Mac to unlock the headset's full potential. We expect these details will be revealed nearer to the headset's 2024 launch.

Apple Vision Pro: your questions answered

We've answered all of the basic questions about the Apple Vision Pro's release date, price, specs and more above, but you may understandably still have some more specific or broader ones. 

To help, we've taken all of the most popular Vision Pro questions from Google and social media and answered them in a nutshell below.

Apple Vision Pro

(Image credit: Apple)

What is the point of Apple Vision Pro?

Apple says that the point of the Vision Pro is to introduce a “new era of spatial computing”. It’s a standalone, wearable computer that aims to deliver new experiences for watching TV, working, reliving digital memories, and remotely collaborating with people in apps like FaceTime.

But it’s still early days. And there arguably isn’t yet a single ‘point’ to the Vision Pro. At launch, it’ll be able to do things like give you a huge, portable monitor for your Apple laptop, or create a home cinematic experience in apps like Disney Plus. However, like the first Apple Watch, it’ll be up to developers and users to define the big new use cases for the Vision Pro.

Apple Vision Pro

(Image credit: Apple)

How much does an Apple Vision Pro cost?

The Apple Vision Pro will cost $3,499 when it goes on sale in the US “early next year”. It won’t be available in other countries until “later next year”, but that price converts to around £2,815 / AU$5,290.

This makes the Vision Pro a lot more expensive than rival headsets. The Meta Quest Pro was recently given a price drop to $999 / £999 / AU$1,729, and cheaper, less capable VR-only headsets, like the incoming Meta Quest 3, are available for $499 / £499 / AU$829. But none of those rivals offers a direct equivalent to the kind of technology in the Vision Pro.
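Those non-US figures are straight currency conversions at a snapshot exchange rate, not official regional pricing. As a quick sketch – the rates below are assumed values chosen to roughly reproduce the figures above, not live data:

```python
US_PRICE = 3499  # the Vision Pro's confirmed US price in dollars

# Hypothetical exchange rates that approximate the article's rough
# conversions; real rates fluctuate daily, and official regional
# prices (once announced) rarely match a straight conversion.
USD_TO_GBP = 0.8045
USD_TO_AUD = 1.5119

gbp = round(US_PRICE * USD_TO_GBP)  # ~2815
aud = round(US_PRICE * USD_TO_AUD)  # ~5290
print(gbp, aud)
```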

Apple Vision Pro

(Image credit: Apple)

Does Apple Vision Pro work with glasses?

The Apple Vision Pro does work for those who wear glasses, although there are some things to be aware of. You won’t wear your glasses with the headset; instead, you’ll need to buy separate optical inserts that attach magnetically to the Vision Pro’s lenses. Apple hasn’t yet announced pricing for these, currently only stating that “vision correction accessories are sold separately”.

Apple says it’ll offer a range of vision correction strengths that won’t compromise the display quality or the headset’s eye-tracking performance. But it also warns that “not all prescriptions are supported” and that a “valid prescription is required”. So while the Vision Pro does work well for glasses wearers, there are some potential downsides.

A virtual display hovering above an Apple MacBook

(Image credit: Apple)

Is Apple Vision Pro a standalone device?

The Apple Vision Pro is a standalone device with its own visionOS operating system and doesn’t need an iPhone or MacBook to run. This is why Apple calls the headset a “spatial computer”.

That said, having an iPhone or MacBook alongside a Vision Pro will bring some benefits. For example, to create a personalized spatial audio profile for the headset’s audio pods, you’ll need an iPhone with a TrueDepth camera. 

The Vision Pro will also give MacBook owners a large virtual display that hovers above their real screen, an experience that won’t be available on other laptops. So while you don’t need any other Apple devices to use the Vision Pro, owning other Apple-made tech will help maximize the experience.

The merging of an Apple TV menu and a real room in the Apple Vision Pro

(Image credit: Apple)

Is Apple Vision Pro VR or AR?

The Apple Vision Pro offers both VR and AR experiences, even if Apple doesn’t use those terms to describe them. Instead, Apple says it creates “spatial experiences” that “blend the digital and physical worlds”. You can control how much you see of both using its Digital Crown on the side.

Turning the Digital Crown lets you control how immersed you are in a particular app. This reveals the real world behind an app’s digital overlays, or extends what Apple calls ‘environments’. These spread across and beyond your physical room, for example giving you a view over a virtual lake.

While some of the examples shown by Apple look like traditional VR, the majority err towards augmented reality, combining your real-world environment (captured by the Vision Pro’s full-color passthrough system) with its digital overlays.

Apple Vision Pro

(Image credit: Apple)

Is Apple Vision Pro see through?

The front of the Apple Vision Pro isn’t see-through or fully transparent, even though a feature called EyeSight creates that impression. The front of the headset is made from laminated glass, but behind that glass is an outward-facing OLED screen.

It’s this screen that will show a real-time view of your eyes (captured by the cameras inside the headset) to the outside world if you’re in augmented reality mode. If you’re enjoying a fully immersive, VR-like experience like watching a movie, this screen will instead show a Siri-like graphic.

To help you look out through the headset, the Apple Vision Pro has a passthrough system that uses cameras on the outside of the goggles to give you a real-time, color feed of your environment. So while the headset feels like it’s see-through, your view of the real world is digital. 

The Apple Vision Pro headset on a black background

(Image credit: Apple)

How does Vision Pro work?

The Apple Vision Pro uses a combination of cameras, sensors, and microphones to create a controller-free computing experience that you control using your hands, eyes, and voice.

The headset’s forward-facing cameras capture the real world in front of you so it can be shown on its two internal displays (Apple says these give you “more pixels than a 4K TV for each eye”). The Vision Pro’s side and downward-facing cameras also track your hand movements, so you can control it with your hands – for example, touching your thumb and forefinger together to click.

But the really unique thing about the Vision Pro is its eye-tracking, which is powered by a group of infrared cameras and LED illuminators on the inside of the headset. This means you can simply look at app icons or even smaller details to highlight them, then use your fingers or voice to type.

TechRadar – All the latest technology news

Read More

6 things we’ve learned about the Apple Vision Pro from the visionOS beta

Apple has launched its first-ever beta for visionOS – the operating system the upcoming Apple Vision Pro mixed-reality headset will use – giving us a glimpse at what its new gadget should be capable of at launch.

As explained in the Apple Developer blog post making the announcement, the launch of the visionOS SDK will give developers the chance to start working on spatial computing apps for the Vision Pro. It will also help developers understand the Vision Pro's capabilities. Even better, the SDK provides a visionOS simulator so that developers can test out their 3D interface in a number of room layouts with various lighting conditions. And those tests have already revealed a number of details about what the Vision Pro will and won’t be able to do at launch.

This is only the first beta, and users are accessing the simulator on a computer rather than a headset – so expect some changes to be made to visionOS before it officially launches. With that said, here’s what we’ve learned so far about the Apple Vision Pro from the visionOS beta.

1. Visual Search is coming 

Visual Search is basically the Vision Pro’s version of Google Lens or the Visual Lookup feature found on the best iPhones and best iPads (via MacRumors).

A man wearing the Apple Vision Pro headset and pressing its shutter button to take a photo

You can use the Vision Pro to scan real-world objects and text (Image credit: Apple)

According to info found in the visionOS beta, Vision Pro wearers will be able to use the headset’s cameras to find information about an item they scan and to interact with real-world text. This includes copying and pasting the text into Vision Pro apps, translating it between 17 supported languages, and converting units (like grams to ounces, or meters to feet). This sounds pretty neat, but unless you’re wearing your Vision Pro all the time while traveling abroad or cooking from a recipe, we aren’t sure how often you’ll rely on these features.
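Apple hasn't said how Visual Search performs these conversions under the hood, but the unit math itself is straightforward. A minimal, purely illustrative sketch of that kind of lookup – none of this reflects Apple's actual implementation:

```python
# Conversion factors for the two unit pairs mentioned above.
# Illustrative sketch only, not Apple's Visual Search code.
CONVERSIONS = {
    ("grams", "ounces"): 1 / 28.349523125,  # ounces per gram
    ("meters", "feet"): 3.280839895,        # feet per meter
}

def convert(value: float, from_unit: str, to_unit: str) -> float:
    """Convert value between two supported units, or raise ValueError."""
    try:
        factor = CONVERSIONS[(from_unit, to_unit)]
    except KeyError:
        raise ValueError(f"unsupported conversion: {from_unit} -> {to_unit}")
    return value * factor

print(round(convert(500, "grams", "ounces"), 2))  # 17.64
print(round(convert(100, "meters", "feet"), 1))   # 328.1
```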

2. The OS is intuitive 

While not the flashiest feature, intuitive OS design and window management in 3D space will be crucial for the Vision Pro. The idea of having loads of software windows floating around us seems neat – it'd be like we’re a real-world Tony Stark – but if it's a pain to position them how we want, it’ll be easier to stick with a traditional PC and monitor.

Thankfully, it looks like it’s super easy to move, resize, and hide app windows in Vision Pro, as shown off by @Lascorbe on Twitter.

See more

The video also shows that you aren’t moving the apps on a fixed cylinder around you; you can take full advantage of the 3D space around you by bringing some windows closer while moving others further away – and even stacking them in front of each other if you want. While dragging a window it’ll turn translucent so you can see what’s behind it as you decide where to position it.

3. Porting iOS to visionOS is easy 

According to developers (like @lydakisg on Twitter) who have started working with visionOS, it’s incredibly easy to port iOS apps over to the new system – so many of the best iPhone apps could be available on the Vision Pro at launch.

See more

This is great news for people who were worried that the Vision Pro might not have an app library comparable to the Quest Store found on Meta’s VR headsets like the Meta Quest Pro.

The only downside is that the ported iOS apps appear in a floating window, as they would on a Mac, rather than as fully-fledged immersive experiences. So while your favorite apps can easily appear on the Vision Pro, they might not take advantage of its new tech – at least not without developers spending more time on a dedicated visionOS version.

4. Battery percentages return 

Battery percentages are a sore spot for many iPhone users. When the iPhone X was released over five years ago it changed the battery status symbol: the percentage disappeared, and only a steadily emptying battery icon remained. While this symbol gives a visual indication of how much charge your phone has left, it’s not always as clear as a number, and iPhone users constantly asked Apple to bring back battery charge percentages – which it did with iOS 16 when the iPhone 14 launched.

A woman wears the Vision pro in front of a menu showing a battery icon that has no number inside of it

The Vision Pro trailer shows a battery icon with no percentage (Image credit: Apple)

Unfortunately, a brief section of Apple’s Vision Pro intro video showed us that the Vision Pro might repeat the iPhone X’s mistake by using a battery status symbol without a number.

See more

Thankfully for fans of Apple’s more accurate battery symbol, users like @aaronp613 on Twitter have found that battery percentages do show up on Vision Pro. It’s not a massive win, but an important one for a lot of people. 

5. Apps can use unique control schemes 

The visionOS beta doesn’t just give developers tools to create their own Vision Pro apps and port their existing iOS software to the system; it also provides details, sample code, and videos showing off the kinds of projects they could build for the upcoming Apple hardware.

One such game is Happy Beam, a video of which has been shared on Twitter by @SwiftlyAlex.

See more

Happy Beam doesn’t look super interesting in and of itself – one Twitter commenter noted it looks like the sort of AR game you could play on the Nintendo 3DS – but it shows that the Vision Pro is able to recognize different hand gestures (like forming a heart) and translate them into different in-game controls.

We’ll have to wait and see how developers use these capabilities in their creations, but we can already imagine a few possible implementations. For example, rather than using button prompts you could make a scissors gesture with your hand to cut images and text from one document, then clap your hands to paste them in a new spot.
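To picture how that kind of gesture-driven command might be wired up, here's a generic dispatch-table sketch. Every name in it is invented for illustration – this is not Apple's visionOS API, just the common pattern of mapping a recognized gesture to an editing action:

```python
# Hypothetical gesture-to-command dispatch, illustrating the idea of
# mapping recognized hand gestures to editing actions. All names here
# are invented; visionOS exposes gesture input differently.
clipboard: list[str] = []
document = ["photo", "caption", "footnote"]

def cut(item: str) -> None:
    """Remove an item from the document and stash it on the clipboard."""
    document.remove(item)
    clipboard.append(item)

def paste() -> None:
    """Append everything on the clipboard to the document, then clear it."""
    document.extend(clipboard)
    clipboard.clear()

GESTURE_ACTIONS = {
    "scissors": cut,
    "clap": paste,
}

GESTURE_ACTIONS["scissors"]("caption")  # scissors gesture cuts the caption...
GESTURE_ACTIONS["clap"]()               # ...then a clap pastes it back
print(document)  # ['photo', 'footnote', 'caption']
```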

It also appears that Apple is conscious that its headset should remain accessible. As shown in the Happy Beam demo, there are alternative controls that allow Vision Pro users to rely on simpler gestures or a controller to play the game – a reminder to other developers to consider similar alternative control schemes in their software.

This gameplay video shared by @wilburwongdev on YouTube shows how the game changes when not using your hands.

6. Fitness apps are discouraged

One last tidbit was spotted not in the visionOS beta itself but in the developer guidelines for the operating system. In those guidelines, Apple says app makers should “avoid encouraging people to move too much” while immersed in the headset. The wording is a little vague, but it seems Apple is against the development of fitness apps for the Vision Pro at this time.

One notable omission from the Vision Pro reveal trailer was that there were no fitness apps featured. Many people (some of our writers included) use VR headsets for working out, or even just getting a bit active. There’s Beat Saber and Pistol Whip for more gamified workouts, or FitXR and Litesport for more traditional fitness options. These developer notes make the omission seem more intentional, suggesting fitness and activities involving a lot of movement are not in Apple’s current plan for the Vision Pro. We’ll have to wait and see if this changes when the device launches.


Want to learn more about the Vision Pro? Check this round-up of 5 features Apple may have removed from the Vision Pro before it was even out.
