Rumored Apple and Meta collaboration might make the iPhone 16 a better AI phone

Apple may augment its new Apple Intelligence artificial intelligence (AI) features with models built by Meta, according to a Wall Street Journal report. The two tech giants are reportedly discussing incorporating Meta’s generative AI services into iOS 18 and the next generation of iPhone models. 

The WSJ report cites conversations Apple has begun with most of the big names in AI, including Google (Gemini), Anthropic (Claude), and OpenAI (ChatGPT). Plans for Apple Intelligence to include free ChatGPT access and GPT-4o integration were mentioned amid the deluge of Apple Intelligence news at WWDC this year – and if a Meta collaboration is underway, that arrangement is clearly non-exclusive. 

Apple’s interest in Meta’s Llama 2 and Llama 3 large language models makes sense for both ends of any deal. Meta would get to bring its AI to the massive global network of iPhone users, while Apple could cite Meta’s AI features as another selling point for the iPhone. And while both Meta and Apple have some deals with OpenAI and its biggest backer, Microsoft, an alliance between the two might help build a competitive alternative, even as ChatGPT remains what most people first point to when they think of generative AI. 

Mutually beneficial

For Apple as a hardware platform, it's especially good to widen the available AI model choices. That way, Apple can pitch iPhones as an AI hub, switching among models depending on what people want the AI to do. Apple explicitly pointed toward that goal at WWDC this year when announcing the deal with OpenAI to provide ChatGPT on Apple products.

“We wanted to start with the best, and we think ChatGPT from OpenAI and their new 4o model represents the best choice for our users today,” Apple's senior vice president of Software Engineering Craig Federighi explained at the event. “We think ultimately, people are going to have a preference perhaps for certain models that they want to use – maybe one that’s great for creative writing, or one they prefer for coding, and so we want to enable users ultimately to bring the model of their choice and we’re going to look forward to doing integrations with models, like Google Gemini for instance, in the future.”

Any speculation on how Apple Intelligence will change thanks to Meta is premature, but the fact that talks are happening at all might surprise some. Meta’s advertising income took a beating after Apple changed its policies in 2021 to give users more control over their data. Requiring user permission before apps could track activity across other apps and websites cost Meta billions of dollars, and prompted Meta to release a method for advertisers to avoid Apple’s service fee for boosting ad posts. The stakes of those business battles are apparently no match for Apple and Meta’s anticipated AI earnings, and both now seem happy to let bygones be bygones. 


Windows 11 could make checking your phone from your PC even better – so Apple, take note for macOS Sequoia

Windows 11 could put your iPhone or Android device right into the heart of the Start menu, in a manner of speaking – or at least the Phone Link app is apparently headed this way.

That’s according to clues unearthed by MS Power User, which reported on whispers from Windows 11 testers to the effect that Phone Link is set to be made into a Start menu ‘Companion.’

If you missed the Companion panel appearing in Windows preview builds last month, it’s a floating panel that can be docked to the left or right of the Start menu. The Companions it plays host to are a bit like the Live Tiles of old: widget-style affairs that display real-time info piped through from their parent apps.

In theory, Phone Link will be one of the apps that’ll appear in the Companion panel: MS Power User took a deep dive into Phone Link’s files and found a number of code strings relating to ‘StartMenuCompanion’ settings.


Analysis: Dialing up the work on phone integration

This would appear to be the groundwork for Phone Link to become a Start menu Companion, but of course, this is just work hidden in testing right now – and we can’t take it for granted this will happen. Indeed, the Companion panel itself might be abandoned yet if Microsoft thinks better of it – only time will tell.

Given the rumors, and at least some concrete evidence that Phone Link will get this treatment, it seems more likely to happen than not, on balance. Phone Link would also be a logical and useful app to have in the Companion panel, in order to pipe notifications through from your smartphone, bringing them to your attention when you’re in the Start menu.

Phone Link has been a key part of Windows for some time now, and it’s not surprising Microsoft is pushing ahead with potential features like this – alongside work on the Cross-Device Experience Host (albeit that has stumbled of late) and other phone-related capabilities – given that Apple now has iPhone Mirroring inbound with macOS Sequoia.

Whichever way you dice it, smartphones are becoming more and more deeply integrated into desktop operating systems these days.


Google’s NotebookLM is now an even smarter assistant and better fact-checker

Google is updating its NotebookLM writing assistant with improved performance and new features, among other things. It now runs on the company’s Gemini 1.5 Pro model, the same AI that powers the Gemini Advanced chatbot. 

Thanks to this upgrade, the assistant is more contextually aware, allowing you to ask “questions about [the] images, charts, and diagrams in your source” in order to gain a better understanding. 

Although Gemini 1.5 Pro now powers NotebookLM, it’s unknown whether this means the AI will accept longer text prompts or create more detailed answers. After all, Gemini 1.5 Pro can handle context windows of up to a million tokens. We reached out to the tech giant to ask if people can expect to see support for bigger prompts.

Responses will mostly look the same, although they’ll now include inline citations in the form of encircled numbers. Clicking on one of these citations takes you directly to the supporting passage inside a source document. That way, you can double-check the material to see if NotebookLM got things right. 
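
The underlying idea is a simple lookup from each encircled number to a passage in one of your uploaded sources. Here’s a minimal sketch of that mapping – the structure and names are our own illustration, not Google’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Citation:
    number: int      # the encircled number shown inline in the response
    source_doc: str  # which uploaded source the claim came from
    passage: str     # the supporting passage the UI jumps to

def resolve(citations: list[Citation], number: int) -> Citation:
    """Return the source passage behind an inline citation number."""
    return next(c for c in citations if c.number == number)

citations = [Citation(1, "notes.pdf", "Gemini 1.5 Pro handles up to 1M tokens.")]
print(resolve(citations, 1).passage)  # prints the passage backing citation 1
```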

AI hallucinations continue to be a problem for the tech, so it’s important that people are able to fact-check outputs. When it comes to images, opening the citation causes the source picture to appear in a small window next to the text.

NotebookLM (Image credit: Google)

Upgraded sourcing

Support for information sources is expanding to include “Google Slides and web URLs” alongside PDFs, text files, and Google Docs. A new feature called Notebook Guide is being added, too: it lets you reorganize the material you enter into a specific format, like a series of FAQs or a Study Guide. It could be quite handy.

The update brings other changes as well, though they’re not included in the initial announcement. For instance, you can now have up to 50 different sources per project, and each one can be up to 500,000 words long, according to The Verge. Prior to this, users could only have five sources at once, so it’s a big step up. 

Raiza Martin, who is a senior product manager at Google Labs, also told the publication that “NotebookLM is a closed system.” This means the AI won’t perform any web searches beyond what you, the user, give it in a prompt. Every response it generates pertains only to the information it has on hand.

NotebookLM’s latest update is live now and is rolling out to “over 200 countries and territories around the world.” You can head over to the AI’s official website to try out the new features. But do keep in mind that NotebookLM is still considered experimental, and you may run into some quirks. The Verge, for instance, claimed the URL source function didn’t work in its demo. In our experience, however, the tool worked just fine.

Be sure to check out TechRadar's list of the best business laptops for 2024, if you plan on using the assistant at work.


Meta can’t stop making the Meta Quest 3’s mixed reality better with updates

June is here, and like clockwork the latest update for your Meta Quest 3 headset is ready to roll out. 

The standout upgrade for v66 is to the VR headset’s mixed reality (again) – after it was the main focus of Horizon OS v64, and got some subtle tweaks in v65 too.

We aren’t complaining, though, as this improvement looks set to make the image quality even better, with reduced image distortion in general and less of the warping effect that can appear around moving objects. The upshot is that it should be easier to interact with real-world objects while in mixed reality, and the overlay that displays your virtual hands should better align with where your actual hands are.

If you want to see a side-by-side, Meta has handily released a video showcasing the improvements to mixed reality.

If you’re using your hands instead of controllers, Meta is also adding new wrist buttons.

Should you choose to enable this option in the experimental settings menu, you’ll be able to tap on your right or left wrist to use the Meta or Menu buttons respectively.

According to Meta, wrist buttons will make it a lot easier to open a menu from within a game or app – either the in-game pause screen, or the system-level menu should you want to change to a different experience, take a screenshot or adjust your headset’s settings. We’ll have to try them out for ourselves, but they certainly sound like an improvement, and a similar feature could bring even more button controls to the hand-tracking experience.

You’ll no longer need to pinch to open menus (Image credit: Meta)

Lastly, Meta is making it easier to enjoy background audio – if you start audio or a video in the Browser, it’ll keep playing when you minimize the app – and is making a few changes to Parental Supervision features. Namely, from June 27, children aged 10 to 12 who are supervised by the same parent account will automatically be able to see each other in the Family Center.

As Meta warns, however, the update is rolling out gradually – and because this month’s passthrough change is so big, it says it will send out updates even more slowly than usual. What’s more, some people who update to v66 might not get all the improvements right away. 

So if you don’t see the option to update right away, or any passthrough improvements once you've installed v66 on your Meta Quest 3, don’t fret. You will get the upgrade eventually.


Google explains why AI Overviews couldn’t understand a joke and told users to eat one rock a day – and promises it’ll get better

If you’ve been keeping up with the latest developments in the area of generative AI, you may have seen that Google has stepped up the rollout of its ‘AI Overviews’ section in Google Search to all of the US.

At Google I/O 2024, held on May 14, Google confidently presented AI Overviews as the next big thing in Search, one it expected to wow users – but when the feature began rolling out the following week, it received a less than enthusiastic response. This was mainly due to AI Overviews returning peculiar and outright wrong information, and now Google has responded by explaining what happened and why AI Overviews performed the way it did (according to Google, at least). 

The feature was intended to bring more complex and better-verbalized answers to user queries, synthesizing a pool of relevant information and distilling it into a few convenient paragraphs. This summary would then be followed by the familiar list of blue links with brief descriptions of each website. 

Unfortunately for Google, screenshots of AI Overviews providing strange, nonsensical, and downright wrong information started circulating on social media shortly after the rollout. Google has since pulled the feature and published an explanatory post on its ‘Keyword’ blog detailing why AI Overviews behaved this way – while being quick to point out that many of those screenshots were faked. 

What AI Overviews were intended to be

Keynote speech at Google I/O 2024 (Image credit: Future)

In the blog post, Google first explains that AI Overviews were designed to collect and present information you would otherwise have to dig for across multiple searches, and to prominently include links crediting where the information comes from, so you could easily follow up on the summary. 

According to Google, this isn’t just its large language models (LLMs) assembling convincing-sounding responses based on existing training data. AI Overviews is powered by its own custom language model that integrates Google’s core web ranking systems, which are used to carry out searches and integrate relevant and high-quality information into the summary. Accuracy is one of the cornerstones that Google prides itself on when it comes to search, the company notes, saying that it built AI Overviews to show information that’s sourced only from the web results it deems the best. 

This means that AI Overviews are generally supposed to hallucinate less than other LLM products, and when things do go wrong, it’s usually for a reason Google also faces with regular search – the company cites “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

What actually happened during the rollout

Windows 10 dual screen (Image credit: Shutterstock / Dotstock)

Google goes on to state that AI Overviews was optimized for accuracy and tested extensively before its wider rollout, but despite these seemingly robust testing efforts, Google does admit that’s not the same as having millions of people trying out the feature with a flood of novel searches. It also points out that some people were trying to provoke its search engine into producing nonsensical AI Overviews by carrying out ridiculous searches. 

I find this part of Google’s explanation a bit odd, seeing as I’d imagine that when building such a feature as AI Overviews, the company would appreciate that folks are likely to try to break it, or send it off the rails somehow, and that it should therefore be designed to handle silly or nonsense searches in its stride.

At any rate, Google then goes on to call out fake screenshots of some of the nonsensical and humorous AI Overviews that made their way around the web, which is fair, I think. It reminds us we shouldn’t believe everything we see online, of course, although the faked screenshots looked pretty convincing if you didn’t scrutinize them too closely (and all this underscores the need to fact-check AI-generated answers anyway).

Google does admit, though, that sometimes AI Overviews did produce some odd, inaccurate, or unhelpful responses. It elaborates by explaining that there are multiple reasons why these happened, and that this whole episode has highlighted specific areas where AI Overviews could be improved.

The tech company further observes that these questionable AI Overviews would appear on searches for queries that didn’t happen often. A Threads user, @crumbler, posted an AI Overviews screenshot that went viral after they asked Google: “how many rocks should i eat?” This returned an AI Overview that recommended eating at least one small rock per day. Google’s explanation is that before this screenshot circulated online, this question had rarely been asked in search (which is certainly believable enough). 

A screenshot of an AI Overview recommending that humans should eat one small rock a day (Image credit: Google/@crumbler on Threads)

Google continues to explain that there isn’t a lot of quality source material to answer that question seriously, either, calling instances like this a “data void” or an “information gap.” Additionally, in the case of the query above, some of the only content available was satirical in nature, and it ended up being linked in earnest as one of the only websites that addressed the query. 

Other nonsensical and silly AI Overviews pulled details from sarcastic or humorous content sources, and the likes of troll posts from discussion forums.

Google's next steps and the future of AI Overviews

When explaining what it’s doing to fix and improve AI Overviews, or any part of its Search results, Google notes that it doesn’t go through Search results pages one by one. Instead, the company tries to implement updates that affect whole sets of queries, including possible future queries. Google claims that it’s been able to identify patterns when analyzing the instances where AI Overviews got things wrong, and that it’s put in a whole set of new measures to continue to improve the feature.

You can check out the full list in Google’s post, but better detection capabilities for nonsensical queries trying to provoke a weird AI Overview are being implemented, and the search giant is looking to limit the inclusion of satirical or humorous content.

Along with the new measures to improve AI Overviews, Google states that it’s been monitoring user feedback and external reports, and that it’s taken action on a small number of summaries that violate Google’s content policies. This happens pretty rarely – in less than one in seven million unique queries, according to Google – and it’s being addressed.

The final reason Google gives for why AI Overviews performed this way is just the sheer scale of the billions of queries that are performed in Search every day. I can’t say I fault Google for that, and I would hope it ramps up the testing it does on AI Overviews even as the feature continues to be developed.

As for AI Overviews not understanding sarcasm, this sounds like a cop-out at first, but sarcasm and humor in general are nuances of human communication that I can imagine are hard to account for. Comedy is a whole art form in itself, and this is going to be a very thorny and difficult area to navigate. So, I can understand that this is a major undertaking, but if Google wants to maintain a reputation for accuracy while pushing out this new feature, it’s something that’ll need to be dealt with.

We’ll just have to see how Google’s AI Overviews perform when they are reintroduced – and you can bet there’ll be lots of people watching keenly (and firing up yet more ridiculous searches in an effort to get that viral screenshot).


Microsoft finds a use for AI in Windows 11 that you might not hate: better weather predictions that could help keep you dry

Microsoft’s been busy, what with working on the next major Windows 11 update (24H2) and debuting the next generation of AI-focused Copilot+ PCs at Build 2024. However, there’s another piece of work that likely flew under your radar: an improved weather model powered by its AI technology that’ll benefit Windows 11 and Windows 10 users.

The newly upgraded model was developed by Microsoft’s Start weather team and brings improved rain and cloud prediction. This follows a recent initiative to implement better AI-assisted prediction models for more accurate weather forecasting over a 30-day period. 

Microsoft publicized the improvement in an official Bing blog post and described how its Start team kicked off the work, dubbed ‘precipitation nowcasting,’ in late 2021. The team’s model combined data from local radar installations with satellite data to make this advancement.

Better predictions thanks to model improvements

The new model from the Microsoft Start team is an improvement on the previous iteration, which had some flaws because the satellite weather data the model relies on is only available 85% to 95% of the time. Then there was the additional complication that this data was drawn from a variety of sources.

The predecessor of the new model was much smaller and only predicted a single factor, called simulated radar reflectivity. The new model is four times the size of the older one and jointly predicts two factors – both satellite and simulated radar reflectivity. This enables the model to better fill in gaps in the data.

This is explained in the blog post, which also notes that the radar prediction task was prioritized, receiving six times the weight of the satellite prediction task during AI training.

The Start team found that their new model, designed to predict precipitation and cloudiness better, offers a substantial improvement in F1-score compared to the radar-based model. (The F1-score is a metric for measuring a machine learning model’s performance by assessing aspects like precision and recall).
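
For reference, the F1-score is the harmonic mean of precision and recall, computed from true positives (TP), false positives (FP), and false negatives (FN):

```latex
F_1 = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}},
\qquad
\text{precision} = \frac{TP}{TP + FP},
\qquad
\text{recall} = \frac{TP}{TP + FN}
```

A higher F1-score means the model balances fewer false alarms (precision) against fewer missed events (recall) – for rain prediction, fewer forecasts of rain that never arrives, and fewer showers it fails to call.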

They also found that the newly devised model produced predicted satellite images that scored better than those of a persistence forecast (one that assumes that future weather will be like the present) after 15 minutes. This indicated to them that predictions can be especially useful in places and times where satellite outages last longer than 15 minutes. 
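
To make the persistence comparison concrete, here’s a minimal sketch of that baseline – this is just the standard definition (every future frame is a copy of the present one), not Microsoft’s actual code, and the array format is an assumption for illustration:

```python
import numpy as np

def persistence_forecast(latest_frame: np.ndarray, steps: int) -> list[np.ndarray]:
    """Forecast every future time step as a copy of the latest observation."""
    return [latest_frame.copy() for _ in range(steps)]
```

A learned model “beats persistence” after 15 minutes if its predicted frames match the real future frames more closely than these unchanged copies do – which is why the result suggests the model is most useful when satellite outages last longer than that.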

The good news for Windows 11 and Windows 10 users is that the new and improved Start weather model, which uses both satellite and radar prediction, has now been fully integrated into Microsoft’s Weather products. The refreshed model will now power the weather icons on the Windows taskbar, the lock screen, and anywhere else the forecast appears in your OS (like the MSN feed).

This is a way of using AI that I think most people can get behind, as predicting the weather is notoriously difficult, and we can all use a more accurate weather forecast when it comes to making those important decisions – like whether we need our raincoat or umbrella when we’re venturing out.


Google Workspace is getting a talkative tool to help you collaborate better – meet your new colleague, AI Teammate

If your workplace uses the Google Workspace productivity suite, then you might soon get a new teammate – an AI Teammate, that is. 

In its mission to improve our real-life collaboration, Google has created a tool to pool shared documents, conversations, comments, chats, emails, and more into a singular virtual generative AI chatbot: the AI Teammate. 

Powered by Google's own Gemini generative AI model, AI Teammate is designed to help you concentrate more on your role within your organization and leave the tracking and tackling of collective assignments and tasks to the AI tool.

This virtual colleague will have its own identity, its own Workspace account, and a specifically defined role and objective to fulfil.

When AI Teammate is set up, it can be given a custom name and configured in other ways, including its job role, a description of how it’s expected to help your team, and the specific tasks it’s supposed to carry out.
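
To picture what that setup might look like, here’s a purely hypothetical configuration sketch – Google hasn’t published an API or schema for AI Teammate, so every field name below is an assumption based on the options described above:

```python
# Hypothetical only: field names are assumptions, not Google's actual schema.
teammate_config = {
    "name": "Chip",
    "job_role": "Launch coordinator",
    "description": "Helps the team track progress toward the product launch.",
    "tasks": [
        "Monitor shared chat rooms, Docs, and email threads",
        "Answer status questions like 'Are the storyboards approved?'",
        "Flag potential launch issues as they come up",
    ],
}
```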

In a demonstration of an example AI Teammate at I/O 2024, Google showed a virtual teammate named 'Chip' who had access to a group chat of those involved in presenting the I/O 2024 demo. The presenter, Tony Vincent, explained that Chip was privy to a multitude of chat rooms that had been set up as part of preparing for the big event. 

Vincent then asks Chip if the I/O storyboards had been approved – the type of question you might ask a colleague – and Chip is able to answer, as it can analyze all of the conversations it has been keyed into. 

As AI Teammate is added to more threads, files, chats, emails, and other shared items, it builds a collective memory of the work shared in your organization. 

Google Workspace (Image credit: Google)

In a second example, Vincent shows another chatroom for an upcoming product release and asks the room if the team is on track for the product’s launch. In response, AI Teammate searches through everything it has access to – Drive, chat messages, Gmail – and synthesizes the relevant information it finds into an answer. 

When it’s ready (which looks like it takes about a second or less), AI Teammate delivers a digestible summary of its findings. In the demo, it flagged a potential issue to make the team aware, then gave a timeline summary showing the stages of the product’s development. 

As the demo took place in a group space, Vincent noted that anyone can follow along and jump in at any point – for example, asking a question about the summary, or asking AI Teammate to transfer its findings into a Doc, which it then shares with the room as soon as the file is ready. 

AI Teammate is only as useful as its customization, but Google promises it can make your collaborative work seamless, integrated as it is into the host of existing Google products many of us already use.


Windows 11 speech recognition feature gets ditched in September 2024 – but only because there’s something better

Windows 11’s voice functionality is being fully switched over to the new Voice Access feature later this year, and we now have a date for when the old system – Windows Speech Recognition (WSR) – will be officially ditched from the OS.

The date for the replacement of WSR by Voice Access has been announced as September 2024 in a Microsoft support document (as Windows Latest noticed). Note that the change will be ‘starting’ in that month, so will take further time to roll out to all Windows 11 PCs.

However, there’s a wrinkle here, in that this is the case for Windows 11 22H2 and 23H2 users, which means those still on Windows 11 21H2 – the original version of the OS – won’t have WSR removed from their system.

Windows 10 users will still have WSR, of course, as Voice Access is a Windows 11-only feature.


Analysis: WSR to go MIA, but it’s A-OK (for the most part)

This move is no surprise as Microsoft removed Windows Speech Recognition from Windows 11 preview builds back at the end of 2023. So, this change was always going to come through for release versions of Windows 11, it was just a question of when – and now we know.

Will the jettisoning of WSR mean this feature is missed by Windows 11 users? Well, no, not really, because its replacement, Voice Access, is so much better in pretty much every respect. It is leaps and bounds ahead of WSR, in fact, with useful new features being added all the time – such as the ability to concoct your own customized voice shortcuts (a real timesaver).

In that respect, there’s no real need to worry about the transition from WSR to Voice Access – the only potentially thorny issue is language support. WSR offers a whole lot more on that front, because it has been around a long time.

However, Voice Access is getting more languages added in the Moment 5 update. And in six months’ time, when WSR is officially canned (or that process begins, at least), we’ll probably have Windows 11 24H2 rolling out, or it’ll be imminent – and we’d expect Voice Access to have its language roster even more filled out by that point.

Those on Windows 11 21H2 will be able to stick with WSR, as noted, but there’s only a very small niche of users left on that version, as Microsoft has been rolling out an automatic forced upgrade from 21H2 for some time now (indeed, this is now happening for 22H2 as of a few weeks ago). Barely anyone should remain on 21H2 at this point, we’d imagine, and those who are might be stuck there due to a Windows update bug, or an oversight during the automated rollout.

Windows 10 users will continue with WSR as it’s their only option, but as a deprecated feature, it won’t receive any further work or upgrades going forward. That’s another good reason why Windows 11 users should want to upgrade to Voice Access, which is being actively developed at quite some pace.


Nvidia finally catches up to AMD and drops a new app that promises better gaming and creator experiences

Nvidia has announced plans to bring together the features of the Nvidia Control Panel, GeForce Experience, and RTX Experience apps in a single piece of software. On February 22, Nvidia explained on its website that this new unified app is being made available as a public beta. This means the app could still change as Nvidia works to improve it, but you can download it now and try it for yourself.

The app is made specifically to improve the experience of gamers and creators currently using machines equipped with Nvidia GPUs by making it easier to find and use functions that formerly lived in separate programs. 

Users with suitable Nvidia GPUs can expect a number of significant improvements with this new centralized app. Settings to optimize your gaming experience (by tweaking graphical settings based on your hardware) and tools for downloading and installing new drivers can now be found in one easy interface.

It’ll be easier to understand and keep track of driver updates, such as new features and fixes for bugs, with clear descriptions. While in-game, users should see a redesigned overlay that makes it easier to access features and tools like filters, recording tools, monitoring tools, and more. Speaking of filters, Nvidia is introducing new AI Freestyle Filters which can enhance users’ visuals and allow them to customize the aesthetics of their games. As well as all of these upgrades, users can easily view and navigate bundles, redeem rewards, get new game content, view current GeForce NOW offers, and more.

Screenshot of the webpage where users can download the Nvidia app beta (Image credit: Future)

Nvidia's vision

It certainly seems like Nvidia has worked hard to create a more streamlined app that makes it easier to use your RTX-equipped PC. It’s specifically intended to simplify tasks like keeping your PC updated with the latest Nvidia drivers, and quickly discovering and installing other Nvidia apps, including Nvidia Broadcast and GeForce NOW. The Nvidia team also claims in its announcement that this new centralized app will perform better on RTX-GPU-equipped PCs than its separate predecessors – thanks to reduced installation times, a more responsive user interface (UI), and a smaller disk footprint (than the older apps combined, I assume). 

This isn’t the end of the new Nvidia app’s development, and it seems some legacy features didn’t make the cut, including 360/Stereo photo modes and streaming directly to YouTube and Twitch, because they see less use. Clearly, Nvidia felt it wasn't worth including these more niche features in the new app, and anyone who wants to continue to use them can still use the older apps (for now, at least). The new app is focused on improving performance, and making it easier to install and integrate new features into users’ systems. 

An Nvidia GeForce RTX 2060 slotted into a PC (Image credit: Future)

By combining its apps into one easy-to-use piece of software, Nvidia is finally catching up to AMD in one aspect where Team Red has had the advantage: software. AMD’s Radeon Adrenalin app already offers a lot of these features, as well as others – like a built-in browser, plus HDMI link assurance and monitoring that can automatically detect issues with HDMI connectivity – all in one single interface.

Finally, AMD doesn’t require users to make an account to use its app. We don’t expect Nvidia to fully catch up to AMD’s app just yet (though it would be nice not to have to sign in), but this is definitely a push in the right direction, and hopefully users will get a lot of use out of the new app.


Mark Zuckerberg thinks the Meta Quest 3 is better than Vision Pro – and he’s got a point

Mark Zuckerberg has tried the Apple Vision Pro, and he wants you to know that the Meta Quest 3 is “the better product, period”. This is unsurprising given that his company makes the Quest 3, but having gone through all of his arguments he does have a point – in many respects, the Quest 3 is better than Apple’s high-end model.

In his video posted to Instagram, Zuckerberg starts by highlighting the fact that the Quest 3 offers a more impressive interactive software library than the Vision Pro, and right now that is definitely the case. Yes, the Vision Pro has Fruit Ninja, some other spatial apps (as Apple calls them), and plenty of ported-over iPad apps, but nothing on the Vision Pro comes close to matching the quality or immersion levels of Asgard’s Wrath 2, Walkabout Mini Golf, Resident Evil 4 VR, The Light Brigade, or any of the many other amazing Quest 3 VR games.

It also lacks fitness apps. I’m currently testing some for a VR fitness experiment (look out for the results in March) and I’ve fallen in love with working out with my Quest 3 in apps like Supernatural. The Vision Pro not only doesn't offer these kinds of experiences, but its design isn’t suited to them either – the hanging cable could get in the way, and the fabric facial interface would get drenched in sweat; a silicone facial interface is a must-have based on my experience.

The only software area where the Vision Pro takes the lead is video. The Quest platform is badly lacking when it comes to offering the best streaming services in VR – only having YouTube and Xbox Cloud Gaming – and it’s unclear if or when this will change. I asked Meta if it has plans to bring more streaming services to Quest, and I was told by a representative that it has “no additional information to share at this time.” 

Zuckerberg also highlights some design issues. The Vision Pro is heavier than the Quest 3, and if you use the cool-looking Solo Knit Band you won’t get the best comfort or support – instead, most Vision Pro testers recommend the Dual-Loop band, which more closely matches the design of the Quest 3’s default strap thanks to its over-the-head support.

You also can’t wear glasses with the Vision Pro – instead, you need to buy expensive optical inserts. On the Quest 3, you can just extend the headset away from your face using a slider on the facial interface, making room for your specs with no problem.

The Vision Pro being worn with the Dual-Loop band (Image credit: Future)

Then there’s the lack of controllers. On the Vision Pro, unless you’re playing a game that supports a controller, you have to rely solely on hand tracking. I haven’t used the Vision Pro, but every account I’ve read or heard – including Zuckerberg’s – has made it clear that hand tracking isn’t any more reliable on the Vision Pro than it is on Quest, with the general sentiment being that it works seamlessly 95% of the time – which is exactly my experience on the Quest 3.

Controllers are less immersive but do help to improve precision – making activities like VR typing a lot more reliable without needing a real keyboard. What’s more, considering most VR and MR software out there right now is designed for controllers, software developers have told us it would be a lot easier to port their creations to the Vision Pro if it had handsets.

Lastly, there’s the value. Every Meta Quest 3 and Apple Vision Pro comparison will bring up price, so we won’t labor the point, but there’s a lot to be said for the fact that the Meta headset is only $499.99 / £479.99 / AU$799.99 rather than $3,499 (the Vision Pro isn’t yet available outside the US). Without a doubt, the Quest 3 gives you way better bang for your buck.

The Vision Pro could be improved if it came with controllers (Image credit: Future)

Vision Pro: not down or out 

That said, while Zuckerberg makes some solid arguments, he does gloss over the areas where the Vision Pro takes the lead, and even exaggerates how much better the Quest 3 is in some respects – and these aren’t small details, either.

The first is mixed reality. Compared to the Meta Quest Pro, the Vision Pro is leaps and bounds ahead – though accounts from people who have tried both headsets suggest the Vision Pro isn’t as much of an improvement over the Quest 3, and in some ways it’s worse, as Zuckerberg mentions.

To illustrate the Quest 3’s passthrough quality, Zuckerberg reveals that the video of him comparing the two headsets is being recorded using a Quest 3, and it looks pretty good – though having used the headset, I can tell you this isn’t representative of what passthrough actually looks like. Probably due to how the video is processed, recordings of mixed reality on Quest always look more vibrant and less grainy than the live experience.

Based on less biased accounts from people who have used both the Quest 3 and the Vision Pro, it sounds like the live passthrough feed on Apple’s headset is generally a bit less grainy – though still not perfect – but it does have way worse motion blur when you move your head.

Mixed reality has its pros and cons on both headsets (Image credit: Apple)

Zuckerberg additionally takes aim at the Vision Pro’s displays, pointing out that they seem less bright than the Quest 3’s LCDs and offer a narrower field of view. Both of these points are right, but I feel he hasn’t given enough credit to two important details.

While he does admit the Vision Pro offers a higher resolution, he does so very briefly. The Vision Pro’s dual 3,680 x 3,140-pixel displays will offer a much crisper experience than the Quest 3’s dual 2,064 x 2,208-pixel screens. Considering you use these screens for everything, the advantage of better visuals can’t be overstated – and a higher pixel density should also mean the Vision Pro is more immersive, as you’ll experience less of a screen door effect (where you see the lines between pixels because the display is so close to your eyes).
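
Multiplying those figures out shows how large the gap really is:

```latex
\text{Vision Pro: } 3{,}680 \times 3{,}140 = 11{,}555{,}200 \text{ pixels per eye}
\qquad
\text{Quest 3: } 2{,}064 \times 2{,}208 = 4{,}557{,}312 \text{ pixels per eye}
```

That works out to roughly 2.5 times as many pixels per eye for the Vision Pro.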

Zuckerberg also ignores the fact that the Vision Pro’s screens are OLEDs. Yes, this means they’re less vibrant, but the upshot is that they offer much better contrast for blacks and dark colors. Better contrast has been shown to improve a user’s immersion in VR, based on experiments by Meta and others, so I wouldn’t be surprised if the next Quest headset also incorporated OLEDs – rumors suggest it will, and I seriously hope it does.

Lastly, there’s eye-tracking which is something the Quest 3 lacks completely. I don’t think the unavailability of eye-tracking is actually a problem, but that deserves its own article.

This prototype headset showed me how important great contrast is (Image credit: Future)

Regardless of whether you agree with Mark Zuckerberg’s arguments or not, one thing that’s clear from the video is that the Vision Pro has got the Meta CEO fired up. 

He ends his video by stating his desire for the Quest 3 and Meta’s open model (as opposed to the closed-off walled garden Apple has, where you can only use the headset how it intends) to “win out again” like Windows did in the computing space.

But we’ll have to wait and see how it pans out. As Zuckerberg himself admits, “The future is not yet written,” and only time will tell whether Apple, Meta, or some new player in the game (like Samsung with its Samsung XR headset) will come out on top in the long run.
