Apple reveals Apple Intelligence to take on ChatGPT, Copilot and Gemini

Apple has unveiled its own spin on artificial intelligence (AI) – and it's called Apple Intelligence. As expected, the reveal came at WWDC 2024 (you can follow all the announcements as they happen on our WWDC 2024 live blog).

Here's everything we know so far about Apple Intelligence, and how it integrates with Siri and other Apple products.

Apple Intelligence will be free for users on iOS 18, iPadOS 18 and macOS 15 Sequoia – all operating system updates announced at WWDC 2024.

This story is breaking. We'll continue to update this article – and check out our WWDC 2024 live blog for all the breaking news.

TechRadar – All the latest technology news

Read More

Microsoft reveals AI-powered ‘Recall’ feature to transform Windows 11’s searchability, while confirming hardware requirements

Microsoft’s annual developer conference, Build, has only just kicked off but we’ve already learned lots of exciting things, including the company showing off a new AI-powered ‘Recall’ feature to be integrated into Copilot+ PCs with Windows 11.

Copilot+ is a new software platform that was introduced yesterday, aiming to infuse Windows 11 with new AI features, ushering in a raft of new devices with more advanced AI functionality.

You’ve doubtless already heard of AI PCs, but the new breed of portables, which are powered by Qualcomm’s Snapdragon X chips with an integrated Neural Processing Unit (NPU), were officially debuted yesterday. Windows 11 Recall will be exclusive to PCs that have Snapdragon X processors as the current generation of Intel and AMD mobile CPUs don’t have a powerful enough NPU to deal with the feature. (It needs an NPU capable of 40 TOPS, or trillions of operations per second). 

This isn’t the only hardware requirement that the Recall feature will necessitate, with the full spec requirements being as follows:

  • Snapdragon X Elite or X Plus processor
  • NPU capable of 40 TOPS
  • 225GB storage
  • 16GB RAM
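To make the spec list above concrete, here's a minimal sketch of checking a machine against those reported thresholds. The function name and structure are ours for illustration; this is not an official Microsoft API, and the numbers simply mirror the requirements as reported.

```python
# Reported Copilot+ Recall requirements, mirrored from the spec list above.
# These thresholds come from the article, not from any Microsoft API.
RECALL_REQUIREMENTS = {
    "npu_tops": 40,    # trillions of operations per second
    "storage_gb": 225,
    "ram_gb": 16,
}

def meets_recall_requirements(npu_tops: float, storage_gb: int, ram_gb: int) -> bool:
    """Return True if a machine clears every reported threshold."""
    return (
        npu_tops >= RECALL_REQUIREMENTS["npu_tops"]
        and storage_gb >= RECALL_REQUIREMENTS["storage_gb"]
        and ram_gb >= RECALL_REQUIREMENTS["ram_gb"]
    )

# A Snapdragon X Elite laptop with a 45 TOPS NPU, 512GB storage, 16GB RAM:
print(meets_recall_requirements(45, 512, 16))   # True
# A current Intel/AMD laptop whose NPU manages only around 10 TOPS:
print(meets_recall_requirements(10, 512, 32))   # False
```

The NPU line is the one most current laptops fail, which is why Recall is launching as a Snapdragon X exclusive.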

While these new Qualcomm chips are the only mobile silicon that can drive the Recall feature (and other AI capabilities in Copilot+ PCs) right now, future generations of Intel and AMD processors will be on board (Intel’s Lunar Lake for example, or AMD’s Strix Point chips).

Windows Latest notes that the above hardware requirements are not only needed to ensure a quality experience – with enough performance to drive snappy responses with these AI features such as Recall – but also for data security reasons.

Microsoft unveils new Surface Laptop and Surface Pro on a stage

(Image credit: Future / John Loeffler)

So, how does Recall work? 

In the past we’ve seen reports of a rumored feature, often referred to as ‘AI Explorer,’ that would enable you to search through your past activity on your PC. It looks like this has manifested as the Recall feature, and it’ll be privy to all the activity on your PC including what apps you use, how you use those apps, and what you do in them (for example, conversations in WhatsApp). Recall will record all of this activity going forward, saving snapshots of it in your PC’s local storage. 

Additionally, the Settings app will have a dedicated update history section for Recall, along with new privacy and security toggles. You’ll be able to update Recall and Windows 11’s other AI features through Windows Update.

If you’re feeling wary about allowing Recall to access everything, and concerned about having control over what it records and stores, Windows Latest reports that you’ll be able to delete snapshots manually from Recall’s storage, and set Recall to exclude certain apps and websites from its recording activity. In your device’s Settings, you’ll also be able to adjust the time ranges over which Recall stores snapshots, or indeed pause Recall altogether by clicking on its icon in your Taskbar. 

In practice, Recall is designed to help you go back in time and find elements of your past activity. For example, if you previously had a conversation with a colleague on a certain topic but couldn’t remember the details, you could ask Recall to find it within Windows 11. Recall would then comb through your past conversations with that colleague, searching across all of your apps, open tabs within apps, and more besides.

Recall will also be able to help you find files you’ve lost, and to search your browser history, and so forth. You’ll be able to ask for Recall’s assistance using natural language, the way we converse with one another in real life, instead of having to use precise commands. 

All of this will run natively on your PC and won’t have to tap the cloud for computing power, meaning your data will be more secure, as everything can be kept locally, and nothing is sent to an external data center. It’s all happening right there on your Copilot+ PC with the help of that powerful NPU.

Microsoft presenting Surface Laptop and Surface Pro devices.

(Image credit: Microsoft)

When can you try Recall for yourself? 

The hubbub and excitement around Recall is just one of many things revealed at Microsoft Build 2024 so far, but you’ll have to wait until the Windows 11 24H2 update to try the feature (and don’t forget, you’ll need a PC that meets the hardware requirements). The 24H2 update is expected to arrive around September or October.

If Recall and other AI features deliver on all that’s promised (or even most of it), we think many people will be impressed, and it could convince them to adapt to the new way of computing that Microsoft is trying to usher in.

Right now, Copilot isn’t regarded as particularly impressive, but in some ways that’s because the hardware needed to facilitate Microsoft’s plans for its AI assistant hasn’t been available – until now. We’re excited to get our hands on all these new AI features; as people who flood their PCs with media, we’d imagine Recall could be very handy indeed.



Chrome report reveals which extensions could be slowing down your browser the most

Chrome extensions are a great way to enhance internet browsing, but some of them may be slowing down your browser. The development team behind DebugBear, a web page optimization service, analyzed 5,000 extensions to see how they impacted Google Chrome. According to their findings, some can cause longer load times on websites, although the impact depends on how each extension processes data. Certain ones are better than others.

DebugBear states extensions that process data “before a page has rendered will have a much worse impact on user experience.” VPNs seem to be among the worst at this, with some causing a full second of delay. It makes sense why load times would be particularly bad with a VPN as they “route traffic through an intermediary server.” Other extensions that may cause long load times include Trancy AI Subtitles and Klarna Pay Later.

Extensions that run their code “after the page has loaded” can also impact Chrome, but to a seemingly lesser extent. Processing times can result in web page slowdown as the software strains the hardware, but not always. The Monica AI Assistant, for instance, was discovered to add “1.3 seconds of processing time”; however, it can still slow page loads, because extensions like Monica begin running “as soon as the page starts loading.”

Page interactions

Even if an extension doesn’t create slow load times, it may cause slow page interactions, meaning that clicking around on a website may not feel snappy. Avira Password Manager reportedly adds a “160 millisecond delay when clicking on… random content [headings]”. Granted, 160 milliseconds is less than a fifth of a second, but we can’t help but wonder whether the delays add up.

Let’s say, for example, you have seven extensions, each individually adding a 160 millisecond delay. If those delays stack, that’s over a second (1,120 milliseconds) added to every interaction on a webpage. Is this possible? To be honest, we don’t know, as DebugBear doesn’t state whether or not the delays of these tools can accumulate.
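The back-of-the-envelope math is simple enough to sketch. The 160 ms figure comes from DebugBear's Avira measurement; the assumption that delays stack linearly across extensions is ours, not a finding from the report.

```python
# Hypothetical worst case: seven extensions that each add the same
# 160 ms interaction delay DebugBear measured for Avira Password Manager.
# Whether delays really stack linearly is an assumption, not a finding.
per_extension_delay_ms = 160
extensions = 7

total_ms = per_extension_delay_ms * extensions
print(total_ms)         # 1120
print(total_ms / 1000)  # 1.12 -> just over a second per interaction
```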

What is true is that most ad-blockers can improve your browsing experience. Websites with tons of ads directly cause a slowdown, and without an ad-blocker, DebugBear found the average CPU processing time on ad-heavy websites was 57 seconds. With uBlock Origin installed, the time drops “down to just under 4 seconds,” saving your computer precious power.
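Expressed as a percentage, the reported uBlock Origin figures work out to a dramatic saving (we're taking "just under 4 seconds" as 4 for a rough estimate):

```python
# Rough comparison of DebugBear's reported CPU processing times on
# ad-heavy pages, with and without uBlock Origin installed.
without_blocker_s = 57
with_ublock_s = 4  # "just under 4 seconds" per the report

reduction = 1 - with_ublock_s / without_blocker_s
print(f"{reduction:.0%}")  # 93% -> roughly 93% less CPU time
```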

uBlock Origin appears to be one of the best ad-blockers you can add to Chrome alongside Malwarebytes and Privacy Badger. AdBlock Plus is one of the worst, as it takes up a lot of processing time – over 40 seconds.

What you can do

So, if you’re a frequent Chrome user experiencing browser slowdown with extensions installed, there isn’t much you can do to fix the extensions themselves; that ultimately falls on the developers who made them. But there are a couple of things you can do to help.

First, the easiest thing you can do is uninstall the offending tool, or restrict it to run only on certain sites. DebugBear also recommends using its Chrome Extension Performance Lookup tool to help you find the best lightweight extensions for the browser.

Be sure to check out TechRadar's list of the best ad blockers for 2024. uBlock Origin is the best one, but there are other great options out there.



Google reveals new video-generation AI tool, Veo, which it claims is the ‘most capable’ yet – and even Donald Glover loves it

Google has unveiled its latest video-generation AI tool, named Veo, at its Google I/O live event. Veo is described as offering “improved consistency, quality, and output resolution” compared to previous models.

Generating video content with AI is nothing new; tools like Synthesia, Colossyan, and Lumiere have been around for a little while now, riding the wave of generative AI's current popularity. Veo is only the latest offering, but it promises to deliver a more advanced video-generation experience than ever before.

Google IO 2024

Donald Glover invited Google to his creative studio at Gilga Farm, California, to make a short film together. (Image credit: Google)

To showcase Veo, Google recruited a gang of software engineers and film creatives, led by actor, musician, writer, and director Donald Glover (of Community and Atlanta fame) to produce a short film together. The film wasn't actually shown at I/O, but Google promises that it's “coming soon”.

As someone who is simultaneously dubious of generative AI in the arts and also a big fan of Glover's work (Awaken, My Love! is in my personal top five albums of all time), I'm cautiously excited to see it.

Eye spy

Glover praises Veo's capabilities on the basis of speed: this isn't a deletion of human ideas, but rather a tool that can be utilized by creatives to “make mistakes faster”, as Glover puts it.

The flexibility of Veo's prompt reading is a key point here. It's capable of understanding prompts in text, image, or video format, paying attention to important details like cinematic style, camera positioning (for example, a bird's-eye-view shot or a fast-tracking shot), time elapsed on camera, and lighting types. It also has an improved capability to accurately and consistently render objects and how they interact with their surroundings.

Google DeepMind CEO Demis Hassabis demonstrated this with a clip of a car speeding through a dystopian cyberpunk city.

Google IO 2024

The more detail you provide in your prompt material, the better the output becomes. (Image credit: Google)

It can also be used for things like storyboarding and editing, potentially augmenting the work of existing filmmakers. While working with Glover, Google DeepMind research scientist Kory Mathewson explains how Veo allows creatives to “visualize things on a timescale that's ten or a hundred times faster than before”, accelerating the creative process by using generative AI for planning purposes.

Veo will be debuting as part of a new experimental tool called VideoFX, which will be available soon for beta testers in Google Labs.


iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I’ll dig into the implications of that further down, but for now, let’s explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained on CoreNet (previously CVNets), a massive library of data used for training AI language models, while the other four have been ‘instruction-tuned’ by Apple: a process by which an AI model’s learning parameters are carefully honed to respond to specific prompts.
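For the curious, the eight checkpoints break down as four sizes, each in a pre-trained and an instruction-tuned flavor. The repository names below are assumptions based on how Apple lists the models on the Hugging Face Hub (verify at huggingface.co/apple before relying on them), and the commented-out loading snippet is a sketch of the usual transformers workflow rather than Apple-specific documentation.

```python
# The eight OpenELM checkpoints, assuming Apple's Hugging Face naming scheme.
SIZES = ["270M", "450M", "1_1B", "3B"]
PRETRAINED = [f"apple/OpenELM-{s}" for s in SIZES]
INSTRUCT = [f"apple/OpenELM-{s}-Instruct" for s in SIZES]

print(len(PRETRAINED + INSTRUCT))  # 8 models in total

# Loading one with the transformers library would look roughly like this
# (requires a network download, so it's commented out here):
#
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "apple/OpenELM-270M", trust_remote_code=True
# )
```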

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to “empower and enrich” public AI research by releasing the OpenELMs to the wider AI community.

So what does this actually mean for users?

Apple has been seriously committed to AI recently, which is good to see as the competition is fierce in both the phone and laptop arenas, with stuff like the Google Pixel 8’s AI-powered Tensor chip and Qualcomm’s latest AI chip coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software – something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It’s worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company’s A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).

In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it’s used for clever and unique new features, rather than Microsoft’s constant Copilot nagging.



Microsoft reveals next evolution of Windows – and it won’t be Windows 12

Microsoft has confirmed that the next update for Windows, Windows 11 version 24H2, is indeed coming later this year. While it’s good to know that Microsoft is planning a major update for Windows 11, the news will be disappointing to anyone who was hoping for an imminent release of Windows 12, the rumored next generation of the Windows operating system (OS). We expect Windows 11 24H2 to arrive around September or October, continuing Microsoft’s focus on the AI-aided user experience and the quality-of-life upgrades that the company has been so keen on pushing lately. 

This does mean that we can put any expectations of Windows 12 to bed, at least until after the second half of 2024. Many people were convinced that Windows 11’s successor was coming sooner rather than later because of the heavy emphasis on next-generation AI features and experiences. This rumored release was code-named Hudson Valley, and it was anticipated to get an official announcement in mid-2024 and start rolling out in the latter half of the year. 

A leaked screenshot of a possible Windows 12 OS mockup.

(Image credit: Microsoft)

Straight from the horse's mouth (or rather, blog)

According to Windows Central, this confirmation of Windows’ annual major feature update comes to us from a Windows 11 preview build changelog published on February 8, 2024. Microsoft writes: 

“Starting with Build 26-xx today, Windows Insiders in the Canary and Dev Channels will see the versioning updated under Settings > System > About (and winver) to version 24H2. This denotes that Windows 11, version 24H2 will be this year’s annual feature update.”

Windows 11 24H2 will still absolutely be worth updating to, as Microsoft is currently one of the leaders in the personal computing space actively pursuing and developing AI user assistance. We’ve seen evidence of this with Microsoft’s enthusiastic debut and continued campaign to bring Windows Copilot, its digital AI assistant that’s even getting its own keyboard button, to users. New AI features will make use of the cutting-edge processors in recently manufactured devices from AMD, Intel, and Qualcomm, all of which have recently released (or at least announced) new chips with dedicated support for artificial intelligence.

Qualcomm Snapdragon 8 Gen 2 sustained graphics performance Ziad Asghar Snapdragon Summit 2022

(Image credit: Future / Alex Walker-Todd)

What's the hold up with Windows 12?

There are multiple speculated reasons why Microsoft is currently sticking with Windows 11 instead of moving on. One suggestion is that Microsoft is reluctant to split its PC user base even further with a third major Windows version on the market. That user base is already somewhat split, with many users preferring to stick with Windows 10 (whose users reportedly outnumber Windows 11 users more than twofold). 

Meanwhile, Windows Central points to another (very reasonable) factor: Windows’ and Surface’s former leader, Panos Panay, has departed Microsoft. Panay had headed up the Surface team since its inception, and had led the development of Windows since 2020. 

It’s a major change-up for Microsoft internally, and along with Windows 10’s continued widespread popularity, the company is probably somewhat hesitant to release Windows 12 during this period. Microsoft is planning to end support for Windows 10 in 2025 to have a better chance of consolidating its user base, and it’s probably waiting at least until then to introduce Windows 12. 



Apple reveals new Vision Pro advert, as Meta plans Android-style rivalry

A slick new advert for the Apple Vision Pro has just appeared on the official Apple YouTube account, just a few days ahead of the first shipments of the mixed reality headset being sent out to customers. Meanwhile, a new report claims that Meta now considers itself to be the headset's main, Android-style adversary. 

The 68-second video has the usual Apple polish and a lot of the ingredients we've come to expect from Apple commercials – such as a classic pop song, aspirational lifestyles, travel, family and friends. It's called “Hello Apple Vision Pro” and the promise in the caption is that “you can do the things you love in ways never before possible”.

It's actually a helpful preview of some of the features and experiences that the Vision Pro offers: watching movies, working on presentations in a virtual 3D space, making FaceTime calls, bringing up images and video that wrap around your field of vision, and more.

As you would expect from an advertisement, it's somewhat selective in what it shows. There's no sign of the Vision Pro battery pack, and Napoleon is an interesting choice as the featured movie, because Ridley Scott's historical epic is seven minutes longer than the Vision Pro's official estimated battery life. You might need a recharge for the end credits.

More competition

We've already spent some time with Apple's headset for our hands-on Apple Vision Pro review, though not enough yet for a full review. Those first verdicts are going to be interesting, as will the impact of the headset on the augmented/virtual/mixed reality hardware market as a whole.

The Meta Quest 3 and Meta Quest Pro are now in direct competition with Apple's new device, but Meta executives don't seem to be overly perturbed by the Vision Pro's arrival. As per the Wall Street Journal (via 9to5Mac), Meta is hoping that the Vision Pro boosts interest in the tech in general, leading to more sales of the cheaper Meta headsets too.

Meta executives are “optimistic”, sources have told the WSJ, with the report going on to say that Meta is hoping to be the Android of the AR/VR/MR space – in other words, the main alternative to Apple, as Google's mobile operating system is on phones and tablets.

According to the WSJ, the Vision Pro has “influenced Meta's thinking” when it comes to embracing mixed reality experiences, and improving natural gesture control – Apple's headset relies on eye and finger tracking, while the Meta devices are primarily operated by physical controllers, with gesture support in testing.



Microsoft reveals new Copilot Pro subscription service that turbo-charges the AI assistant in Windows 11 for $20 a month

Microsoft is taking Copilot in Windows 11 to the next level with Copilot Pro, and bringing Microsoft 365 Copilot to businesses of all sizes. 

Windows Copilot and 365 Copilot, Microsoft’s newest AI digital assistants introduced last year to help users with all kinds of tasks and projects, are getting a major boost with a higher tier of AI functionality. 

Microsoft is officially debuting Copilot Pro, available for individual users to subscribe to for $20 per month (per user) starting today, January 16. 

This version of Copilot will allow individuals to upgrade their productivity and user experience with the best Copilot has to offer in terms of AI features, capability, and performance, as well as access to Copilot at peak times. 

This will also grant users with a Microsoft 365 Personal or Family subscription access to Copilot Pro in apps like Word, Excel, OneNote, and PowerPoint on PC, Mac, and iPad. This is similar to the existing Microsoft 365 Copilot made for enterprise customers, which requires an enterprise subscription, but now these Copilot AI capabilities will be available to Microsoft 365 Personal and Family subscribers as well. 

The crème de la Copilot on offer

If you choose to sign up for Copilot Pro, it will grant you priority access to the latest OpenAI models, like the state-of-the-art GPT-4 Turbo from OpenAI, and enable you to build and tailor your own Copilot GPT bot to a topic of your choosing. 

Copilot Pro will give users greater agency in how they use Copilot Pro, allowing them to toggle between models and try out different options to optimize their experience. 

Users will be able to build and mold these personalized Copilot GPTs in a brand new Copilot GPT Builder (similar to the commercial version launched last year) by answering some straightforward prompts, and Microsoft assures us it’s coming soon. 

You can also look forward to an upgrade to the AI image generation from Microsoft with Image Creator from Designer (formerly known as Bing Image Creator). With Copilot Pro, you’ll get one hundred boosts (accelerated image generation processes), greater image detail and quality, and the landscape image format.

Along with the introduction of Copilot Pro for individual use, Copilot for Microsoft 365 will be available to more types of commercial customers, particularly small- and medium-sized businesses. From now on, there’s no employee minimum, lower prerequisites, and more availability of Copilot subscriptions through Microsoft partners.

New upgrades to Copilot and a new Copilot app

Copilot imagery from Microsoft

(Image credit: Microsoft)

For those users who want to continue experimenting with Copilot for free, there’s something to watch out for as well. The free version of Copilot is getting Copilot GPTs, which allow you to customize a Copilot to discuss a particular topic of your choosing. You should already be able to see some of the topics available today, such as fitness, travel, cooking and more.

Along with these developments, Copilot is getting an iOS app and an Android app, and Copilot is coming to the Microsoft 365 mobile app. With these new apps, you’ll be able to have a single AI run across your devices, able to analyze information from your web usage, your PC use, and the apps you use to make its help more context-specific.  

The Copilot app is equipped with the same powerful tools that the PC version benefits from, such as GPT-4, Dall-E 3’s image creation capabilities, and the ability to input your own images into Copilot and have it respond to them.

Copilot will be added to the Microsoft 365 app on both iOS and Android devices over the course of the next month for users who have a Microsoft account, and these users will be able to export the content that they generate as a Word or PDF document. Microsoft’s vision for this is that you’ll be able to summon Copilot almost instantly, as soon as you need it, and no matter what device you're currently using.

Microsoft is just getting started

It also looks like there are plenty more Copilot Pro features in the pipeline – similar to how we’ve seen multiple improvements to the standard version of Copilot in Windows 11. Divya Kumar relayed this while speaking to The Verge, referring to Microsoft’s recent release schedule as a “rolling thunder.” 

With Copilot Pro, Microsoft is aiming to catch the attention of “power users like creators, researchers, programmers and others” that might be interested in the latest innovations that it, with its collaborator OpenAI, has to offer. 

Microsoft has recently overtaken Apple as the most valuable company in the world, and it’s not showing signs of losing steam. Yusuf Mehdi, Microsoft’s Executive Vice President and Consumer Chief Marketing Officer, claims that Copilot empowers “every person and every organization on the planet to achieve more.” If there’s a reason that you might want or even need assistance or advice digitally, it’s clear how eager Microsoft is to be there to meet it.



Apple Vision Pro patent reveals some less-creepy uses for its external display

The Apple Vision Pro headset is potentially just weeks from launching, if the latest rumors are to be believed, but a few new details about the external display may have been leaked via an Apple patent. 

According to the document (first reported on by Patently Apple) the headset’s external display won’t just be able to enable EyeSight – which shows onlookers the wearer's eyes in a kinda creepy way – but will also be able to display a much wider array of images.

A person wearing an Apple Vision Pro, you can see their eyes on the external display

How EyeSight looks on the Apple Vision Pro (Image credit: Apple)

So far Apple has only shown the Vision Pro’s external screen displaying two things: the wearer’s eyes when they’re in mixed reality, and a colorful pattern when they’re fully immersed in virtual reality. But using the display to show a wider mix of icons makes a lot of sense, and would be useful.

A flashing “do not disturb” sign, for example, could alert people around you that you’re in an important virtual meeting or trying to focus on something, while displaying a virtual scene on the external display would give people around you an idea of what you’re looking at. 

The patent also reveals alternatives to Apple’s realistic EyeSight feature. Rather than showing a photorealistic image, the wearer's eyes could be shown as digital dots and lines, in the style of an expressive robot – still a little creepy-looking, but maybe not so weird in practice.

Various Apple Vision Pro headsets showing different ways the external display could be used

Examples from Apple’s patent showing how the external display could be used (Image credit: Apple)

Other examples in the patent document, like the clock or current weather conditions, aren’t super-helpful while you’re wearing the headset but could be handy when you aren’t. While the Vision Pro is charging on your desk the external display could be set to show you useful info like what’s on your calendar for the day, or simply what charge the headset is at.

As with all tech patents, there’s no guarantee that we’ll ever see these features in action, and even if they are on their way to the Vision Pro, they might not be ready at launch or for a while after. Until the headset is in people’s hands we won’t know what it is or isn’t capable of.

However, if Apple is indeed close to launching the headset, it shouldn't be too long before all of our Vision Pro questions are finally answered.



Apple Vision Pro 2 leak reveals what’s coming next for Apple’s headset

The Apple Vision Pro hasn't yet made its way to any actual customers, but we're already starting to hear a few whispers about what might be in the pipeline for the second generation of Apple's augmented reality and virtual reality headset.

Sources speaking to MacRumors say that the Apple Vision Pro 2 is actually going to look very similar to the original headset, although there might be changes to the speaker configuration, with a flatter shape on each side.

We might also see variations in the design of the top vents, the report says, with the possibility that clusters of small holes will replace the existing strips. There's also talk of an audio accessory in the documentation, which might refer to an external speaker.

One of the key differences will be to the rear straps, MacRumors says. The 2nd-gen headset apparently has straps that are simpler in design, and “somewhat reminiscent of the flat straps commonly found on laptop bags or backpacks”.

The waiting game

It sounds as though the next model of the Apple Vision Pro is going to retain the external battery pack that the current model has, and MacRumors also says that most of the sensors and cameras will be similar as well.

A compass, ambient light sensor, magnetometer, and gyroscope are specifically mentioned, alongside support for Wi-Fi, Bluetooth 5, and ultra-low latency audio, which is all very much as you would expect.

Based on the information included in this leak, what's known as production validation testing (PVT) is scheduled for 2025, which would mean a release date of late 2025 or early 2026. Of course, all of these details and plans could change over time.

We've previously heard that Apple is working on a cheaper Vision Pro model, but it's not entirely certain if this is it. Other improvements Apple is reportedly considering are to make the next Vision Pro lighter, more compact, and more comfortable.

