The Meta Quest 3’s popularity is proof a cheap Vision Pro can’t come soon enough

The Oculus Quest 2 has been the most popular VR headset in the world for the past couple of years – dominating sales and usage charts with its blend of solid performance, an amazing software library and, most importantly, affordability.

Now its successor – the Meta Quest 3 – is following in its footsteps. 

Just four months after launch it’s the third most popular headset used on Steam (and will likely be the second most popular in the next Steam Hardware Survey). What’s more, while we estimate the Quest 3 isn’t selling quite as well as the Quest 2 was at the four-month mark, it still looks to be a hit (plus, lower sales figures are to be expected considering it launched at a significantly higher price than the Quest 2 did).

Despite its higher cost, $499.99 / £479.99 / AU$799.99 is still relatively affordable in the VR space, and the headset’s early success reinforces an ongoing trend: affordability is the make-or-break factor in a VR gadget’s popularity.

Oculus Quest 2 floating next to its controllers

The cheap Oculus Quest 2 made VR mainstream (Image credit: Facebook)

There’s something to be said for high-end hardware such as the Apple Vision Pro bringing the wow factor back to VR (how can you not be impressed by its crisp OLED displays and inventive eye-and-hand-tracking system?), but I’ll admit I was worried that its launch – and the announcement of other high-end, high-priced headsets – would see VR return to its early, less affordable days.

Now I’m more confident than ever that we’ll see Apple’s rumored cheaper Vision Pro follow-up and other budget-friendly hardware sooner rather than later.

Rising up the charts 

According to the Steam Hardware Survey, which tracks the popularity of hardware among participating Steam users, 14.05% of all Steam VR players used a Quest 3 last month. That’s a rise of 4.78 percentage points over the previous month’s results, and puts it within spitting distance of the number two spot, which is currently held by the Valve Index – used by 15% of Steam VR players even three-and-a-half years after its launch.

It has a ways to go before it reaches the top spot, however, with the Oculus Quest 2 used by 40.64% of Steam VR players. The Quest 3’s predecessor has held this top spot for a couple of years now, and it’s unlikely to lose it to the Quest 3 or another headset for a while. Even though the Quest 3 is doing well for itself, it’s not selling quite as fast as the Quest 2 did.

(Image credit: Future)

Using Steam Hardware Survey data for January 2024 (four months after its launch) and data from January 2021 (four months after the Quest 2’s launch) – as well as average Steam player counts for these months based on SteamDB data – it appears that the Quest 3 has sold about 87% as many units as the Quest 2 did at the same point in its life.

Considering the Quest 3 is priced at $499.99 / £479.99 / AU$799.99 – a fair bit more than the $299 / £299 / AU$479 the Quest 2 cost at launch – coming even close to matching its predecessor’s sales pace is impressive. And the Quest 2 did sell very well out of the gate.

We don’t have exact Quest 2 sales data from its early days – Meta only highlights when the device passes certain major milestones – but we do know that after five months, its total sales were higher than those of all other Oculus VR headsets combined, some of which had been out for over five years. Meta’s gone on to sell roughly 20 million Quest 2s, according to a March 2023 leak. That’s about as fast as the Xbox Series X, which launched around the same time, is believed to have sold.

This 87% figure should be taken with a pinch of salt – you can find out how I got to this number at the bottom of this piece; it required pulling data from a few sources and making some reasonable assumptions – but that number, along with the Quest 2 and 3’s popularity on Steam, shows that affordability is still the most powerful driving force in the VR space. So, I hope other headset makers are paying attention.

Lance Ulanoff wearing Apple Vision Pro

The Apple Vision Pro had me a little concerned (Image credit: Future)

A scary expensive VR future

The Apple Vision Pro is far from unpopular. Reports suggest that between 160,000 and 200,000 preorders were placed for the headset ahead of its release on February 2, 2024 (some of those orders have since been put on eBay with ridiculously high markups, and others have been returned by disappointed Vision Pro customers).

The early popularity makes sense. Whatever Mark Zuckerberg says about the superiority of the Quest 3, the Apple Vision Pro is the best of the best among VR headsets from a technical perspective. There’s some debate on the comfort and immersive software side of things, but its eye tracking, ridiculously crisp OLED displays, and beautiful design do make up for that.

Unfortunately, thanks to these high-end specs and some ridiculous design choices – like the outer OLED display for EyeSight (which lets onlookers see the wearer’s eyes while the device is being worn) – the headset is pretty pricey, coming in at $3,499 for the 256GB model (it’s not yet available outside the US).

Seeing this, and the instant renewed attention Apple has drawn to the VR space – with high-end rivals like the Samsung XR headset now on the way – I’ll admit I was a little concerned we might see a return to VR’s early, less accessible days. In those days, you’d spend around $1,000 / £1,000 / AU$1,500 on a headset and the same again (or more) on a VR-ready PC.

Valve Index being worn by a person

The Valve Index is impressive, but it’s damn expensive (Image credit: Future)

Apple has a way of driving the tech conversation and development in the direction it chooses – be it turning niche tech into a mainstream affair, as it did for smartwatches with the Apple Watch, or renaming well-established terms by sheer force of will (VR computing and 3D video are now almost exclusively called spatial computing and spatial video after Apple started using those phrases).

While, yes, there’s something to be said for the wow factor of top-of-the-line tech, I was worried that we’d be swamped with the stuff while more budget-friendly options were forgotten about, simply because that was the direction Apple had pushed the industry with its Vision Pro.

The numbers in the Steam Hardware Survey have assuaged those fears. They show that meaningful budget hardware – like the Quest 2 and 3, which, despite being newer, have less impressive displays and specs than many older, pricier models – is still too popular to be going anywhere anytime soon.

If anything, I’m more confident than ever that Apple, Samsung, and the like need to get their own affordable VR headsets out the door soon. Especially the non-Apple companies that can’t rely on a legion of rabid fans ready to eat up everything they release. 

If they don’t launch budget-friendly – but still worthwhile – VR headsets, then Meta could once again be left as the only real contender in this sector of VR. Sure, I like the Meta headsets I’ve used, but nothing helps spur on better tech and/or prices than proper competition. And this is something Meta is proving it doesn’t really have right now.

Girl wearing Meta Quest 3 headset interacting with a jungle playset

(Image credit: Meta)

Where did my data come from?

It’s important to know where data has come from and what assumptions have been made by people handling that data, but, equally, not everyone finds this interesting, and it can get quite long and distracting. So, I’ve put this section at the bottom for those interested in seeing my work on the 87% sales figure comparison between the Oculus Quest 2 and Meta Quest 3 four months after their respective launches.

As I mentioned above, most of the data for this piece has been gathered from the Steam Hardware Survey. I had to rely on the Internet Archive’s Wayback Machine to see some historical Steam Hardware Survey data because the results page only shows the most recent month’s figures.

When looking at the relative popularity of headsets in any given month, I could just read off the figures in the survey results. However, to compare the Quest 2 and Quest 3’s four-month sales to each other, I had to use player counts from SteamDB and make a few assumptions.

The first assumption is that the Steam Hardware Survey’s data is consistent for all users. Because Steam users have to opt in to the survey, when it says that 2.24% of Steam users used a VR headset in January 2024, what it really means is that 2.24% of Steam Hardware Survey participants used a VR headset that month. There’s no reason to believe the survey’s sample isn’t representative of the whole of Steam’s user base, and this is an assumption that’s generally taken for granted when looking at Hardware Survey data. But if I’m going to break down where my numbers come from, I might as well do it thoroughly.

Secondly, I had to assume that Steam users only used one VR headset each month and that they didn’t share their headsets with other Steam users. These assumptions allow me to say that if the Meta Quest 3 was used for 14.05% of Steam VR sessions, then 14.05% of Steam users with a VR headset (which is 2.24% of Steam’s total users) owned a Quest 3 in January 2024. Not making these assumptions leads to an undercount and an overcount, respectively, so they kinda cancel each other out. Also, without these assumptions, I couldn’t continue beyond this step, as I’d lack the data I need.

The Oculus Quest 2 headset sat on top of its box and next to its controllers

Who needs more than one VR headset anyway? (Image credit: Shutterstock / agencies)

Valve doesn’t publish Steam’s total user numbers, and the last time it published monthly active user data was in 2021 – and that was an average for the whole year rather than for each month. It also doesn’t say how many people take part in the Hardware Survey. All it does publish is how many people are using Steam right now. SteamDB tracks this information over time, which let me see Steam’s Daily Active User (DAU) average for January 2021 and January 2024 (as well as other months, but I only care about these two).

My penultimate assumption was that the proportion of DAUs compared to the total number of Steam users in January 2021 is the same as the proportion of DAUs compared to the total number of Steam users in January 2024. The exact proportion of DAUs to the total doesn’t matter (it could be 1% or 100%). By assuming it stays consistent between these two months, I can take the DAU figures I have – 25,295,361 in January 2024 and 24,674,583 in January 2021 – multiply them by the percentage of Steam users with a Quest 3 and Quest 2 during these months, respectively – 0.31% and 0.37% – then finally compare the numbers to one another.

The result is that the number of Steam users with a Quest 3 in January 2024 is 87.05% of the number of Steam users with a Quest 2 in January 2021.

My final assumption was that Quest headset owners haven’t become more or less likely to connect their devices to a PC to play Steam VR. So if the Quest 3 was 87% as popular on Steam four months after its launch as the Quest 2 was at the same point, I can say the Quest 3 has sold 87% as many units as the Quest 2 did in its first four months on sale.
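Put together, the whole chain above reduces to a short calculation. Here’s a sketch using the figures quoted in this piece – note that the 0.37% Quest 2 share is a rounded number, so this lands a touch off the 87.05% I derived from the unrounded data:

```python
# Sketch of the Quest 3 vs Quest 2 four-month sales comparison, using the
# rounded figures quoted in the article (so the result differs slightly
# from the 87.05% computed with unrounded survey data).

dau_jan_2024 = 25_295_361   # SteamDB average daily active users, Jan 2024
dau_jan_2021 = 24_674_583   # SteamDB average daily active users, Jan 2021

vr_share_2024 = 0.0224      # 2.24% of Steam users used a VR headset
quest3_of_vr = 0.1405       # 14.05% of those used a Quest 3
quest3_share = vr_share_2024 * quest3_of_vr   # ≈ 0.31% of all Steam users

quest2_share = 0.0037       # 0.37% of all Steam users owned a Quest 2 in Jan 2021

quest3_users = dau_jan_2024 * quest3_share
quest2_users = dau_jan_2021 * quest2_share

ratio = quest3_users / quest2_users   # ≈ 0.87, i.e. roughly 87%
```

The exact proportion of DAUs to total users cancels out of the division, which is why the final assumption about that proportion staying constant is all the calculation needs.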


TechRadar – All the latest technology news


Windows 11 could soon deliver updates that don’t need a reboot

Windows 11 could soon run updates without rebooting, if the rumor mill is right – and there’s already evidence this is the path Microsoft is taking in a preview build.

This comes from a regular source of Microsoft-related leaks, namely Zac Bowden of Windows Central, who first of all spotted that Windows 11 preview build 26058 (in the Canary and Dev channels) was recently updated with an interesting change.

Microsoft is pushing out updates to testers that do nothing and are merely “designed to test our servicing pipeline for Windows 11, version 24H2.” The key part is that we’re informed those who have VBS (Virtualization Based Security) turned on “may not experience a restart upon installing the update.”

Running an update without requiring a reboot is known as “hot patching” and this method of delivery – which is obviously far more convenient for the user – could be realized in the next major update for Windows 11 later this year (24H2), Bowden asserts.

The leaker has tapped sources for further details, and observes that we’re talking about hot patching for the monthly cumulative updates for Windows 11 here. So the bigger upgrades (the likes of 24H2) wouldn’t be hot-patched in, as clearly there’s too much work going on under the hood for that to happen.

Indeed, not every cumulative update would be applied without a reboot, Bowden further explains. This is because hot patching uses a baseline update, one that can be patched on top of, but that baseline model needs to be refreshed every few months.

Add seasoning with all this info, naturally, but it looks like Microsoft is up to something here based on the testing going on, which specifically mentions 24H2, as well.

Analysis: How would this work exactly?

What does this mean for the future of Windows 11? Well, possibly nothing. After all, this is mostly chatter from the grapevine, and what’s apparently happening in early testing could simply be abandoned if it doesn’t work out.

However, hot patching is something that is already employed with Windows Server, and the Xbox console as well, so it makes sense that Microsoft would want to use the tech to benefit Windows 11 users. It’s certainly a very convenient touch, though as noted, not every cumulative update would be hot-patched.

Bowden believes the likely scenario would be quarterly cumulative updates that need a reboot, followed by hot patches in between. In other words, we’d get a reboot-laden update in January, say, followed by two hot-patched cumulative updates in February and March that could be completed quickly with no reboot needed. Then, April’s cumulative update would need a reboot, but May and June wouldn’t, and so on.
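Bowden’s speculated cadence boils down to a simple pattern. As a purely hypothetical sketch (this is his speculation, not anything Microsoft has confirmed), it would look like this:

```python
# Hypothetical illustration of the quarterly cadence described above:
# a baseline cumulative update (reboot required) every third month,
# with the two cumulative updates in between applied as hot patches.
# The month numbers are an assumption for illustration only.

def needs_reboot(month):
    """month is 1-12; baseline updates land in Jan, Apr, Jul, Oct."""
    return (month - 1) % 3 == 0

schedule = {m: ("reboot" if needs_reboot(m) else "hot patch")
            for m in range(1, 13)}
```

Under this pattern, only four of the twelve monthly cumulative updates in a year would force a restart.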

As mentioned, annual updates certainly wouldn’t be hot-patched, and neither would out-of-band security fixes for example (as the reboot-less updates rely on that baseline patch, and such a fix wouldn’t be based on that, of course).

This would be a pretty cool feature for Windows 11 users, because dropping the need to reboot – or be forced to restart, in some cases – is obviously a major benefit. Is it enough to tempt upgrades from Windows 10? Well, maybe not, but it is another boon to add to the pile for those holding out on Microsoft’s older operating system (assuming they can upgrade to Windows 11 at all, of course, which is a stumbling block for some due to PC requirements like TPM).


What is OpenAI’s Sora? The text-to-video tool explained and when you might be able to use it

ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.

It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.

Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.

OpenAI Sora release date and price

In February 2024, OpenAI Sora was made available to “red teamers” – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.

“We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” says OpenAI.

In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it. 

Two dogs on a mountain podcasting

(Image credit: OpenAI)

We can make some rough guesses about timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded earlier that year by InstructGPT. Also, OpenAI's DevDay typically takes place annually in November.

It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.

As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.

But Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process takes longer too. So it still isn't clear exactly how well Sora, which is for now effectively a research project, might convert into an affordable consumer product.

What is OpenAI Sora?

You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.

OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like “woman walking down a city street at night” or “car driving through a forest” and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.


To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like “a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand”.

How does OpenAI Sora work?

On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.

Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on “internet-scale data” to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.

So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.

OpenAI Sora

Sora starts messier, then gets tidier (Image credit: OpenAI)

Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.

And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
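The noisy-to-clean idea described above can be caricatured in a few lines of code. This is a deliberately simplified sketch: the “clean” target here is hard-coded purely to show the progression, whereas a real diffusion model like Sora uses a trained neural network to predict each denoising step.

```python
import random

# Toy denoising loop: start from pure noise and repeatedly nudge every
# value a little toward a "clean" target signal. Each pass removes a
# fraction of the remaining noise, mimicking the feedback loops that
# turn diffusion-model blobs into a coherent frame.

def toy_denoise(target, steps=50, seed=0):
    rng = random.Random(seed)
    # Begin with random noise the same length as the target signal
    sample = [rng.uniform(-1.0, 1.0) for _ in target]
    for _ in range(steps):
        # Move 20% of the way toward the target on each step
        sample = [s + 0.2 * (t - s) for s, t in zip(sample, target)]
    return sample

clean = [0.0, 0.5, 1.0, 0.5, 0.0]  # the "frame" we want to reach
result = toy_denoise(clean)
```

After 50 steps, virtually all of the initial noise has been removed and the sample sits within a hair of the clean target – the same trajectory, in miniature, as the dog-in-the-snow frames above.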

What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.

What can you do with OpenAI Sora?

At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.

It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.

OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props, it's making an incredible number of calculations about where pixels should go from frame to frame.

In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.

Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.

How can you use OpenAI Sora?

At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generated AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.

Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.

Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.


Watch out, Apple Vision Pros are reportedly cracking all on their own

If you’ve spent $3,500 or more on the Apple Vision Pro you’d be understandably frustrated if you damaged the outer screen and had to pay $799 (or $299 with AppleCare+) to get it fixed. But imagine how much more annoyed you’d be if it cracked for seemingly no reason at all.

That’s what some people are taking to social media to complain about, after they discovered cracks extending upwards from the nose bridge of their pricey Apple headset – which they all claim appeared despite them never dropping, bumping, or damaging the headset.

Reddit user dornbirn explained that after putting their headset away for the night, they woke up to find a large crack extending from the nose bridge. u/ContributionFar8997, u/inphenite, and u/Wohinbistdu all shared similar complaints to the Vision Pro subreddit, with images of their Vision Pros showing practically identical cracks extending from the nose bridge.

You should always take posts on the internet with a pinch of salt, but the fact that every crack looks the same and has seemingly appeared while the headset wasn’t in use suggests that this is some kind of manufacturing issue rather than user error.

We’ve reached out to Apple to find out what's causing the apparent cracks and if it has any advice for Vision Pro customers who are worried about their screens breaking.


Why are Vision Pro screens cracking? 

It’s not clear exactly why the outer screen is cracking, but the reports we’ve seen all come from people who discovered the Vision Pro was damaged after leaving the device charging with the front cover on.

Our best guess right now is that as the headset charges it heats up, and because of the cover this heat doesn’t dissipate quickly. As the outer screen warms it expands, with perhaps one of the inner layers expanding faster than the outer layer, causing tension.

Given that the nose bridge is the area with the most complex curved design, it makes sense that this would be the place where the tension is at its highest. So when the screen can’t take any more, this is where it would most likely crack – explaining why all the images show near-identical damage.

We're not engineers though, so to know for sure we'll need to wait for an official Apple explanation of what's causing the cracks.

An Apple support employee in an Apple Store with customers.

Apple Store support staff should be able to help  (Image credit: Apple)

I have a Vision Pro, what should I do? 

Because there are so many unknown factors it’s tough to say exactly what measures you should take to avoid the same issue happening to your Vision Pro. 

Based on the current evidence we’d suggest that you don’t charge the headset with the cover on and that you don’t leave it charging for longer than is necessary. However, the best thing to do is to keep an eye out for Apple’s official guidance, and if a crack forms in your Vision Pro contact support as soon as you can. 

While some users have said the Apple Care support team hasn’t been the most helpful – asking them to pay to get the screen fixed – u/Wohinbistdu posted an update to their original Reddit post saying that they were able to take their Vision Pro to the Apple Store and get a replacement unit. Their original has apparently been sent off for Apple’s engineers to investigate.

This was 12 days ago at the time of writing so hopefully Apple is close to finding what’s causing the problems, and is almost ready with a fix.


Nvidia finally catches up to AMD and drops a new app that promises better gaming and creator experiences

Nvidia has announced plans to bring together the features of the Nvidia Control Panel, GeForce Experience, and RTX Experience apps all in a single piece of software. On February 22, Nvidia explained on its website that this new unified app is being made available as a public beta. This means that the app could still be changed in the hopes of improving it, but you can download it now and try it for yourself.

The app is made specifically to improve the experience of gamers and creators currently using machines equipped with Nvidia GPUs by making it easier to find and use functions that formerly lived in separate programs. 

Users with suitable Nvidia GPUs can expect a number of significant improvements from this new centralized app. Settings for optimizing gaming experiences (by tweaking graphical settings based on your hardware) and for downloading and installing new drivers can now be found in one easy interface.

It’ll be easier to understand and keep track of driver updates, such as new features and fixes for bugs, with clear descriptions. While in-game, users should see a redesigned overlay that makes it easier to access features and tools like filters, recording tools, monitoring tools, and more. Speaking of filters, Nvidia is introducing new AI Freestyle Filters which can enhance users’ visuals and allow them to customize the aesthetics of their games. As well as all of these upgrades, users can easily view and navigate bundles, redeem rewards, get new game content, view current GeForce NOW offers, and more.

Screenshot of the webpage where users can download the Nvidia app beta

(Image credit: Future)

Nvidia's vision

It certainly seems like Nvidia has worked hard to create a more streamlined app that makes it easier to use your RTX-equipped PC. It’s specifically intended to make it easier to do things like make sure your PC is updated with the latest Nvidia drivers, and to quickly discover and install other Nvidia apps including Nvidia Broadcast, GeForce NOW, and more. The Nvidia team also claims in its announcement that this new centralized app will perform better on RTX-GPU-equipped PCs than its separate predecessors, thanks to reduced installation times, a more responsive user interface (UI), and the fact that it should take up less disk space than its predecessors did (I assume combined).

This isn’t the end of the new Nvidia app’s development, and it seems some legacy features didn’t make the cut, including 360/Stereo photo modes and streaming directly to YouTube and Twitch, because they see less use. Clearly, Nvidia felt it wasn't worth including these more niche features in the new app, and anyone who wants to continue to use them can still use the older apps (for now, at least). The new app is focused on improving performance, and making it easier to install and integrate new features into users’ systems. 

An Nvidia GeForce RTX 2060 slotted into a PC with its fans showing

(Image credit: Future)

By combining its apps into one easy-to-use piece of software, Nvidia is finally catching up to AMD in one aspect where Team Red has the advantage: software. AMD's Radeon Adrenalin app already offers a lot of these features, as well as others, like a built-in browser and HDMI link assurance, which can automatically detect any issues with HDMI connectivity – all in one single interface.

On top of that, AMD doesn’t require users to make an account to be able to use its app. We don’t expect that Nvidia will fully catch up to AMD’s app just yet (though it would be nice not to have to sign in), but this is definitely a push in the right direction, and hopefully users will get a lot of use out of the new app.



Are you a Reddit user? Google’s about to feed all your posts to a hungry AI, and there’s nothing you can do about it

Google and Reddit have announced a huge content licensing deal, reportedly worth a whopping $60 million – but Reddit users are pissed.

Why, you might ask? Well, the deal involves Google using content posted by users on Reddit to train its AI models, chiefly its newly launched Google Gemini AI suite. It makes sense; Reddit contains a wealth of information and users typically talk colloquially, which Google is probably hoping will make for a more intelligent and more conversational AI service. However, this also essentially means that anything you post on Reddit now becomes fuel for the AI engine, something many users are taking umbrage at.

While the very first thing that came to mind was MIT’s insane Reddit-trained ‘psychopath AI’ from years ago, it’s fair to say that AI model training has come a long way since then – so hooking it up to Reddit hopefully won’t turn Gemini into a raving lunatic.

The deal, announced yesterday by Reddit in a blog post, will have other benefits as well: since many people specifically append ‘reddit’ to their search queries when looking for the answer to a question, Google aims to make getting to the relevant content on Reddit easier. Reddit plans to use Google’s Vertex AI to improve its own internal site search functionality, too, so Reddit users will enjoy a boost to the user experience – rather than getting absolutely nothing in return for their training data. 

Do Redditors deserve a cut of that $60 million?

A lot of Reddit users have been complaining about the deal in various threads on the site, for a wide variety of reasons. Some users have privacy worries, some voiced concerns about the quality of output from an AI trained on Reddit content (which, let’s be honest, can get pretty toxic), and others simply don’t want their posts ‘stolen’ to train an AI.

Unfortunately for any unhappy Redditors, the site’s Terms of Service do mean that Reddit can (within reason) do whatever it wants with your posts and comments. Calling the content ‘stolen’ is inaccurate: if you’re a Reddit user, you’re the product, and Reddit is the one selling. 

Personally, I’m glad to see a company actually getting paid for providing AI training data, unlike the legal grey-area dodginess of previous chatbots and AI art tools that were trained on data scraped from the internet for free without user consent. By agreeing to the Reddit TOS, you’re essentially consenting to your data being used for this.

A person introduces Google Gemini

Google Gemini could stand to benefit hugely from the training data produced by this content use deal. (Image credit: Google)

Some users are positively incensed by this though, claiming that if they’re the ones making the content, surely they should be entitled to a slice of the AI pie. I’m going to hand out some tough love here: that’s a ridiculous and naive argument. Do these people believe they deserve a cut of ad revenue too, since they made a hit post that drew thousands of people to Reddit? This isn’t the same as AI creators quietly nabbing work from independent artists on Twitter.

At the end of the day, you’re never going to please everyone. If this deal has actual potential to improve not just Google Gemini, but Google Search in general (as well as Reddit’s site search), then the benefits arguably outweigh the costs – although I do think Reddit has a moral obligation to ensure that all of its users are fully informed about the use of their data. 

A few paragraphs in the TOS aren’t enough, guys: you know full well nobody reads those.

You might also like


Microsoft brings one of the Google Pixel’s best features to Windows 11

The Google Pixel series has given us some of the best phones on the market, and one thing that sets it apart from other phones is its suite of built-in generative AI features, like Best Photo and Magic Eraser. Now, thanks to a tool coming to the Windows Photos app, you won’t need to buy a whole new phone just to get your hands on these types of features. 

Microsoft has announced in a blog post that the ‘Spot fix’ tool in the desktop Photos app will be getting an AI boost, and will now be known as ‘Generative erase’. 

Generative erase will allow you to remove imperfections from your photos in a more natural-looking way, like removing random people in the background and replacing them with an AI-generated backdrop – basically, the exact same way that Magic Eraser works on a Pixel phone. Microsoft notes in the blog post that “Generative erase creates a more seamless and realistic result after objects are erased from the photo, even when erasing large areas”. 

Windows Photos App

The before-and-after is quite impressive – the AI alterations are barely noticeable at first glance. (Image credit: Windows)

Keep it coming!

The example ‘before and after’ image in the blog post shows a very cute dog on the beach, wearing a collar, with some people in the background. After using Generative erase, the new photo looks entirely organic, with the dog now collar-free and no people in the background. Even when you zoom in to where the collar and people originally were, you can’t see any visible evidence that the image was altered at all. 

It’s an incredibly impressive editing job – considering that it takes very little time and zero effort – and I’m very excited to see it in action when it does make its way over to Windows. It won’t just be Windows 11 users who get to enjoy the new feature, either; Microsoft will be adding the full suite of Photos AI features to Windows 10 too, proving that the older OS isn’t dead just yet.

Currently, the tool is reserved for Windows Insiders, the community of Windows enthusiasts and developers who get early access to potential new features. However, the fact that Microsoft is publicly discussing the feature is a good sign that we will see it sooner rather than later. Alongside Generative erase, the blog notes very briefly that we could also see background blurring and removal features join the Photos app in the same upcoming update. 

The company recently announced that Microsoft Paint was getting another string of new AI features as well, so we may be seeing the beginning of a Windows-wide revamp when it comes to creative AI tools. It seems like Microsoft is putting a lot of time and effort into implementing useful generative features into its apps, which is good news for Windows users who want to experiment with artificial intelligence – without having to make a million accounts on different platforms to do so. 

Via The Verge.



Microsoft is giving Windows Copilot an upgrade with Power Automate, promising to banish boring tasks thanks to AI

Microsoft has revealed a new plug-in for Copilot, its artificial intelligence (AI) assistant, called Power Automate, which will enable users to (as the name suggests) automate repetitive and tedious tasks, such as creating and manipulating entries in Excel, handling PDFs, and managing files. 

This development is part of a bigger Copilot update package that will see several new capabilities being added to the digital AI assistant.

Microsoft gives the following examples of tasks this new Copilot plug-in could automate: 

  • Write an email to my team wishing everyone a happy weekend.
  • List the top 5 highest mountains in the world in an Excel file.
  • Rename all PDF files in a folder to add the word final at the end.
  • Move all Word documents to another folder.
  • I need to split a PDF by the first page. Can you help?
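To give a concrete sense of what one of these automations amounts to, here’s a minimal Python sketch of the PDF-renaming example above. This is purely illustrative – the function name and exact renaming behavior are assumptions for the sketch, not how Power Automate actually implements the task:

```python
from pathlib import Path

def append_final_to_pdfs(folder: str) -> list[str]:
    """Rename every PDF in `folder` so 'final' is appended before the
    extension, e.g. report.pdf -> 'report final.pdf'.
    Returns the new file names in sorted order."""
    renamed = []
    for pdf in sorted(Path(folder).glob("*.pdf")):
        target = pdf.with_name(f"{pdf.stem} final{pdf.suffix}")
        pdf.rename(target)
        renamed.append(target.name)
    return renamed
```

The appeal of the plug-in, of course, is that Copilot builds and runs this kind of flow from a plain-English request, so you never have to write the script yourself.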

Who can get the Power Automate plug-in and how

As of now, it seems like this plug-in is only available to some users with access to Windows 11 Preview Build 26058, available to Windows Insiders in the Canary and Dev Channels of the Windows Insider Program. The Windows Insider Program is a Microsoft-run community for Windows enthusiasts and professionals where users can get early access to upcoming versions of Windows, features, and more, and provide feedback to Microsoft developers to improve these before a wider rollout.

Hopefully, the Power Automate plug-in for Copilot will prove a hit with testers – and if it is, we should see it rolled out to all Windows 11 users soon.

As per the blog post announcing the Copilot update, this is the first release of the plug-in, which is part of Microsoft’s Power Platform, a comprehensive suite of tools designed to help users make their workflows more efficient and versatile – including Power Automate. To be able to use this plug-in, you’ll need to download Power Automate for Desktop from the Microsoft Store (or make sure you have the latest version of Power Automate). 

There are multiple plans for Power Automate: a free plan, suitable for personal use or smaller projects, and premium plans that offer more advanced features. From what we can tell, the ability to enable the Power Automate plug-in for Copilot will be available to all users, free and premium, but Microsoft might change this.

Once you’ve made sure you have the latest version of Power Automate downloaded, you’ll also need to be signed into Copilot for Windows with a Microsoft Account. Then you’ll need to add the plug-in to Copilot. To do this, go to the plug-in section in the Copilot app for Windows and turn on the Power Automate plug-in, which should now be visible. Once it’s enabled, you can ask Copilot to perform a task like one of the above examples and see how it copes for yourself.

Once you try the plug-in for yourself, if you have any thoughts about it, you can share them with Microsoft directly at [email protected]

Copilot in Windows

(Image credit: Microsoft)

Hopefully, a sign of more to come

The language Microsoft is using about the plug-in implies that it will see improvements in the future to enable it and, therefore, Copilot to carry out more tasks. Upgrades like this are steps in the right direction if they’re as effective as they sound. 

This could address one of the biggest complaints people have about Copilot since it was launched. Microsoft presented it as a Swiss Army Knife-like digital assistant with all kinds of AI capabilities, and, at least for now, it’s not anywhere near that. While we admire Microsoft’s AI ambitions, the company did make big promises, and many users are growing impatient. 

I guess we’ll just have to keep watching to see whether Copilot lives up to Microsoft’s messaging, or whether it goes the way of Microsoft’s other digital assistants, like Cortana and Clippy.



Microsoft Paint update could make it even more Photoshop-like with handy new tools

Microsoft Paint received a plethora of new features late last year, introducing layers, a dark mode, and AI-powered image generation. These updates brought Microsoft Paint up to speed with the rest of Windows 11's modern look and feel after years of virtually no meaningful upgrades, and it looks like Microsoft still has plans to add even more features to the humble art tool. 

X user @PhantomOfEarth made a post highlighting potential changes spotted in the Canary Development channel, and we could see these new features implemented in Microsoft Paint very soon. The Canary Dev channel is part of the Microsoft Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way. 


We do have to take the features we see in such developer channels with a pinch of salt, as it’s common for a cool upgrade or new piece of software to appear in the channel but never actually make it out of the development stage. That being said, PhantomOfEarth originally spotted the big changes to Windows 11 Paint last year in the same Dev channel, so there’s a good chance that the brush size slider and layer panel update now present in the Canary build will actually come to fruition in a public update soon.

Show my girl Paint some love

It’s great to see Microsoft continue to show some love for the iconic Paint app, as it had been somewhat forgotten about for quite some time. It seems like the company has finally taken note of the app's charm, as many of us can certainly admit to holding a soft spot for Paint and would hate to see it abandoned. I have many memories of using Paint: as a child in IT class learning to use a computer for the first time, or firing it up to do some casual scribbles while waiting for my family’s slow Wi-Fi to connect. 

These proposed features won’t make Paint the next Photoshop (at least for now), but they do bring the app closer to being a capable, simple art tool that everyday people can use for free. Cast your mind back to the middle of last year, when Photoshop introduced image generation capabilities – if you wanted to use them, you had to pay for Adobe Firefly access or a Photoshop license. Now, if you’re looking to do something quick and simple with AI image-gen, you can do it in Paint. 

Better brush size control and layers may not seem like the most important or exciting new features, especially compared to last year's overhaul of Windows Paint, but it is proof that the team at Microsoft is still thinking about Paint. In fact, the addition of a proper layers panel will do a lot to justify the program’s worth to digital artists. It could also be the beginning of a new direction for Paint if more people flock back to the revamped app. I hope that Microsoft continues to improve it – just so long as it remains a free feature of Windows.



Haven’t got round to installing Windows 11 23H2 yet? You’ll soon be forced to get the latest update

Whether you're ready or not, the 23H2 update is coming for those holdouts who have yet to upgrade their Windows 11 installation from 22H2 (or indeed 21H2).

Tom’s Hardware noticed that Microsoft updated its Windows 11 23H2 status document to let users know what’s happening, and that eligible Windows 11 devices will be automatically upgraded to version 23H2.

That means you’ll have no choice in the matter, of course. Updating to Windows 11 23H2 is mandatory at this point, with the caveat that this automatic upgrade process may not come to your PC all that soon.

Microsoft uses AI to “safely roll out this new Windows version in phases to deliver a smooth update experience,” and therefore some PC configurations may find it’s still a while before they have 23H2 foisted on them.

Alternatively, you may find the upgrade is piped to your machine imminently. It’s a roll of the hardware (and software) configuration dice, in short.

Analysis: Staying safe

Automatic upgrades being forced on Windows users is nothing new, of course. This happens whenever an update has been around for a good deal of time, and Microsoft feels everyone who is running an older version of Windows 11 (or Windows 10) needs to step up and move away from it (because it’s running out of road for support, or indeed has run out).

As for Windows 11 21H2 users (those on the original version of the OS), you may be thinking – weren’t they already forced to upgrade to 22H2? Yes, they were. So why are some folks still on 21H2? Well, a small niche of users remain on 21H2 as anomalies (we spotted a couple on Reddit), and they will be moved directly to 23H2 instead. (Hopefully, anyway – though it’s possible that not having been offered an upgrade at all so far is the result of a bug.)

Microsoft needs to push upgrades like this for security reasons. If a Windows 11 user remains on an unsupported version, they won’t get monthly security updates, which is bad news of course – their PC could be vulnerable to exploits. Hence the big updates become mandatory eventually.

