Want to skip to the good bit of a video? YouTube is testing a smart AI feature for that

I’ve been increasingly driven to distraction by YouTube’s ever-more-aggressive delivery of adverts before, during and after videos, which is making it a challenge to even get to the bits of a video that I want to see without having some earnest voice encourage me to trade stocks or go to Dubai. Until now I’ve been too cheap to subscribe to YouTube Premium – but that may soon change. 

That’s because YouTube is apparently testing an AI-powered recommendation system that analyzes patterns in viewer behavior to let you skip to the most popular part of a video with just a double tap on a touchscreen.

“The way it works is, if a viewer is double tapping to skip ahead on an eligible segment, we’ll show a jump ahead button that will take them to the next point in the video that we think they’re aiming for,” YouTube creator-centric channel Creator Insider explained. “This feature will also be available to creators while watching their own videos.”

Currently, such a double-tap action skips a YouTube video forward by a few seconds, which I don’t find hugely useful. And while YouTube has introduced a wave pattern on the video timeline to show the most popular parts of a video, it’s not the easiest thing to use and can feel unintuitive.
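If the feature works the way the description suggests, the underlying logic is simple: aggregate where viewers replay or skip to, then jump to the next spike after the current position. Here’s a minimal sketch of that idea in Python – the function name and the heatmap representation are my own assumptions, not anything YouTube has published:

```python
# Illustrative sketch only: YouTube hasn't said how jump-ahead picks its
# target. A plausible mechanism, given the replay-heatmap wave the timeline
# already shows, is to jump to the next replay-score peak after the
# viewer's current position. All names here are hypothetical.

def next_jump_target(heatmap, current_segment):
    """Return the index of the next local replay peak after current_segment.

    heatmap -- one aggregate replay score per video segment.
    """
    for i in range(current_segment + 1, len(heatmap) - 1):
        # A local peak: strictly above the previous segment,
        # at least as high as the next one.
        if heatmap[i] > heatmap[i - 1] and heatmap[i] >= heatmap[i + 1]:
            return i
    return len(heatmap) - 1  # no peak ahead; fall back to the end


# Example: at segment 2, the next spike in replays is segment 5.
print(next_jump_target([1, 2, 5, 3, 2, 8, 4, 1], 2))
```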

So being able to easily tap to get to the most popular part of a video, at least according to an AI, could be a boon for impatient people like me. The only wrinkle is that this feature is only being tested for YouTube Premium users, and is currently limited to the US.

But such features do tend to get a larger global rollout once they come out of the testing phase, meaning there’s scope for Brits like myself to have access to some smart double-tap video skipping – that’s if I do finally decide to bite the bullet and pay for YouTube Premium.

You might also like

TechRadar – All the latest technology news

Read More

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given "visual artists, designers, creative directors, and filmmakers" access to Sora, and showcased their efforts in a "first impressions" blog post.

While all of the films ranging in length from 20 seconds to a minute-and-a-half are visually stunning, most are what you might describe as abstract. OpenAI's Artist In Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from idea to final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


YouTube TV's refreshed UI makes video watching more engaging for users

YouTube is redesigning its smart TV app to increase interactivity between people and their favorite channels.

In a recent blog post, YouTube described how the updated UI shrinks the main video a bit to make room for an information column housing a video’s view count, number of likes, description, and comments. Yes, despite the internet’s advice, people do read the YouTube comments section. The current layout has the same column, but it obscures the right side of the screen. YouTube states in its announcement that the redesign allows users to enjoy content “without interrupting [or ruining] the viewing experience.”

Don’t worry about this becoming the new normal. The Verge reports that the full-screen view will remain; it won’t be supplanted by the refresh or removed as the default setting. You can switch to the revamped interface at any time from within the video player screen – it’s entirely up to viewers how they curate their experience.

Varying content

What you see on the UI’s column can differ depending on the type of content being watched. In the announcement, YouTube demonstrates how the layout works by playing a video about beauty products. Below the comments, viewers can check out the specific products mentioned in the clip and buy them directly.

Shopping on YouTube TV may appear seamless; however, The Verge claims it’ll be a little awkward. Instead of buying items directly from a channel, you'll have to scan a QR code that shows up on the screen, which takes you to a web page where you complete the transaction. We contacted YouTube to double-check, and a company representative confirmed that’s how it’ll work.

Besides shopping, the far-right column will also display live scores and stats for sports games. It’ll be part of the existing “Views” suite of features, all of which can be found by triggering the correct on-screen filter.

The update will be released to all YouTube TV subscribers in the coming weeks. It won’t happen all at once, so keep an eye out for the patch when it arrives.

Be sure to check out TechRadar's recommendations for the best TVs for 2024 if you're looking to upgrade.


Windows 11’s next big AI feature could turn your video chats into a cartoon

Windows 11 users could get some smart abilities that allow for adding AI-powered effects to their video chats, including the possibility of transporting themselves into a cartoon world.

Windows Latest spotted the effects being flagged up on X (formerly Twitter) by regular leaker XenoPanther, who discovered clues to their existence by digging around in a Windows 11 preview build.

These are Windows Studio effects, which is a set of features implemented by Microsoft in Windows 11 that use AI – requiring an NPU in the PC – to achieve various tricks. Currently, one of those is making it look like you’re making eye contact with the person on the other end of the video call. (In other words, making it seem like you’re looking at the camera, when you’re actually looking at the screen).

The new capabilities appear to offer the choice of making the video feed look like an animated cartoon, a watercolor painting, or an illustrated drawing (like pencil or felt-tip artwork – we’re assuming something like the video for the eighties classic ‘Take on Me’ by A-ha).

If you’re wondering what Windows Studio is capable of as it stands: as well as the aforementioned eye contact feature – which is very useful for facilitating a more natural interaction in video chats or meetings – it can also apply background effects. That includes blurring the background in case there’s something you don’t want other chat participants to see (like the fact you haven’t tidied up your study in about three years).

The other existing feature is automatic framing, which keeps you centered, with the image zoomed and cropped appropriately, as (or if) you move around.


Analysis: That’s all, folks!

Another Microsoft leaker, Zac Bowden, replied to the above tweet to confirm these are the ‘enhanced’ Windows Studio effects that he’s talked about recently, and that they look ‘super cool’ apparently. They certainly sound nifty, albeit on the more off-the-wall side of the equation than existing Windows Studio functionality – they’re fun aspects rather than serious presentation-related AI powers.

Given that two leakers have chimed in here, this is something we might well see in testing soon. We might even see these effects arrive in Windows 11 24H2 later this year.

Of course, there’s no guarantee of that, but it also makes sense given that Microsoft is fleshing out pretty much everything under the sun with extra AI capabilities, wherever they can be crammed in – with a particular focus on creativity at the moment (and the likes of the Paint app).

The future is very much the AI PC, complete with NPU acceleration, as far as Microsoft is concerned.


YouTube Shorts gains an edge over TikTok thanks to new music video remix feature

YouTube is revamping the Remix feature on its ever popular Shorts by allowing users to integrate their favorite music videos into content.

This update consists of four tools: Sound, Collab, Green Screen, and Cut. The first lets you take a track from a video for use as background audio. Collab places a Short next to an artist’s content so you can dance alongside it or copy the choreography. Green Screen, as the name suggests, allows users to turn a music video into the background of a Short. Then there’s Cut, which lets creators lift a five-second portion of the original source to add to their own content, repeating it as often as they like.

It’s important to mention that none of these are brand new to the platform, as they were introduced years prior. Green Screen, for instance, hit the scene back in 2022, although it was only available for non-music videos.

Remixing

The company is rolling out the remix upgrade to all users, as confirmed by 9To5Google, but it’s releasing it incrementally. On our Android device, we’ve only received part of the update, as most of the tools are still missing. Either way, implementing one of the remix features is easy. The steps are exactly the same across the board, with the only difference being the option you choose.

To start, find the music video you want to use on the mobile app and tap the Remix button. It’ll be found in the description carousel. Next, select the remix tool. At the time of this writing, we only have access to Sound so that’ll be the one we’ll use.

YouTube Short's new Remix tool for Music Videos

(Image credit: Future)

You will then be taken to the YouTube Shorts editing page where you highlight the 15-second portion you want to use in the video. Once everything’s sorted out, you’re free to record the Short with the music playing in the back.

Analysis: A leg over the competition

The Remix feature’s expansion comes at a very interesting time. Rival TikTok recently lost access to the vast music catalog owned by Universal Music Group (UMG), meaning the platform can no longer host tracks by artists represented by the record label. This includes megastars like Taylor Swift and Drake. TikTok videos with “UMG-owned music” will be permanently muted although users can replace them with songs from other sources.

The breakup between UMG and TikTok was the result of contract negotiations falling through. Apparently, the social media platform was trying to “bully” the record label into accepting a bad deal that wouldn’t have adequately protected artists from generative AI and online harassment.  

YouTube, on the other hand, was more cooperative. The company announced last August it was working with UMG to ensure “artists and rights holders would be properly compensated for AI music.” So creators on YouTube are safe to take whatever songs they want from the label – for now. It's possible future negotiations between the two will turn sour.

If you're planning on making YouTube Shorts, you'll need a smartphone with a good camera. Be sure to check out TechRadar's list of the best iPhones for 2024 if you need some recommendations.


The Meta Quest 3 yoinks Vision Pro’s spatial video to help you relive your memories

Just as the Vision Pro launches, Meta has started rolling out software update v62 to its Meta Quest 3, Quest Pro, and Quest 2. The new software’s headline feature is that it’s now a lot easier to watch your spatial video recordings on Quest hardware – stealing the Vision Pro’s best feature.

You’ve always been able to view 3D spatial video (or stereoscopic video, as most people call it) on Quest hardware. And using a slightly awkward workaround, you could convert spatial video recordings you’ve made using an iPhone 15 Pro into a Quest-compatible format to watch them in 3D without needing Apple’s $3,500 Vision Pro. But, as we predicted it would, Meta’s made this conversion process a lot simpler with v62.

Now you can simply upload the captured footage through the Meta Quest mobile app and Meta will automatically convert and send it to your headset – even giving the videos the same cloudy border as you’d see on the Vision Pro. 

You can find the recordings, and a few Meta-made demo videos, in the spatial videos section of the Files menu on your Quest headset.

You need an iPhone 15 Pro or Pro Max to record 3D video (Image credit: Future | Alex Walker-Todd)

Spatial video has been a standout feature referenced in nearly every review featured in our Apple Vision Pro review roundup – with our own Lance Ulanoff calling it an “immersive trip” after one of his demos with the Apple headset. So it’s almost certainly not a coincidence that Meta has announced it’s nabbed the feature literally as the Vision Pro is launching.

Admittedly, Quest spatial video isn’t identical to the Vision Pro version, as you need an iPhone 15 Pro to record it – on the Vision Pro you can use an iPhone or the headset itself – but over time there’s one potential advantage Meta’s system could have: non-exclusivity.

Given that other smartphone manufacturers are expected to launch headsets of their own in the coming year or so – such as the already teased Samsung XR headset created in partnership with Google – it’s likely the ability to record 3D video will come to non-iPhones too. 

If this happens you’d likely be able to use whichever brand of phone you like to record 3D videos that you can then convert and watch on your Quest hardware through the Meta Quest app. Given Apple’s typical walled-garden approach, you’ll likely always need an iPhone to capture 3D video for the Vision Pro and Apple’s future headsets – and Samsung, Google, and other smartphone makers may impose walled gardens of their own to lock you into their hardware.

A gif showing a person pinching their fingers to open the Quest menu

(Image credit: Meta)

Other v62 improvements 

It’s not just spatial video coming in the next Quest operating system update.

Meta has added support for a wider array of controllers – including the PS5 DualSense and PS4 DualShock – that you can use to play games through apps like Xbox Cloud Gaming (Beta) or the Meta Quest Browser.

Facebook Livestreaming, after being added in update v56, is now available to all Meta Quest users. So now everyone can share their VR adventures with their Facebook friends in real-time by selecting “Go Live” from the Camera icon on the Universal Menu while in VR (provided your Facebook and Meta accounts are linked through the Accounts Center). 

If you prefer YouTube streaming, it’s now possible to see your chat while streaming without taking the headset off provided you’re using OBS software.

Lastly, Meta is improving its hand-tracking controls so you can quickly access the Universal Menu by looking at your palm and doing a short pinch. Doing a long pinch will recenter your display. You can always go back to the older Quick Actions Menu by going into your Settings, searching for Expanded Quick Actions, and turning it back on.


Wondershare Filmora 13 releases update with a better video editing experience for users at all levels

Creating content and sharing our lives online has become the norm, but not everybody can just sit down at their computer and put together high-quality video footage. Editing can be complicated even for advanced users. With Wondershare Filmora, it doesn’t have to be. Filmora 13.1.0, the latest update to the video editing suite from Wondershare, was designed to make content creation accessible to all, regardless of skill level. Ease of use doesn’t mean lacking in functionality, though, and Filmora is packed with useful features to give your videos an extra kick. 

AI Music Generator and Text-to-Speech 

Wondershare Filmora 13.1.0 update

(Image credit: Wondershare)

Sometimes we just want to create and share videos about our day-to-day lives, but we want to make those videos more interesting with background music. If you’ve taken an incredible vacation and want to share video footage of your adventure, you’re going to need music to accompany that, even if you’re just planning to share the footage with family and friends. However, finding the right music for your videos can be time-consuming. 

Filmora offers a solution with its AI Music Generator tools, which can help you create soundtracks for your videos that fit your vibe and are safe to commercialize. With Filmora you can easily make those shareable moments in your life look and sound good without worry. Filmora’s latest slate of enhancements makes it even easier to use, allowing you to utilize Text-to-Speech to add voice-overs to your vlogs with natural-sounding tones that are categorized by scene type.

Vlogs are not the only content that can benefit from these new features, either. Many of us have taken our educational endeavors online in recent years. Teachers and professors have had to find new ways to engage their students via video, becoming content creators in the process. Soundtracks created with Filmora’s AI Music Generator can help set the tone for your lectures, while Text-to-Speech can translate your lesson into clear, natural-sounding audio that is easy for students to understand and easy for you to create.

Special effects for everybody 

Some stories are too good not to be told, but not everybody has the backing of a major motion picture studio at their disposal. Filmora 13.1.0 features improved professional-caliber tools that let you create short films and music videos with ease, regardless of skill level (or production team).

Special effects have traditionally been thought of as an extremely skill-dependent part of content creation and cinematography. Filmora demystifies special effects: with just a few clicks of your mouse, your video’s action sequences can be taken up a notch with realistic motion blur that can be customized to suit your needs. Want to draw extra attention to a particular element in a scene? Filmora features a Lens Zoom Effect to simulate camera zoom, giving you the creative freedom to home in on part of a scene and further enhance your storytelling. Get ready for your close-up: a well-timed zoom can set the scene and change the tone of your video.

With the ability to digitally zoom also comes the option for digital magnification. The Magnifying Glass Tool in Filmora makes it easy for you, as an editor, to examine a scene in your video by getting up close and personal with it. Zoom in, make adjustments, correct your footage as necessary, and then return the frame to its proper size with the corrections intact. That’s professional-quality editing with no more effort than a few clicks of your mouse.

Create with the power of the cloud 

Whether you’re creating with the power of a production team or you’re a personal creator looking to share your life, one thing remains true: video content is a resource hog. If you’re working on projects that involve others, you may find that harnessing the power of the creative cloud can streamline the process and make it more accessible for everybody involved. 

Filmora 13 features improvements to its Cloud Resource Management and Beautification tools, making it easier to migrate custom LUTs to cloud storage. Seamless synchronization lets you and your collaborators color-grade assets across multiple devices, streamlining remote work and improving your workflow. You can also import media files directly from cloud storage. If you produce episodic content where consistent color grading matters, Filmora 13’s cloud-based custom LUTs can streamline the process by letting you enhance and color-grade your footage with the power of the cloud.
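For anyone curious what a LUT actually is under the hood: it’s just a precomputed table mapping input color values to graded output values, which is why syncing one through the cloud is cheap. Here’s a toy 1D-LUT sketch in Python – illustrative only, with hypothetical names; real grading LUTs (including those used in editors like Filmora) are typically 3D tables stored in formats such as .cube:

```python
# Toy example of a lookup table (LUT) for color grading. A 1D LUT maps
# each possible 8-bit channel value to a graded output value, so applying
# a grade becomes a simple table lookup per pixel.

def build_contrast_lut(strength=0.2):
    """Precompute the graded output for every possible 8-bit input value."""
    lut = []
    for v in range(256):
        # Push values away from mid-grey to raise contrast.
        graded = 128 + (v - 128) * (1 + strength)
        lut.append(max(0, min(255, round(graded))))
    return lut

def apply_lut(pixels, lut):
    """Apply the same 1D LUT to each channel of each (r, g, b) pixel."""
    return [(lut[r], lut[g], lut[b]) for r, g, b in pixels]
```

Because the table is precomputed, the same grade can be shared and re-applied on any device just by shipping the table itself – which is essentially what a cloud-synced custom LUT does.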

Everyday editing at a professional scale 

With Filmora from Wondershare, creators of all skill levels can produce professional-quality videos and content with ease. From the DIY homemaker making short content for YouTube to full-scale production teams working on episodic content, Filmora’s suite of tools can help you put out your best content with less work. Wondershare continues to improve Filmora with each upgrade, so you can spend less time editing and more time creating.


A leaked Assistant with Bard video may show how it’ll work and when it could land on your Pixel

New footage has leaked for Google’s Assistant with Bard demonstrating how the digital AI helper could work at launch.

Nail Sadykov posted the video on X (the platform formerly known as Twitter) after discovering the feature in the Pixel Tips app. Apparently, Google accidentally spilled the beans on its own tech, so it’s probably safe to say this is legitimate. It looks like something you’d see in one of the company’s Keyword posts explaining a feature in detail, except there’s no audio.

There will, based on the clip, be two ways to activate Assistant with Bard: either by tapping the Bard app and saying “Hey Google”, or by pressing and holding the power button. A multimodal input box rises from the bottom, where you can type a text prompt, upload photos, or speak a command. The demo only shows the second method, with someone taking a picture of a wilting plant and then verbally asking for advice on how to save it.

A few seconds later, Assistant with Bard manages to correctly identify the plant in the image (it’s a spider plant, by the way) and generates a wall of text explaining what can be done to revitalize it. It even links to several YouTube videos at the end.

Assistant with Bard has been something of a badly kept secret. It was originally announced back in October 2023 but has since seen multiple leaks. The biggest info dump by far occurred in early January, revealing much of the user experience as well as “various in-development features.” What’s been missing up to this point is news on whether Assistant with Bard will have any limitations. As it turns out, there may be a few restrictions.

Assistant Limitations

Mishaal Rahman, another industry insider, dove into Pixel Tips searching for more information on the update. He claims Assistant with Bard will only appear on single-screen Pixel smartphones powered by a Tensor chip. This includes the Pixel 6, Pixel 7, and Pixel 8 lines. Older models will not receive the upgrade and neither will the Pixel Tablet, Pixel Fold, or the “rumored Pixel Fold 2”.

Additionally, mobile devices must be running the Android 14 QPR2 beta “or the upcoming stable QPR2 release”, although it’s most likely going to be the latter. Rahman states he found a publication date in the Pixel Tips app hinting at a March 2024 release. It’s worth pointing out that March is also the expected launch window for Android 14 QPR2 and the next Feature Drop for Pixel phones.

There’s no word on whether other Android devices will receive Assistant with Bard; it seems it’ll be exclusive to Pixel for the moment. We could see the update elsewhere, but considering that key brands like Samsung prefer having their own AI, an Assistant with Bard expansion seems unlikely. Then again, we could be wrong.

Until we learn more, check out TechRadar's list of the best Pixel phones for 2024.


Windows 11’s AI-powered Voice Clarity feature improves your video chats, plus setup has a new look (finally)

Windows 11 has a new preview build out that improves audio quality for your video chats and more besides.

Windows 11 preview build 26040 has been released in the Canary channel (the earliest test builds) complete with the Voice Clarity feature which was previously exclusive to owners of Surface devices.

Voice Clarity leverages AI to improve audio chat on your end, canceling out echo, reducing reverberation or other unwanted effects, and suppressing any intrusive background noises. In short, it helps you to be heard better, and your voice to be clearer.

The catch is that apps need to use Communications Signal Processing Mode to have the benefit of this feature, which is unsurprisingly what Microsoft’s own Phone Link app uses. WhatsApp is another example, plus some PC games will be good to go with this tech, so you can shout at your teammates and be crystal clear when doing so.

Voice Clarity is on by default – after all, there’s no real downside here, save for using a bit of CPU juice – but you can turn it off if you want.
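Microsoft hasn’t published how Voice Clarity’s AI model works, but the classic non-AI baseline this kind of feature improves on is a simple noise gate: mute stretches of audio whose short-term energy falls below a threshold, on the assumption that quiet frames are background noise rather than speech. A minimal sketch of that baseline (hypothetical names, illustrative thresholds), for contrast with what the AI approach replaces:

```python
# Baseline noise suppression: a frame-based noise gate. Frames whose RMS
# (root-mean-square) energy is below the threshold are assumed to be
# background noise and are zeroed out; louder frames pass through intact.

def noise_gate(samples, frame_size=4, threshold=0.1):
    """Zero out frames of `samples` whose RMS energy is below `threshold`."""
    out = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        rms = (sum(s * s for s in frame) / len(frame)) ** 0.5
        out.extend(frame if rms >= threshold else [0.0] * len(frame))
    return out
```

The obvious limitation – and the reason an AI model does better – is that a gate can’t separate speech from noise that occurs at the same time; it can only silence the gaps between words.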

Another smart addition here is a hook-up between your Android phone and Windows 11 PC for editing photos. Whenever you take a photo on your smartphone, it’ll be available on the desktop PC straight away (you’ll get a notification), and you can edit it in the Snipping Tool (rather than struggling to deal with the image on your handset).

For the full list of changes in build 26040, see Microsoft’s blog post, but another of the bigger introductions worth highlighting here is that the Windows 11 setup experience has been given a long overdue lick of paint.

Windows 11 Setup

(Image credit: Microsoft)

Analysis: Setting the scene

It’s about time Windows setup got some attention, as it has had the same basic look for a long time now. It’d be nice for the modernization to get a touch more sparkle, we reckon, though the improvement is a good one, and it’s not exactly a crucial part of the interface (given that you don’t see it after you’ve installed the operating system, anyway).

We have already seen the capability for Android phone photos to be piped to the Snipping Tool appear in the Dev channel last week, but it’s good to see a broader rollout to Canary testers. It is only rolling out, though, so bear in mind that you might not see it yet if you’re a denizen of the Canary channel.

As for Voice Clarity, clearly that’s a welcome touch of AI for all Windows 11 users. Whether you’re chatting to your family to catch up at the weekend, or you work remotely and use your Windows 11 PC for meetings, being able to be heard better by the person (or people) on the other end of the call is obviously a good thing.


Windows 11 update applies a bunch of fixes for a Start menu glitch, video chat bug and more

Windows 11 just received a new update which comes with a whole load of bug fixes for versions 23H2 and 22H2, including the resolution of an issue affecting video chats, and a problem with the Start menu.

Patch KB5034204 just became available, but it’s worth noting upfront that this is a preview update, so it’s still in beta effectively.

As mentioned, one of the more important fixes here is the smoothing over of a bug relating to video calls – now that this one has been squashed, these calls should be more reliable. (So if you were having problems with video chat stability in one way or another, hopefully that’ll no longer be the case after this update.)

If you own a pair of Bluetooth Low Energy (BLE) Audio earbuds, you may have experienced the sound dropping out when streaming music – that has also been resolved with KB5034204. Also, a problem with Bluetooth phone calls – where the audio fails to route through your PC, when you answer the call on the computer – has similarly been stamped out.

Another bug Microsoft has cured is search functionality failing to work on the Start menu.

Microsoft has also addressed a problem where troubleshooters fail – not very useful given that you only run a troubleshooter when you’re already trying to solve an issue with your Windows 11 system. That bug happens when using the Get Help app, we’re told.

There are a whole host of other fixes, too, including one for Gallery in File Explorer where a tooltip couldn’t be closed (a small flaw, but an annoying one). For the full list of fixes implemented, check out Microsoft’s support document.


Analysis: Take a chance – or not?

Should you download a preview update? This is a topic we’ve discussed before, and the short answer is probably not – unless you really need one of the fixes provided.

As mentioned, by its very nature, a preview update is not yet finished – that’s why these are marked as optional, and aren’t automatically piped through to your PC (you have to manually download them from Windows Update). In short, there’s more chance of things going wrong with a preview update.

However, if you’re one of the Windows 11 users who are experiencing a more aggravating issue, like video calls or your streaming music playback being ruined, then you might decide installing the update is likely worth the risk (which should be a limited risk, after all – these updates are nearly done at this stage).

That’s the other point to bear in mind, though – as they’re nearly done, you won’t have to wait long for the fully finished cumulative update to arrive next month. In this case, this preview will become the February update for Windows 11 released on February 13, so that’s only a few weeks away now.

Generally speaking, it’s probably worth holding out unless there’s something that’s really bugging you (pardon the pun) in Windows 11 right now, and it’s one of those listed fixes.
