Annoying Windows 11 bug that distorted videos playing in Chrome or Edge browsers has finally been squashed

One of the most annoying bugs in Windows 11 has finally been fixed by Microsoft in the latest update for the OS.

The glitch in question caused visual distortions for some Windows 11 users when playing videos in Chromium-based browsers, including Google Chrome, Microsoft Edge, and Opera.

The level of distortion varies from user to user, going by reports, but it usually involves grey static and general visual corruption when you’re trying to watch a video in your browser. It sounds pretty nasty for those affected.

According to Windows Latest, the issue occurs mostly on PCs with Nvidia graphics cards, and speculation holds that the corruption may be related to Chromium power management. Thankfully, the June cumulative update (KB5039212) has finally squashed the bug, so it shouldn’t bother Windows 11 users any longer. 

A support document from Microsoft states: “This update addresses an issue that distorts parts of the screen. This occurs when you use a Chromium-based browser to play a video.”

The June update for Windows 11 also tackles issues with glitchy or unresponsive taskbars, and problems some users had with their PC failing to wake from hibernation.

Windows Latest tested the fix for these visual glitches and reported that it does indeed solve the bug. That gives us some independent confirmation that the fix works, so if you’re experiencing the issue, installing the update should resolve it.

This nasty browser-related bug has been around for quite some time now, and while I’m glad that it has finally been cured, it is rather odd that it’s taken this long. As to why, I can only guess the issue was more complex to address than it seems at face value; at any rate, it’s not the first time we’ve had to wait ages for a Windows problem to be resolved.


YouTube could soon make it impossible to use ad blockers on its videos – here’s how

YouTube’s crusade against ad blockers has seen the platform try out multiple strategies, from auto-skipping entire videos to crippling third-party apps. Now it’s trying something new.

The company is now experimenting with what could be its most insidious tactic yet – server-side ad injection. This news comes from the developer behind SponsorBlock, a popular extension that skips sponsored segments in YouTube videos, who sounded the alarm on X (the platform formerly known as Twitter).

Server-side ad injection (also called server-side ad insertion, or SSAI) is where ads are stitched directly into the video content on the server, hence the name. YouTube’s current approach is closer to client-side ad insertion, or CSAI, where ads are loaded separately by the player in your browser.

Ad blockers work by intercepting CSAI ads, but they’re powerless against SSAI. That’s because, under SSAI, advertisements are “indistinguishable from the video” itself, according to 9To5Google.

If YouTube decides to implement SSAI on a wide scale, it would essentially break ad blockers, as they’d be unable to stop commercials. A small group of users on the YouTube subreddit have reported encountering the tech, with one of the top comments noting they’re seeing ads even though they use uBlock Origin on Firefox. Nothing they’ve tried seems to fix the problem.

Possible workaround

Despite all the doom and gloom surrounding the situation, hope is not lost. The SponsorBlock developer has published an FAQ addressing SSAI on GitHub, explaining that this is not the end of the extension.

They state that if YouTube implements the injection, it would still have to send data to the video player telling it how long an advertisement lasts. Ad blockers could, in principle, obtain that data and use it to skip the commercial.

But giving an ad blocker that ability will be difficult, and it may be a while before these extensions can reliably stop SSAI. The developer states that “SponsorBlock will not work for people” while the experiment is underway.
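To illustrate the general idea, here’s a minimal TypeScript sketch – not SponsorBlock’s actual code, and the AdSegment shape and the way timing data is obtained are our assumptions – of how a blocker that had somehow recovered an ad’s start time and duration could seek past it:

```typescript
// Hypothetical ad-segment timing data. Recovering this reliably from the
// player is the hard part the SponsorBlock developer describes.
interface AdSegment {
  start: number;    // seconds into the stream where the stitched-in ad begins
  duration: number; // how long the ad runs, in seconds
}

// Watch the playhead and jump over any known ad segment.
function autoSkipAds(video: HTMLVideoElement, segments: AdSegment[]): void {
  video.addEventListener("timeupdate", () => {
    for (const seg of segments) {
      const end = seg.start + seg.duration;
      if (video.currentTime >= seg.start && video.currentTime < end) {
        video.currentTime = end; // seek past the ad
        break;
      }
    }
  });
}
```

Since the timeupdate event only fires a few times per second, a brief flash of the ad could still play before the seek lands – one of many reasons a robust SSAI blocker is harder than it looks.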

New restrictions

In addition to SSAI, a group of developers has found a potential new restriction on YouTube, where the platform tells you to log into your account before you can watch content.

The website apparently wants to make sure “you’re not a bot.” Android Authority, in its report, believes YouTube might soon “limit logged-out video access in the future.” If this is ever introduced, it would severely limit how YouTube videos are shared. 


Software developers are, however, a wily bunch. The team behind content downloader Cobalt has found a way to circumvent the restriction. But YouTube could roll out stronger limitations on content sharing and an even stronger crackdown on ad blockers.

Be sure to check out TechRadar's list of the best free YouTube download apps for 2024.


A new OpenAI Sora rival just landed for AI videos – and you can use it right now for free

The text-to-video AI boom has really kicked off in the past few months, with the only downside being that the likes of OpenAI Sora still aren't available for us to try. If you're tired of waiting, a new rival called Dream Machine just landed – and you can take it for a spin right now.

Dream Machine is made by Luma AI, which previously released an app that helps you shoot 3D photos with your iPhone. Now it's turned its attention to generative video, and Dream Machine has a free tier that you can use right now with a Google account – albeit with some caveats.

The main one is that Dream Machine seems to be slightly overwhelmed at the time of writing. There's currently a banner on the site stating that “generations take 120 seconds” and that “due to high demand, requests will be queued”. Our text prompt took over 20 minutes to be processed, but the results are pretty impressive.

Dream Machine's outputs are more limited in length and resolution compared to the likes of OpenAI's Sora and Kling AI, but it's a good taster of how these services will work. The clips it produces are five seconds long and in 1360×752 resolution. You just type a prompt into its search bar and wait for it to appear in your account, after which you can download a watermarked version. 

While there was a lengthy wait for the results (which should hopefully improve once initial demand has dropped), our prompt of 'a close-up of a dog in sunglasses driving a car through Las Vegas at night' produced a clip that was very close to what we envisaged. 

Dream Machine's free plan is capped at 30 generations a month, but if you need more there are Standard (120 generations, $ 29.99 a month, about £24, AU$ 45), Pro (400 generations, $ 99.99 a month, about £80, AU$ 150) and Premier (2,000 generations, $ 499.99 a month, about £390, AU$ 750).

A taste of AI videos to come

Like most generative AI video tools, questions remain about exactly what data Luma AI's model was trained on – which means that its potential outside of personal use or improving your GIF game could be limited. It also isn't the first free text-to-video tool we've seen, with Runway's Gen-2 model coming out of beta last year.

The Dream Machine website also states that the tool does have technical limitations when it comes to handling text and motion, so there's plenty of trial-and-error involved. But as a taster of the more advanced (and no doubt more expensive) AI video generators to come, it's certainly a fun tool to test drive.

That's particularly the case, given that other alternatives like Google Veo currently have lengthy waitlists. Meanwhile, more powerful models like OpenAI's Sora (which can generate videos that are 60-seconds long) won't be available until later this year, while Kling AI is currently China-only.

This will certainly change as text-to-video generation becomes mainstream, but until then, Dream Machine is a good place to practice (if you don't mind waiting a while for the results).


Google Photos will soon fix your average videos with a single ‘enhance’ tap

While there are plenty of video editing tools built into smartphones, it can take some skill to pull off an edit that's pleasing to the eye. But Google Photos looks set to change that.

By digging into an upcoming version of the Photos app, Android Authority contributor and code-diver Assemble Debug found a feature called “Enhance your videos”, and with a bit of work, got it up and running. As one would guess from the name, the feature is used to enhance videos accessed via the Photos app in a single tap.

Enhance your videos can automatically adjust brightness, contrast, color saturation and other visual parameters for a selected video in order to deliver an edited version that should look better than the original, at least in the eyes of Google.

While this feature isn’t official yet, it may be somewhat familiar to Google Photos users, as there’s already an option to enhance photos in the web and mobile versions of the service. In my experience, the enhance option works rather well, though it’s far from perfect and can overbake its enhancements.

But it makes sense for Google to extend this enhancement function to videos, especially in the TikTok era; do go and check out the TechRadar TikTok for news, views and reactions to the latest tech.

One neat thing about Enhance your videos, according to Android Authority, is that all the processing happens on-device, bypassing the need for an internet connection and cloud-based processing. Whether this will work on older phones without AI-centric chipsets remains to be seen.

Given that Assemble Debug got the Enhance your videos feature up and running, it looks like it could be nearing an official rollout. We can expect to hear more about this and other upcoming Google features, as well as Android 15, at Google I/O 2024, which is set to kick off on May 14.


YouTube can stream 8K videos to your Meta Quest 3 – even though its displays aren’t 8K

Following news that Meta’s Quest 3 is getting some big mixed reality upgrades, including an AI that can recognize furniture and improved passthrough quality, there’s yet another improvement on the way – this time for one of my favorite Quest apps: YouTube.

That’s because the VR version of the video-sharing platform now supports 8K video playback on Quest 3 – up from the previous max of 4K.

To turn it on, make sure you’re running version 1.54 or later of the YouTube VR app, then boot up a video that supports 8K, tap the gear icon, and under Quality change the resolution to 4320p – or 4320p60 if you want 8K at 60fps instead of the usual 30fps. If 4320p isn’t an option in the list, unfortunately the video you want to watch isn’t streaming in 8K.

There are a few extra caveats. First, you’ll want a strong internet connection, because even if the video supports 8K playback you’ll struggle to stream it over weak Wi-Fi – unless you like waiting for it to buffer. Oh, and one other important detail: the Quest 3 doesn’t have 8K displays. But that's not as big a problem as it might seem.

Method in the 8K madness

The Quest 3 has two displays (one for each eye) at 2,064 × 2,208 pixels each; 8K resolution is 7,680 × 4,320 pixels. Even combining the two displays, they boast only a little over a quarter (around 27%) of the pixels in an 8K frame.
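For the curious, the arithmetic is simple enough to check yourself – a quick sketch using nothing but the published specs:

```typescript
// Total pixels across both Quest 3 panels versus a single 8K frame.
const questPixels = 2 * 2_064 * 2_208; // 9,114,624 pixels (both eyes combined)
const eightKPixels = 7_680 * 4_320;    // 33,177,600 pixels

// Prints "27.5%" – a little over a quarter of an 8K frame.
console.log(((questPixels / eightKPixels) * 100).toFixed(1) + "%");
```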

So is 8K streaming pointless? Well, not entirely. 

A Meta Quest 3 owner watching a spatial video of their husky dog in a field. Spatial video is 3D, but not as immersive as 360 video (Image credit: Meta)

For flat YouTube videos, playing them in 8K probably is pointless on Quest hardware. The only advantage is that you’ll be seeing a downscaled video – one where a higher-resolution source is rendered at a lower resolution – which can sometimes produce a more detailed image than simply streaming the video at the lower resolution to begin with.

The real improvement can be found instead with immersive 360-degree videos. 

To explain things simply: when you watch a flat video, you see the whole resolution in that 16:9 frame. In a 360 video, the resolution is spread across a much larger image, and you only ever see a portion of that image based on where you’re looking. That’s why – if you’ve watched 360 videos in VR – 4K content can look more like HD, and HD content can look like a blurry mess.

By bumping things up to 8K, immersive 360-degree video should look a lot crisper, as the section you’re looking at now carries far more pixels. So while you're not seeing true 8K, you're still getting a higher effective resolution.
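Here’s a rough back-of-the-envelope sketch of that effect, assuming an equirectangular 360 video and a horizontal field of view of roughly 110 degrees – the FOV figure is our assumption for illustration, not an official Quest 3 spec:

```typescript
// Approximate horizontal pixels of a 360 video visible at any one moment,
// assuming an equirectangular projection spanning the full 360 degrees.
function visiblePixels(videoWidth: number, fovDegrees: number): number {
  return Math.round(videoWidth * (fovDegrees / 360));
}

console.log(visiblePixels(3_840, 110)); // 4K 360 source: ~1,173 px visible (HD-ish)
console.log(visiblePixels(7_680, 110)); // 8K 360 source: ~2,347 px visible
```

That’s why a 4K 360 clip can look like HD in the headset, and why an 8K source finally pushes the visible slice past the width of the Quest 3’s per-eye panels.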

This may also be a bit of future-proofing for upcoming Meta hardware. With rumors that a Meta Quest Pro 2 could up the display game for Quest headsets, there’s a chance it'll get closer to having actual 8K displays, though we’ll have to wait and see.


OpenAI’s impressive new Sora videos show it has serious sci-fi potential

OpenAI's Sora, the company's answer to image generation but for video, made huge shockwaves in the swiftly advancing world of AI last month, and we’ve just caught a few new videos that are even more jaw-slackening than what we’ve already been treated to.

In case you somehow missed it, Sora is a text-to-video AI, meaning you can write a simple request and it’ll compose a video (just as image generation worked before it, but obviously a much more complex endeavor).

An eye with the iris being a globe (Image credit: OpenAI)

Now OpenAI’s Sora research lead Tim Brooks has released some new content generated by Sora on X (formerly Twitter). 

This is Sora’s crack at fulfilling the following request: “Fly through tour of a museum with many paintings and sculptures and beautiful works of art in all styles.”

Pretty impressive to say the least. On top of that, Bill Peebles, also a Sora research lead, showed us a clip generated from the following prompt: “An alien blending in naturally with new york city, paranoia thriller style, 35mm film.”

An alien character walking through a street (Image credit: OpenAI)

Content creator Blaine Brown then stepped in to embellish the above clip, cutting it so the footage repeats and runs longer, and adding a rapping alien complete with lip-syncing. The music was generated by Suno AI (with the lyrics written by Brown, mind), and the lip-syncing was done with Pika Labs AI.


Analysis: Still early days for Sora

Two people having dinner (Image credit: OpenAI)

It’s worth underlining how fast the capabilities of AI seem to be progressing. Image creation was one thing – and extremely impressive in itself – but this is entirely another. Especially when you remember that Sora is still in testing at OpenAI, with a limited set of ‘red teamers’ (testers hunting for flaws and potential misuse, and smoothing over those wrinkles).

The camera work in the museum fly-through flows realistically and feels nicely imaginative in the way it swoops around (albeit with the occasional judder). And the last tweet shows how you can take a base clip and flesh it out with content including AI-generated music.

Of course, AI can write a script as well, which raises the question: how long will it be before a blue alien is starring in an AI-generated post-apocalyptic drama? Or perhaps an (unintentional) comedy?

You get the idea, and we’re getting carried away, of course, but still – what AI could be capable of in just a few years is potentially mind-blowing, frankly.

Naturally, these teasers show the cream of the crop of what Sora is capable of, and there have been some buggy and weird efforts aired too – just as when ChatGPT and other AI chatbots first rolled onto the scene, and we saw hallucinations and generally unhinged behavior and replies.

Perhaps the broader worry with Sora, though, is how this might eventually displace, rather than assist, content creators. But that’s a fear to chew over on another day – not forgetting the potential for misuse with AI-created videos which we recently discussed in more depth here.


Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer nightmarish creatures with melting faces; things now look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs – motion is largely smooth as butter.

Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities.

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render those actions all at once, resulting in a smooth-flowing creation.

This runs contrary to other generative platforms, which first establish keyframes in clips and then fill in the gaps afterward – an approach that results in the jerky movement the tech is known for.

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit, including support for multimodality.

Users will be able to upload source images or videos so the AI can edit them to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip in which she smiles instead of staring blankly. Lumiere also has a feature called Cinemagraph, which animates highlighted portions of still images.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep Lumiere behind closed doors. As impressive as this AI may be, it still has its issues: some animations remain jerky, and in other cases subjects’ limbs warp into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.


WhatsApp’s desktop app now lets you send self-destructing photos and videos

Meta is rolling out a View Once feature to WhatsApp on desktop and web, allowing users to send time-sensitive content.

The update was initially discovered by WABetaInfo, as the tech giant has yet to formally announce it. Going by WABetaInfo's report, it’s essentially the same feature found on mobile. WhatsApp added View Once to its smartphone app back in 2022 as a privacy tool: pictures or videos sent to contacts in this manner cannot be saved, and once the recipient looks at the file, it’s gone forever. This helps ensure sensitive material isn't seen by outside parties, shared with others, or grabbed by a bad actor.

Apparently, the feature was highly requested, as WABetaInfo claims people complained about “the inability to send view once messages” on desktop. It seems Meta heard the outcry, although it did take the company over a year to respond.

Vital details

There are a few key details you should know about.

Recipients have 14 days to open the media before it’s automatically deleted, according to WhatsApp’s Help Center. The recipient can’t take screenshots of the temporary content, but only if they have the latest version of WhatsApp installed; it is still possible to screenshot a View Once file on outdated software. As a result, the company recommends sending this kind of content only to trusted individuals. There are plans to close the privacy gap, but there's no word on when Meta will address the issue.

Do note that you cannot send multiple temporary images at once; you have to send each file one by one. Plus, as pointed out by Windows Central, you can rewatch temporary videos “as often as you’d like”, but you have to stay in the playback interface – clicking play prematurely or leaving the window will lose you access.

The update will be available to both Windows and macOS users. WABetaInfo states it's being released in waves, so only a select group has access to it at the moment. We recommend keeping an eye out for it.

How to send View Once content

Sending a View Once photo is easy. After launching WhatsApp on desktop and selecting a chat, click the attachment icon next to the text box, then choose an image. Above the send button is a number one inside a dashed circle; clicking that icon activates the View Once function. Send the picture, and it'll be deleted the moment the recipient closes it.

WhatsApp on web has a different layout, but luckily the process is exactly the same.

WhatsApp View Once on desktop (Image credit: Future)

While we have you, be sure to follow TechRadar’s official WhatsApp channel to get our latest reviews sent right to your phone. 


Google Bard can now watch YouTube videos for you (sort of)

Google has bolstered Bard AI’s powers regarding YouTube videos, with the AI now capable of a deeper understanding of such content.

Google posted about the latest update for Bard, describing it as the ‘first steps’ in allowing the AI to understand YouTube videos and pull relevant info out of a clip on request.

The example given is that if you’re hunting for a YouTube explainer on how to bake a certain cake, you can ask Bard how many eggs are required for the recipe in the video that pops up.

Bard is capable of taking in the whole video and summarizing it, or you can ask the AI specific questions as mentioned, with the new feature enabling the user to have ‘richer conversations’ with Bard on any given clip.

Another recent update for Bard improved its maths powers, specifically around equations, helping you solve tricky ones – complete with straightforward step-by-step explanations (in English only to begin with). Equations can be typed in or supplied to Bard via an uploaded image.


Analysis: YouTube viewing companion

These are some useful new abilities, particularly the addition for YouTube, which builds on Google’s existing extensions for Bard that hook up to the company’s services including the video platform.

It’s going to be pretty handy to have Bard instantly pull up relevant details such as the aforementioned recipe quantities, or specifics you can’t recall from a video you’ve just watched, saving you from rewinding back through it to hunt for them.

The maths and equation-related skills are going to be a boon, too. The broad idea here is not just to show a solution, but to teach how that solution was arrived at, thus equipping you to deal with similar problems down the line.

Via Neowin


Google Photos can now make automatic highlight videos of your life – here’s how

Google Photos is already capable of some increasingly impressive photo and video tricks – and now it's learned to create automatic highlight videos of the friends, family, and places you've chosen from your library.

The Google Photos app on Android and iOS already offers video creation tools, but this new update (rolling out from October 25) will let you search the people, places, or activities in your library that you'd like to star in an AI-created video. The app will then automatically rustle up a one-minute highlights video of all your chosen subjects.

This video will include a combination of video clips and photos, and Google Photos will also add music and sync the footage to the tunes. These kinds of auto-created highlight videos, which we've seen in the Google Photos Memories feature and elsewhere from the likes of GoPro, can be a little hit-and-miss in their execution, but we're looking forward to giving Google's new AI director a spin.

Fortunately, if you don't like some of Google Photos' choices, you can also trim or rearrange the clips, and pick some different music. You can see all of this in action in the example video below.

The Google Photos app showing an auto-created family video (Image credit: Google)

So how will you be able to test-drive this new feature, once it rolls out on Android and iOS from October 25?

At the top of the app, hit the 'plus' icon and you'll see a new menu that includes options to create new Albums, Collages, Cinematic photos, Animations and, yes, Highlight videos.

Three phones on an orange background showing Google Photos video creation tools (Image credit: Google)

Tap 'Highlight videos' and you'll see a search bar where you can search for your video stars, be that people, places, or even the years that events have taken place. From Google's demo, it looks like the default video length is one minute, but it's here that you can make further tweaks before hitting 'save'.

We've asked Google if this feature is coming to the web version of Google Photos and also Chromebooks, and will update this article when we hear back.

Tip of the AI iceberg

Google's main aim with photos and videos is to automate the kinds of edits that non-professionals have little time or appetite for – so this AI-powered video creator tool isn't a huge surprise.

We recently saw a related tool appear in Google Photos' Memories feature, which now lets you “co-author” Memories albums with friends and family. Collaborators can add their own photos and videos to your Memories, which can then be shared as a standalone video.

So whether you're looking to edit together your own highlight reels or, thanks to this new tool, let Google's algorithms do it for you, Google Photos is increasingly turning into a fuss-free place to do it.

The Google Pixel 8 Pro also recently debuted some impressive cloud-based video features, including Video Boost and Night Sight Video. The only slight shame is that these features require an internet connection rather than working on-device, though AI tools like Magic Eraser and Call Screen do at least work locally on your phone.
