Runway’s new OpenAI Sora rival shows that AI video is getting frighteningly realistic

Just a week on from the arrival of Luma AI's Dream Machine, another big OpenAI Sora rival has just landed – and Runway's latest AI video generator might be the most impressive one yet.

Runway was one of the original text-to-video pioneers, launching its Gen-2 model back in March 2023. But its new Gen-3 Alpha model, which will apparently be “available for everyone over the coming days”, takes things up several notches with new photo-realistic powers and promises of real-world physics.

The demo videos (which you can see below) showcase how versatile Runway's new AI model is, with the clips including realistic human faces, drone shots, simulations of handheld cameras and atmospheric dreamscapes. Runway says that all of them were generated with Gen-3 Alpha “with no modifications”.

Apparently, Gen-3 Alpha is also “the first of an upcoming series of models” that have been trained “on a new infrastructure built for large-scale multimodal training”. Interestingly, Runway added that the new AI tool “represents a significant step towards our goal of building General World Models”, which could create possibilities for gaming and more.

A 'General World Model' is one that effectively simulates an environment, including its physics – which is why one of the sample videos shows the reflections on a woman's face as she looks through a train window.

These tools won't just be for us to level up our GIF games either – Runway says it's “been collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha”, which means tailored versions of the model for specific looks and styles. So expect to see this tech powering adverts, shorts and more very soon.

When can you try it?

A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head (Image credit: Runway)

Last week, Luma AI's Dream Machine arrived to give us a free AI video generator to dabble with, but Runway's Gen-3 Alpha model is more targeted towards the other end of the AI video scale. 

It's been developed in collaboration with pro video creators, and with that audience in mind, although Runway says it'll be “available for everyone over the coming days”. You can create a free account to try Runway's AI tools, though you'll need to pay a monthly subscription (starting from $12 per month, or around £10 / AU$18 a month) to get more credits.

You can create videos using text prompts – the clip above, for example, was made using the prompt “a middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head”. Alternatively, you can use still images or videos as a starting point.

The realism on show is simultaneously impressive and slightly terrifying, but Runway states that the model will be released with a new set of safeguards against misuse, including an “in-house visual moderation system” and C2PA (Coalition for Content Provenance and Authenticity) provenance standards. Let the AI video battles commence.
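
If you're curious what those C2PA provenance labels actually look like, the Content Authenticity Initiative's open-source c2patool CLI can dump a file's manifest. Here's a rough Python sketch of that kind of check – the file name is hypothetical, and Runway hasn't documented exactly which manifest fields its videos will carry:

```python
import json
import subprocess

# Sketch only: inspect a clip's C2PA manifest with the open-source
# c2patool CLI (assumes c2patool is installed; "generated_clip.mp4"
# is a hypothetical file name, and the exact fields Runway will
# write are not documented).
result = subprocess.run(
    ["c2patool", "generated_clip.mp4"],
    capture_output=True, text=True, check=True,
)
manifest = json.loads(result.stdout)

# Each manifest records a claim generator, which is where an AI tool
# would typically identify itself.
for label, entry in manifest.get("manifests", {}).items():
    print(label, "->", entry.get("claim_generator"))
```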


Meta can’t stop leaking its next VR headset, as it accidentally shows off the Quest 3S

Meta has to know what it’s doing, because for the second time in as many weeks it has leaked the Meta Quest 3S – this time its next VR headset made a cameo in the background of a video filmed by its CTO.

In a video highlighting the new mixed-reality upgrades arriving as part of Horizon OS update v66, which Meta CTO Andrew Bosworth posted on Threads, we see a view of someone’s home office with some kind of Meta Quest headset on a desk in the background.

The thing is, this Quest device doesn’t appear to be anything we recognise. It looks too bulky to be a Meta Quest 3, while it has cameras in the wrong places and it isn't round enough to be a Quest 2. The white plastic cladding also confirms it’s not an original Quest or Quest Pro.

Instead, it looks nearly identical to the leaked Quest 3S design. Luna – the leaker sharing the bulk of the Quest 3S info – took to Twitter to point out this accidental teaser, which Bosworth then replied to, saying “love that higher quality video over on Threads…”


This is far from a confirmation, but combined with the Quest 3S accidentally appearing on some Meta Quest Store pages, it seems very likely that the so-called cheaper Quest 3 model is coming soon – most likely at Meta Connect 2024, which Meta has revealed is taking place on September 25 and 26.

That said, with all these leaks, Meta may put out an official teaser ahead of its wider reveal later this year to try to regain some control over the situation.

It’s time to eat my hat 

I was convinced the Meta Quest 3S wouldn’t return to the Oculus Quest 2’s bulky design when I first saw the leaks. I fully expected Meta to prioritize comfort, as this was a major critique in reviews of the Apple Vision Pro – Meta’s highest-profile rival.

Instead, I was prepared to see it shave off the price by using lower-quality displays, less RAM, cheaper materials, or perhaps a less impressive mixed-reality camera system. Heck, with all the hand-tracking updates we've seen, I wouldn’t have been surprised if the controllers had been dropped entirely – even if that wouldn’t be a great idea overall.

But with this latest leak I have to accept that I was wrong. The Quest 3S does look to be a Quest 3 in the Quest 2’s bulky body. The only remaining question is how much it will cost.

Welcome back, Quest 2 design, we hardly missed you: the Oculus Quest 2 under a green light (Image credit: Shutterstock / Boumen Japet)

This is where I’m a little worried. If the Quest 3S isn’t the technical downgrade I was anticipating, can a price drop to the Quest 2’s launch price of $299 / £299 / AU$479 be justified? I mean, Meta can do whatever it wants, but pricing the 3S will be a challenge.

If it goes too low – which $299 / £299 / AU$479 feels like it might be – can we justify spending $499 / £479 / AU$799 on the full-on Meta Quest 3? If Meta instead aims higher, maybe $399 / £399 / AU$599, then this won’t feel like the budget Quest 2 replacement the leaks have teased it to be – which raises the question of whether it’s worth spending that bit extra to get the full-on Quest 3.

At least, even if Meta does go for the cheaper end of the scale, it won’t be anywhere close to as big a burn to Meta Quest 3 customers as when Meta teased the Quest 3 as its “most powerful headset yet” less than six months after launching the Quest Pro – and then sold the Quest 3 for only a third of the Pro’s original price.


ChatGPT shows off impressive voice mode in new demo – and it could be a taste of the new Siri

ChatGPT's highly anticipated new Voice mode has just starred in another demo showing off its acting skills – and the video could be a taste of what we can expect from the reported new deal between Apple and OpenAI.

The ChatGPT app already has a Voice mode, but OpenAI showed off a much more impressive version during the launch of its new GPT-4o model in May. Unfortunately, that was then overshadowed by OpenAI's very public spat with Scarlett Johansson over the similarity of ChatGPT's Sky voice to her performance in the movie Her. But OpenAI is hyping up the new mode again in the clip below.

The video shows someone writing a story and getting ChatGPT to effectively do improv drama, providing voices for a “majestic lion” and a mouse. Beyond the expressiveness of the voices, what's notable is how easy it is to interrupt the ChatGPT voice for a better conversational flow, and also the lack of latency.     

OpenAI says the new mode will “be rolling out in the coming weeks” and that's a pretty big deal. Not least because, as Bloomberg's Mark Gurman has again reported, Apple is expected to announce a new partnership with OpenAI at WWDC 2024 on June 10.   

Exactly how OpenAI's tech is going to be baked into iOS 18 remains to be seen, but Gurman's report states that Apple will be “infusing its Siri digital assistant with AI”. That means some of its off-device powers could tap into ChatGPT – and if it's anything like OpenAI's new demo, that would be a huge step forward from today's Siri.

Voice assistants finally grow up?

Siri's reported AI overhaul will likely be one of the bigger stories of WWDC 2024. According to Dag Kittlaus, who co-founded and ran Siri before Apple acquired it in 2010, the deal with OpenAI will likely be a “short- to medium-term relationship” while Apple plays catch-up. But it's still a major surprise.

It's possible that Siri's AI improvements will be restricted to more minor, on-device functions, with Apple instead using its OpenAI partnership solely for text-based queries. After all, from iOS 15 onwards, Apple switched Siri's audio processing to being on-device by default, which meant you could use it without an internet connection.

But Bloomberg's Gurman claims that Apple has “forged a partnership to integrate OpenAI’s ChatGPT into the iPhone’s operating system”. If so, it's possible that one unlikely move could be followed by another, with Siri leaning on ChatGPT for off-device queries and a more conversational flow. It's already been possible to use ChatGPT with Siri for a while now using Apple's Shortcuts.
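
Those community Shortcuts generally just wrap OpenAI's public chat completions API behind a voice trigger. As a rough illustration of the kind of request they make under the hood (the prompt here is made up, and you'd need your own API key):

```python
import os
import requests

# Minimal sketch of the request a ChatGPT-via-Shortcuts setup sends;
# the endpoint and payload shape are OpenAI's public chat completions
# API, while the model choice and prompt are illustrative.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "Voice a majestic lion in two sentences."}
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```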

It wouldn't be the first time that Apple has integrated third-party software into iOS. Back on the original iPhone, Apple made a pre-installed YouTube app, which was later removed once Google had made its own version. Gurman's sources noted that by outsourcing an AI chatbot, “Apple can distance itself from the technology itself, including its occasional inaccuracies and hallucinations.”

We're certainly looking forward to seeing how Apple weaves OpenAI's tech into iOS – and potentially Siri – at WWDC 2024.


Windows 11’s Recall feature is already running on unsupported CPUs – and it shows why this is a bad idea

Windows 11 enthusiasts are already playing with the new – and controversial – Recall feature the OS now has in preview (for 24H2), and have got it running on current Arm-based CPUs by fudging things.

While Recall is present in the recently released preview of the Windows 11 24H2 update, Microsoft makes it clear that the feature won’t work on current PCs, as it requires a Copilot+ PC (the new name for the ‘AI PC’).

In other words, Recall needs a device with a powerful enough NPU to run it (and other new AI features in 24H2), which currently means only the new Snapdragon X chips (with AMD and Intel CPUs to follow further down the line).

Even those Snapdragon laptops aren’t available just yet (they will be next month), but leaker Albacore has still managed to tinker under the hood of Windows 11 24H2 and get Recall working on a current Arm processor.


You can see a video of Recall being summoned on a standard (non-Copilot+) Windows 11 PC in Albacore's post on X (formerly Twitter). As Albacore says, it shows ‘screenray’ in action – the context-sensitive mode you enter when you find something using Recall and select it.

As you can see, if the search result you want is a text file, screenray presents options pertaining to what you’ll need to do – copy and paste text. Or if it’s an image, you’ll get choices to copy the picture or open it for editing in an app.


Analysis: Working as not intended

It’s pretty cool to see this feature working on a processor without a powerful enough NPU (like the one the new Snapdragon X silicon sports), but at the same time, it illustrates why that NPU is needed. As you probably noticed, interacting with Recall and screenray looks a bit laggy here – what the NPU does is provide dedicated AI acceleration to ensure the process runs more smoothly.

Furthermore, the feature is still in testing within a preview build here, which won’t help performance either.

Albacore even sounds hopeful about getting Recall working not just on current Arm chips, but also on existing AMD and Intel (x86) CPUs, which also can’t officially run the feature. (Again, even current-gen processors from Teams Red and Blue lack an NPU with enough raw grunt.)

If that happens, we can expect a similar experience to what we see here – but it’s not possible yet anyway, as Microsoft has only provided the machine learning model bundles for Arm to laptop makers. These don’t exist for AMD or Intel CPUs yet (as there’s no need for them – Lunar Lake and Strix Point, which will drive Copilot+ PCs, are still some way off launching, but they are fully expected to debut before 2024 is out).

Ultimately, this is an interesting fudge for now, but it’s likely a bad idea to be trying to get Recall up and running on a PC it’s not intended for. Simply because there may well be scenarios where it truly bogs down – such as when you have a larger, sprawling library of snapshots piled up – and there are doubtless good reasons why Microsoft has the mentioned NPU requirement in place.

Mind you, not everyone wants Recall, anyhow: certainly not those more privacy-conscious Windows 11 users out there who have already made their feelings clear. Indeed, a privacy watchdog in the UK is already investigating Recall before Microsoft even has the functionality officially live. The result of that enquiry will certainly be interesting, and Microsoft may be worried about another scenario where a big Windows 11 feature is blocked in Europe due to more stringent data regulations.


Watch this: Adobe shows how AI and OpenAI’s Sora will change Premiere Pro and video editing forever

OpenAI's Sora gave us a glimpse earlier this year of how generative AI is going to change video editing – and now Adobe has shown how that's going to play out by previewing some fascinating new Premiere Pro tools.

The new features, powered by Adobe Firefly, effectively bring the kinds of tricks we've seen from Google's photo-focused Magic Editor – erasing unwanted objects, adding objects and extending scenes – to video. And while Premiere Pro isn't the first piece of software to do that, seeing these tools in an industry-standard app used by professionals is significant.

For a glimpse of what's coming “this year” to Premiere Pro and other video editing apps, check out the video below. In the new Generative panel, an 'add object' option lets you type in an object you want to add to the scene. This appears to be for static objects, rather than things like a galloping horse, but it looks handy for b-roll and backgrounds.

Arguably even more helpful is 'object removal', which uses Firefly's AI-based smart masking to help you quickly select an object to remove, then make it vanish with a click. Alternatively, you can combine the two tools to, for example, swap the watch someone's wearing for a non-branded alternative.

One of the most powerful new AI-powered features in photo editing is extending backgrounds – called Generative Fill in Photoshop – and Premiere Pro will soon have a similar feature for video. Rather than extending the frame's size, Generative Extend will let you add frames to a video to help you, for example, pause on your character's face for a little longer. 

While Adobe hasn't given these tools a firm release date, only revealing that they're coming “later this year”, it certainly looks like they'll change Premiere Pro workflows in several major ways. But the bigger AI video change could be yet to come…

Will Adobe really plug into OpenAI's Sora?

A laptop screen showing AI video editing tools in Adobe Premiere Pro (Image credit: Adobe)

The biggest Premiere Pro announcement, and also the most nebulous one, was Adobe's preview of third-party models for the editing app. In short, Adobe is planning to let you plug generative AI video tools including OpenAI's Sora, Runway and Pika Labs into Premiere Pro to sprinkle your videos with their effects.

In theory, that sounds great. Adobe showed an example of OpenAI's Sora generating b-roll with a text-to-video prompt, and Pika powering Generative Extend. But these “early examples” of Adobe's “research exploration” with its “friends” from the likes of OpenAI are still clouded in uncertainty.

Firstly, Adobe hasn't committed to launching the third-party plug-ins in the same way as its own Firefly-powered tools. That shows it's really only testing the waters with this part of the Premiere Pro preview. Also, the integration sits a little uneasily with Adobe's current stance on generative AI tools.


Adobe has sought to set itself apart from the likes of Midjourney and Stable Diffusion by highlighting that Adobe Firefly is trained only on the Adobe Stock image library, which is apparently free of commercial, branded and trademarked imagery. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” Adobe's VP of Generative AI, Alexandru Costin, told us earlier this year.

Yet a new report from Bloomberg claims that Firefly was partially trained on images generated by Midjourney (with Adobe suggesting that could account for 5% of Firefly's training data). And these previews of new alliances with generative video AI models, which are similarly opaque when it comes to their training data, again sit uneasily with Adobe's stance.

Adobe's potential get-out here is Content Credentials, a kind of nutrition label that's also coming to Premiere Pro and will add watermarks to clarify when AI was used in a video, and with which model. Whether this is enough for Adobe to balance making a commercially friendly pro video editor with keeping up in the AI race remains to be seen.


Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer nightmarish creatures with melting faces; now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs – motion is largely smooth as butter.

Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities.

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render these actions at the same time, resulting in a smooth-flowing creation.

This runs contrary to other generative platforms, which first establish keyframes in clips and then fill in the gaps afterward – an approach that results in the jerky movement the tech is known for.
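
To make that distinction concrete, here's a deliberately toy Python sketch – nothing like Lumiere's real code, just stand-in tensors illustrating the two strategies:

```python
import numpy as np

T, H, W = 16, 64, 64  # toy clip: frames, height, width

# Keyframe-first approach: generate a few anchor frames, then fill
# the gaps. Real systems use learned temporal super-resolution rather
# than this linear blend, but the in-between motion is still inferred
# after the fact, which is where jerkiness creeps in.
keyframes = np.random.rand(T // 8 + 1, H, W)
full = np.empty((T, H, W))
for t in range(T):
    lo, hi = t // 8, min(t // 8 + 1, len(keyframes) - 1)
    a = (t % 8) / 8
    full[t] = (1 - a) * keyframes[lo] + a * keyframes[hi]

# STUNet-style approach: the model processes the whole space-time
# volume in one pass, downsampling in time as well as space, so
# motion is generated jointly instead of interpolated afterward.
volume = np.random.rand(T, H, W)
bottleneck = volume[::2, ::2, ::2]  # (T/2, H/2, W/2) space-time downsample
print(full.shape, bottleneck.shape)
```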

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit including support for multimodality. 

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has a feature called Cinemagraph, which can animate highlighted portions of pictures.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues: jerky animations are present, and in other cases, subjects have limbs warping into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.


Meta Quest 3 teardown video shows lower price doesn’t mean low quality

We just got a good look at the guts inside a Quest 3 headset. iFixit tore down the VR gear into its individual parts to find out if the device offers good performance for its price. Short answer: yes, it does, although there are some design flaws that make it difficult to repair.

What’s notable about the Quest 3 is that it has better “mixed-reality capabilities” than the Quest Pro. It's able to automatically map out a room as well as accurately keep track of the distance between objects without needing a “safe space”. The former is made possible by a depth sensor, while the latter is thanks to the “time-of-flight sensor”. iFixit makes the interesting observation that the time-of-flight components could fit perfectly in the Quest Pro.

It’s worth mentioning that Andrew Bosworth, Meta’s Chief Technology Officer, once stated the sensors were removed from the Pro model because they added extra “cost and weight” without providing enough benefits. The Quest 3 is much slimmer, clocking in at 512g.

Meta Quest 3 teardown (Image credit: iFixit)

Hardware improvements

Digging deeper into the headset, iFixit offered a zoomed-in look at the LCD panels through a powerful microscope. The screens output a resolution of 2,064 x 2,208 pixels per eye with a refresh rate of 120Hz – greater than the Quest Pro’s peak resolution of 1,920 x 1,800 pixels per eye. The video explains that the Quest 3 can manipulate the intensity of color clusters, mixing everything into the high-quality visuals we see. Combining the LCD panels with the time-of-flight sensor results in a “much better [full-color] passthrough experience” than before.
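
A quick bit of arithmetic shows how big that gap is (using the per-eye figures quoted above):

```python
# Per-eye pixel counts from the resolutions quoted above.
quest3 = 2064 * 2208      # 4,557,312 pixels per eye
quest_pro = 1920 * 1800   # 3,456,000 pixels per eye
print(f"Quest 3 renders {quest3 / quest_pro:.0%} of the Quest Pro's pixels per eye")
# -> 132%, i.e. roughly a third more pixels per eye
```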

Additionally, the headset has more power behind it, since it houses the Qualcomm Snapdragon XR2 Gen 2 chipset.

Of course, iFixit took the time to judge the Quest 3 on its repairability, and Meta did a good job on that front – for the most part. The controllers are easy to repair as their construction is relatively simple: they’re held together by a few screws, a magnet, and a series of ribbon cables at the top. Replacing the batteries is also pretty easy, as each controller takes a single AA battery.

Awkward repairs

On the headset, it's a slightly different story. The battery on the main unit is replaceable, too. However, it’s located at the center of the device behind 50 screws, multiple coax cables, various connectors, a heatsink, and the mainboard. If you like to do your own repairs on your electronics, it may take you a while to fix the Quest 3.

Funnily enough, iFixit makes a good case for why and how the Quest 3 is a better headset than the Quest Pro. Granted, it lacks face and eye tracking, but when you have more immersive mixed reality, are people really going to miss them? Plus, it's half the price. If the Quest 3 is the new standard moving forward, it makes you wonder how Meta is going to improve on it with the Quest Pro 2 (assuming one is in the works).

While we have you, check out TechRadar’s list of the best VR headsets for 2023.


Meta Quest 3 video leak shows off thinner design and new controllers

The Meta Quest 3 (aka the Oculus Quest 3) is now official, but isn't due to go on sale until September or October time. If you're keen for an earlier look at the virtual reality headset before then, an unboxing video has made its way online.

This comes from @ZGFTECH on X/Twitter (via Android Authority), and we get a full look at the new device and the controllers that come with it. Meta has already published promo images of the headset, but it's interesting to see it in someone's hands.

As revealed by Meta chief Mark Zuckerberg, the Meta Quest 3 is some 40% thinner than the Oculus Quest 2 that came before it. From this video it looks like the Quest 2's silicone face pad and cloth strap have been carried over to the new piece of hardware.

You may recall that the Quest 2 originally shipped with foam padding, before Meta responded to complaints of skin irritation by replacing the foam with silicone. That lesson now appears to have been learned with this new device and the Meta Quest Pro.


Take control

The controllers that come with the Meta Quest 3 look a lot like the ones supplied with the Meta Quest Pro, though these don't have built-in cameras. The ring design of the Oculus Quest 2 has been ditched, with integrated sensors and predictive AI taking over tracking duties, according to Meta.

As for the outer packaging, it's not particularly inspiring, featuring just the name of the device on the top. Presumably something a bit more eye-catching will be put together before the headset actually goes on sale.

It's not clear where the headset has been sourced from, but the device has clearly been in testing for a while. This is becoming something of a running theme too, because the Meta Quest Pro was leaked in similar fashion after being left behind in a hotel room.

We should get all the details about the Meta Quest 3, including the date when we'll actually be able to buy it, on September 27 at the Meta Connect event. TechRadar will of course bring you all the news from the show, plus any further leaks that may emerge between now and then.


Google Photos now shows you an AI-powered highlights reel of your life

The Google Photos app is getting a redesigned, AI-powered Memories feature to help you find your life's highlights among the clutter of your everyday snaps and videos.

The Memories carousel, which Google says is used by half a billion people every month, was introduced at the top of the Android and iOS apps four years ago. It automatically picks out what it considers to be your most important photos and videos, but Google is now making it a more central part of the app.

From today in the US (with other regions to follow in the “coming months”), the Memories feature is moving to the bottom of the app's navigation bar and getting some useful new tricks. One of these is the ability to “co-author” Memories albums with friends and family, in a similar way to shared albums.

This feature sounds particularly handy for big events like weddings, as you'll be able to invite friends or family to collaborate on specific Memories and add their photos and videos to help fill in the gaps. You can also save any Memories that are shared with you to your Google Photos account.

Google is also promising to add a couple of other new features to Memories soon. If you're struggling to think of a title for your collection of snaps (which we can't imagine is a major issue) then you'll be able to use generative AI to come up with some suggested names like “a desert adventure”. This is apparently an experimental feature, and only currently available to “select accounts in the US”.

Perhaps more useful will be the option of sharing your Memories as videos, which means you'll be able to send them to friends and family who aren't on Google Photos in messaging apps like WhatsApp. Google says this is “coming soon”, but unfortunately hasn't yet given a rough timescale. Knowing Google, that could be anything from three months to three years, but we'll update this article when we hear something more specific.

Google upgrades the photo album

You can now share Memories albums with other Google Photos users in the updated version of the app (Image credit: Google)

While these are only minor tweaks to the Google Photos app, they do show that Google increasingly sees its cloud photo service as a social app.

The ability to “co-author” Memories albums is something that'll no doubt be used by millions for weddings, vacations, celebrations, and pet photos. And as Google Photos isn't used by everyone, the incoming option to share Memories as videos in WhatsApp groups and other messaging apps should also prove popular.

On the other hand, these AI-powered photo albums have also sparked controversy with their sometimes insensitive surfacing of unwanted memories and photos. Google says that its updated Memories view lets you quickly add or remove specific photos or videos, or hide particular memories, to help with this.

On the whole, the Memories feature is certainly an upgrade to having to pass around a physical photo album, and its AI powers will no doubt improve rapidly if half a billion people continue to use it every month. If it works as well as it does in the demos, it could effectively be an automated highlights reel of your life.


Windows 11 preview shows a File Explorer ready to recommend what you open next

Microsoft is currently rolling out new File Explorer features via Insider Preview Build 23403 on Windows 11 with a big focus on streamlining work.

One of the more interesting features of this package is File Recommendations. As the name suggests, File Explorer will begin suggesting which files you should open on the home tab. It appears Microsoft created this tool for business-centric users, at least initially: it will only recommend cloud files associated with a particular account, “either owned by the user, or shared with the user”, and you have to be signed in to your Azure Active Directory account, otherwise it won't work. Additionally, the company is limiting the number of people who will get to try out File Recommendations at this time, as Microsoft states it wants to keep a close eye on feedback “before pushing it out to everyone.”

Less restricted are the new Access Keys for File Explorer. They’re simple, single-keystroke shortcuts for “quickly [executing] a command.” For example, hitting the “O” key opens a file, whereas pressing the “B” key sets it as a desktop background. To use this feature, you’ll first have to click on a file in File Explorer and then press the Menu key on your keyboard to make the Access Keys pop up. If you don’t have a Menu key, hitting Shift and F10 together does the same thing.

File Recommendations in File Explorer (Image credit: Microsoft)

New updates

Moving past File Explorer, the rest of the features affect other native Windows 11 apps, mainly on the language side of things. For starters, Live Captions will be available in more languages, including Japanese and French, as well as other English dialects such as Australian English. The Voice Access app will now support those different dialects too; upon activating the app, “you will be prompted to download a speech model” for a specific dialect. Microsoft has also redesigned Voice Access to make it more streamlined and easier to use: each command will now have a description explaining what it does, next to an example of how it can be used.

For the rest of the build, it’s all a collection of small tweaks; nothing really major. Changes include a VPN icon now appearing in the System Tray when one is active, a new copy button for “quickly copying [2FA] codes in notification[s]”, and some bug fixes. If this piques your interest, you can try out Preview Build 23403 by joining the Dev Channel of the Windows 11 Insider Program.

It's worth mentioning that Microsoft has been working on overhauling File Explorer for some time now. It's unknown exactly what the overhaul will include, but we’ve got a few hints, like File Explorer being redesigned to make it more user-friendly. However, it’ll probably still be a while until we see the final product. If you don’t feel like waiting until then, be sure to check out our list of the best file managers for Windows.
