Watch this: Adobe shows how AI and OpenAI’s Sora will change Premiere Pro and video editing forever

OpenAI's Sora gave us a glimpse earlier this year of how generative AI is going to change video editing – and now Adobe has shown how that's going to play out by previewing some fascinating new Premiere Pro tools.

The new features, powered by Adobe Firefly, effectively bring the kinds of tricks we've seen from Google's photo-focused Magic Editor – erasing unwanted objects, adding new ones and extending scenes – to video. And while Premiere Pro isn't the first piece of software to do that, seeing these tools in an industry-standard app used by professionals is significant.

For a glimpse of what's coming “this year” to Premiere Pro and other video editing apps, check out the video below. In a new Generative panel, an 'add object' option lets you type in an object you want to add to the scene. This appears to be for static objects, rather than things like a galloping horse, but it looks handy for b-roll and backgrounds.

Arguably even more helpful is 'object removal', which uses Firefly's AI-based smart masking to help you quickly select an object to remove, then make it vanish with a click. Alternatively, you can combine the two tools to, for example, swap the watch someone's wearing for a non-branded alternative.

One of the most powerful new AI-powered features in photo editing is extending backgrounds – called Generative Fill in Photoshop – and Premiere Pro will soon have a similar feature for video. Rather than extending the frame's size, Generative Extend will let you add frames to a video to help you, for example, pause on your character's face for a little longer. 

While Adobe hasn't given these tools a firm release date, only revealing that they're coming “later this year”, it certainly looks like they'll change Premiere Pro workflows in several major ways. But the bigger AI video change could be yet to come…

Will Adobe really plug into OpenAI's Sora?

A laptop screen showing AI video editing tools in Adobe Premiere Pro

(Image credit: Adobe)

The biggest Premiere Pro announcement, and also the most nebulous one, was Adobe's preview of third-party models for the editing app. In short, Adobe is planning to let you plug generative AI video tools including OpenAI's Sora, Runway and Pika Labs into Premiere Pro to sprinkle your videos with their effects.

In theory, that sounds great. Adobe showed an example of OpenAI's Sora generating b-roll with a text-to-video prompt, and Pika powering Generative Extend. But these “early examples” of Adobe's “research exploration” with its “friends” from the likes of OpenAI are still clouded in uncertainty.

Firstly, Adobe hasn't committed to launching the third-party plug-ins in the same way as its own Firefly-powered tools. That shows it's really only testing the waters with this part of the Premiere Pro preview. Also, the integration sits a little uneasily with Adobe's current stance on generative AI tools.

A laptop screen showing AI video editing tools in Adobe Premiere Pro

(Image credit: Adobe)

Adobe has sought to set itself apart from the likes of Midjourney and Stable Diffusion by highlighting that Adobe Firefly is trained only on its Adobe Stock image library, which is apparently free of commercial, branded and trademark imagery. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” Adobe's VP of Generative AI, Alexandru Costin, told us earlier this year.

Yet a new report from Bloomberg claims that Firefly was partially trained on images generated by Midjourney (with Adobe suggesting that could account for 5% of Firefly's training data). And these previews of new alliances with generative video AI models, which are similarly opaque when it comes to their training data, again sit uneasily with Adobe's stance.

Adobe's potential get-out here is Content Credentials, a kind of nutrition label that's also coming to Premiere Pro and will add watermarks to clarify when AI was used in a video and with which model. Whether this is enough for Adobe to balance making a commercially friendly pro video editor with keeping up in the AI race remains to be seen.

You might also like

TechRadar – All the latest technology news

Read More

Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer those nightmarish creatures with melting faces. Now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs. Motion is largely smooth as butter. Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities. 

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render those actions all at once, resulting in smooth-flowing footage. 

This runs contrary to other generative platforms that first establish keyframes in clips and then fill in the gaps afterward. Doing so results in the jerky movement the tech is known for.

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit including support for multimodality. 

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has an ability called Cinemagraph, which can animate highlighted portions of pictures.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues: jerky animations are still present, and in some cases subjects’ limbs warp into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.


Meta Quest 3 teardown video shows lower price doesn’t mean low-quality

We just got a good look at the guts inside a Quest 3 headset. iFixit tore down the VR gear into its individual parts to find out if the device offers good performance for its price point. Short answer: yes, it does, although there are some design flaws that make it difficult to repair.

What’s notable about the Quest 3 is that it has better “mixed-reality capabilities” than the Quest Pro. It's able to automatically map out a room, as well as accurately keep track of the distance between objects without needing a “safe space”. The former is made possible by a depth sensor, while the latter is thanks to the “time of flight” sensor. iFixit makes the interesting observation that the time of flight components could fit perfectly in the Quest Pro. 

It’s worth mentioning that Andrew Bosworth, Meta’s Chief Technology Officer, once stated the sensors were removed from the pro model because they added extra “cost and weight” without providing enough benefits. The Quest 3 is much slimmer, clocking in at 512g. 

Meta Quest 3 breakdown

(Image credit: iFixit)

Hardware improvements

Digging deeper into the headset, iFixit offered a zoomed-in look at the LCD panels through a powerful microscope. The screens output a resolution of 2,064 x 2,208 pixels per eye with a refresh rate of 120Hz. This is greater than the Quest Pro’s peak resolution of 1,920 x 1,800 pixels. The video explains that the Quest 3 can manipulate the intensity of color clusters, mixing everything into the high-quality visuals we see. Combining the LCD panels with the time of flight sensor results in a “much better [full-color] passthrough experience” than before.

Additionally, the headset has greater power behind it since it houses the Qualcomm Snapdragon 8 XR2 Gen 2 chipset.

Of course, iFixit took the time to judge the Quest 3 on its repairability and Meta did a good job on that front – for the most part. The controllers are easy to repair as their construction is relatively simple. They’re held together by a few screws, a magnet, and a series of ribbon cables at the top. Replacing the battery is also pretty easy as each half takes a single AA battery.

Awkward repairs

On the headset, it's a slightly different story. The battery on the main unit is replaceable, too. However, it’s located at the center of the device behind 50 screws, multiple coax cables, various connectors, a heatsink, and the mainboard. If you like to do your own repairs on your electronics, it may take you a while to fix the Quest 3.

Funnily enough, iFixit really makes a good case for why and how the Quest 3 is a better headset than the Quest Pro. Granted, it lacks face and eye tracking, but when you have a more immersive mixed reality, are people really going to miss them? Plus, it's half the price. If the Quest 3 is the new standard moving forward, it makes you wonder how Meta is going to improve on the Quest Pro 2 (assuming it’s in the works).

While we have you, check out TechRadar’s list of the best VR headsets for 2023.


Meta Quest 3 video leak shows off thinner design and new controllers

The Meta Quest 3 (aka the Oculus Quest 3) is now official, but isn't due to go on sale until September or October time. If you're keen for an earlier look at the virtual reality headset before then, an unboxing video has made its way online.

This comes from @ZGFTECH on X/Twitter (via Android Authority), and we get a full look at the new device and the controllers that come with it. Meta has already published promo images of the headset, but it's interesting to see it in someone's hands.

As revealed by Meta chief Mark Zuckerberg, the Meta Quest 3 is some 40% thinner than the Oculus Quest 2 that came before it. From this video it looks like the Quest 2's silicone face pad and cloth strap have been carried over to the new piece of hardware.

You may recall that the Quest 2 originally shipped with foam padding, before Meta responded to complaints of skin irritation by replacing the foam with silicone. That lesson now appears to have been learned with this new device and the Meta Quest Pro.


Take control

The controllers that come with the Meta Quest 3 look a lot like the ones supplied with the Meta Quest Pro, though these don't have built-in cameras. The ring design of the Oculus Quest 2 has been ditched, with integrated sensors and predictive AI taking over tracking duties, according to Meta.

As for the outer packaging, it's not particularly inspiring, featuring just the name of the device on the top. Presumably something a bit more eye-catching will be put together before the headset actually goes on sale.

It's not clear where the headset has been sourced from, but the device has clearly been in testing for a while. This is becoming something of a running theme too, because the Meta Quest Pro was leaked in similar fashion after being left behind in a hotel room.

We should get all the details about the Meta Quest 3, including the date when we'll actually be able to buy it, on September 27 at the Meta Connect event. TechRadar will of course bring you all the news from the show, and any further leaks that may emerge between then and now.


Google Photos now shows you an AI-powered highlights reel of your life

The Google Photos app is getting a redesigned, AI-powered Memories feature to help you find your life's highlights among the clutter of your everyday snaps and videos.

The Memories carousel, which Google says is used by half a billion people every month, was introduced four years ago at the top of the Android and iOS app. It automatically picks out what it considers to be your most important photos and videos, but Google is now making it a more central part of the app. 

From today in the US (other regions in the “coming months”) the Memories feature is moving to the bottom of the app's navigation bar and getting some useful new tricks. One of these is the ability to “co-author” Memories albums with friends and family, in a similar way to shared albums. 

This feature sounds particularly handy for big events like weddings, as you'll be able to invite friends or family to collaborate on specific Memories and add their photos and videos to help fill in the gaps. You can also save any Memories that are shared with you to your Google Photos account.

Google is also promising to add a couple of other new features to Memories soon. If you're struggling to think of a title for your collection of snaps (which we can't imagine is a major issue) then you'll be able to use generative AI to come up with some suggested names like “a desert adventure”. This is apparently an experimental feature, and only currently available to “select accounts in the US”.

Perhaps more useful will be the option of sharing your Memories as videos, which means you'll be able to send them to friends and family who aren't on Google Photos in messaging apps like WhatsApp. Google says this is “coming soon”, but unfortunately hasn't yet given a rough timescale. Knowing Google, that could be anything from three months to three years, but we'll update this article when we hear something more specific.

Google upgrades the photo album

An Android phone on an orange background showing a photo of a kitten being shared in the Google Photos Memories feature

You can now share Memories albums with other Google Photos users in the updated version of the app (above). (Image credit: Google)

While these are only minor tweaks to the Google Photos app, they do show that Google increasingly sees its cloud photo service as a social app.

The ability to “co-author” Memories albums is something that'll no doubt be used by millions for events like weddings, vacations, pets, and celebrations. And as Google Photos isn't used by everyone, the incoming option to share Memories as videos to WhatsApp groups and other messaging apps should also prove popular.

On the other hand, these AI-powered photo albums have also sparked controversy with their sometimes insensitive surfacing of unwanted memories and photos. Google says that its updated Memories view lets you quickly add or remove specific photos or videos, or hide particular memories, to help with this.

On the whole, the Memories feature is certainly an upgrade to having to pass around a physical photo album, and its AI powers will no doubt improve rapidly if half a billion people continue to use it every month. If it works as well as it does in the demos, it could effectively be an automated highlights reel of your life.


Windows 11 preview shows a File Explorer ready to recommend what you open next

Microsoft is currently rolling out new File Explorer features via Insider Preview Build 23403 on Windows 11 with a big focus on streamlining work.

One of the more interesting features of this package is File Recommendations. As the name suggests, File Explorer will begin suggesting which files you should open on the home tab. It appears Microsoft created this tool for business-centric users, at least initially: it will only recommend cloud files associated with a particular account, “either owned by the user, or shared with the user,” and you have to be signed in to your Azure Active Directory account, otherwise it doesn't work. Additionally, the company is limiting the number of people who will get to try out File Recommendations at this time. Microsoft states it wants to keep a close eye on feedback “before pushing it out to everyone.”

Less restricted are the new Access Keys for File Explorer. They’re simple, single keystroke shortcuts for “quickly [executing] a command.” For example, hitting the “O” key opens a file whereas pressing the “B” key sets it as a desktop background. To use this feature, you’ll have to first click on a file in File Explorer and then press the Menu key on your keyboard to make Access Keys pop up. If you don’t have a Menu key, hitting Shift and F10 at once does the same thing.

File Recommendations on File Explorer

File Recommendations on File Explorer (Image credit: Microsoft)

New updates

Moving past File Explorer, the rest of the features affect other native Windows 11 apps, mainly on the language side of things. For starters, Live Captions will be available in more languages, including Japanese and French, as well as other English dialects like Australian English. Speaking of which, the Voice Access app will now support those different dialects. Upon activating the app, “you will be prompted to download a speech model” for a specific dialect. Microsoft also redesigned Voice Access to make it more streamlined and easier to use. Each command will now have a description explaining what it does, next to an example of how it can be used.

For the rest of the build, it’s all a collection of small tweaks; nothing really major. Changes include a VPN icon now appearing in the System Tray if you have one active, a new copy button for “quickly copying [2FA] codes in notification[s]”, and some bug fixes. If this piques your interest, you can try out Preview Build 23403 by joining the Dev Channel of the Windows 11 Insider Program.

It's worth mentioning that Microsoft has been working on overhauling File Explorer for some time now. It's unknown exactly what the overhaul will include, but we’ve got a few hints, like File Explorer being redesigned to make it more user-friendly. However, it’ll probably still be a while until we see the final product. If you don’t feel like waiting until then, be sure to check out our list of the best file managers for Windows.


New Windows 11 update shows Microsoft still wants to take down the iPad

Microsoft has released a software preview for Windows 11 that will make using the operating system on tablet devices, and 2-in-1 laptops, much better.

As DigitalTrends reports, Windows 11 Insider Preview Build 22563, which has just been released to people signed up to receive early versions of Windows 11 to test, optimizes the taskbar on tablets and 2-in-1 devices.

In the new update, the taskbar now has two states: collapsed and expanded. When the taskbar is collapsed, it appears much thinner, giving you more screen real estate and helping to prevent accidental presses of taskbar buttons.

Meanwhile, the expanded mode makes the taskbar wider, allowing you to select items more easily, such as apps, using the touch screen.

Switching between the two modes looks pretty easy as well, and is done by simply swiping your finger up or down at the bottom of the tablet’s screen where the taskbar resides.

It seems that this version of the taskbar will only be available on Windows 11 tablets and 2-in-1 laptops, which have touchscreens that either detach from the keyboard, or can be folded back, and used as a tablet. Desktop PCs and traditional laptops won’t get this new taskbar.

As it’s currently in a Preview Build, it also means that regular Windows 11 users won’t see it just yet. However, if testing goes well and there’s a positive reaction from Windows Insiders, we could see the feature appear in a Windows 11 update sometime in the future.


Analysis: Microsoft’s tablet ambitions remain

Pics of Microsoft 8 2-in-1 PC

(Image credit: Microsoft India)

This new update shows that Microsoft’s tablet ambitions remain undeterred. While its rivals Apple and Google have found immense success with tablet devices, Microsoft has yet to do the same. Its attempts to take on the mighty iPad and gain tablet market share have been a mixed bag.

There was the deeply unpopular Windows 8, which dropped much of the classic interface of Windows, including the taskbar and Start menu, for an interface with large icons that was aimed at tablet use. The problem was, Windows 8 tablets were largely ignored, and desktop and laptop users hated having to put up with an interface that was designed for touchscreens they didn’t have.

Microsoft found more success with its Surface Pro line of 2-in-1 devices, alongside Windows 10, which struck a more even balance with an interface that was better suited to traditional PCs, while also having a tablet mode.

Surface Pro sales still lag behind iPad and Android tablet sales, but it seems Microsoft isn’t giving up. If Windows 11 continues to evolve to work even better on tablet devices, then this could be Microsoft’s best bet yet to take on Apple and Google.


Microsoft shows why Windows 11 needs TPM – even if some PCs are left out in the cold

Windows 11 security is something of a hot topic, as the revamped OS comes with much tighter defenses than Windows 10, but with the side-effect of creating controversy and confusion on the system requirements front (and indeed for gamers – more on that later).

However, Microsoft recently produced a video to show how Windows 11’s new protective measures – which include TPM (Trusted Platform Module), Secure Boot and VBS (Virtualization-Based Security) – help to make systems safer against hackers. Furthermore, it reminds us these moves are an extension of what was already happening with Windows 10 (but crucially, not on a compulsory level).

The clip stars Microsoft’s security expert Dave Weston who explains more about why this higher level of security, which entails the aforementioned raised hardware requirements – including support for TPM 2.0, which rules out a fair number of not-all-that-old PCs – is required to defend against some potentially nasty security breaches.

Weston shows how this nastiness could play out in real world situations, first of all demonstrating a remote attack leveraging an open RDP (remote desktop protocol) port, brute forcing the password, and then infecting the machine with ransomware. This was on a PC without TPM 2.0 and Secure Boot, and naturally, wouldn’t be possible on a Windows 11 system.

The second attack used for demo purposes is an in-person one using a PCI Leech device to access system memory and bypass fingerprint recognition to login. VBS stops this kind of attack being leveraged against a Windows 11 system, and the former remote attack is prevented by UEFI, Secure Boot and Trusted Boot (in conjunction with TPM).


Analysis: Land of confusion

This is an interesting look at the nuts-and-bolts of how these security countermeasures work against real life attacks. Clearly, in some scenarios there are good reasons for mandating TPM and the other mentioned security technologies to help keep a PC safer against a possible attack, whether that’s a remote or local intrusion.

No one is going to argue against better protection, but the issue with making these pieces of security tech a compulsory part of the system requirements is the confusion around whether or not a PC has these capabilities.

In some cases, newer machines do indeed have TPM on-board, it just isn’t enabled – leading to a frustrating situation where the owner of a modern device could be told it isn’t compatible with Windows 11. And while it might just be a case of switching TPM on, which isn’t difficult for a reasonably tech-savvy person, it could be very intimidating for a novice user (involving a trip to the BIOS, a scary place for the untrained eye).

VBS, or Virtualization-Based Security, has run into controversy as well: while it isn’t an issue for upgraders from Windows 10, it will be enabled by default on new PCs that come with Windows 11 – and it causes slowdown with gaming frame rates. By all accounts, VBS can be a pretty serious headwind for frame rates, and this again adds to the confusion around what’s going on with Windows 11 machines in general.

Having a more secure PC is great, without a doubt, but there are costs here which have a potentially negative impact on the experience of some users adopting (or trying to adopt) Windows 11.

Via Neowin


Too Hot To Handle on Netflix is the new Love Is Blind if you like steamy dating shows

Too Hot to Handle on Netflix is the spiritual successor to Love is Blind, and the new steamy dating show is now streaming on the service, just in time for the weekend.

The Netflix series casts a number of gregarious, good-looking singles, sends them to an island resort, and asks them to cohabitate for a few weeks. 

The catch here, because these shows always need a catch to stay relevant, is that they can’t… canoodle. If they can abstain from physical intimacy for the length of the contest, they’ll win $100,000 – but hey, either way it’s a win-win, amiright?

The series has eight 40-minute episodes that all dropped today… which will likely be gobbled up and all over social media by the time Sunday rolls around.

Does Netflix have the hots for trashy TV? 

So what's the deal with all these new dating shows on Netflix? While traditional cable has always relied on one or two of these types of shows to woo viewers during primetime, Netflix has traditionally shied away from going there. 

But that seemingly changed with The Circle, a game show about catfishing your opponents through a pseudo social network, and Love is Blind, which tasked contestants with going on a number of blind dates without seeing one another before picking a partner whom they’d marry at the end of the show. 

Honestly, you can't fault Netflix for falling into the same trap that other networks fall into – these shows are relatively cheap to make (there are no special effects or big-name actors) and they draw a lot of attention.

While this one probably won't hook me personally, it's nice to see Netflix keeping others entertained during a particularly un-fun time. 
