Watch this: Adobe shows how AI and OpenAI’s Sora will change Premiere Pro and video editing forever

OpenAI's Sora gave us a glimpse earlier this year of how generative AI is going to change video editing – and now Adobe has shown how that's going to play out by previewing some fascinating new Premiere Pro tools.

The new features, powered by Adobe Firefly, effectively bring the kinds of tricks we've seen from Google's photo-focused Magic Editor – erasing unwanted objects, adding objects and extending scenes – to video. And while Premiere Pro isn't the first piece of software to do that, seeing these tools in an industry-standard app used by professionals is significant.

For a glimpse of what's coming “this year” to Premiere Pro and other video editing apps, check out the video below. A new Generative panel includes an 'add object' option that lets you type in an object you want to add to the scene. This appears to be for static objects, rather than things like a galloping horse, but it looks handy for b-roll and backgrounds.

Arguably even more helpful is 'object removal', which uses Firefly's AI-based smart masking to help you quickly select an object to remove, then make it vanish with a click. You can also combine the two tools to, for example, swap the watch someone's wearing for a non-branded alternative.

One of the most powerful new AI-powered features in photo editing is extending backgrounds – called Generative Fill in Photoshop – and Premiere Pro will soon have a similar feature for video. Rather than extending the frame's size, Generative Extend will let you add frames to a video to help you, for example, pause on your character's face for a little longer. 

While Adobe hasn't given these tools a firm release date, only revealing that they're coming “later this year”, it certainly looks like they'll change Premiere Pro workflows in several major ways. But the bigger AI video change could be yet to come…

Will Adobe really plug into OpenAI's Sora?

A laptop screen showing AI video editing tools in Adobe Premiere Pro (Image credit: Adobe)

The biggest Premiere Pro announcement, and also the most nebulous, was Adobe's preview of third-party model support for the editing app. In short, Adobe is planning to let you plug generative AI video tools – including OpenAI's Sora, Runway and Pika Labs – into Premiere Pro to sprinkle your videos with their effects.

In theory, that sounds great. Adobe showed an example of OpenAI's Sora generating b-roll with a text-to-video prompt, and Pika powering Generative Extend. But these “early examples” of Adobe's “research exploration” with its “friends” from the likes of OpenAI are still clouded in uncertainty.

Firstly, Adobe hasn't committed to launching the third-party plug-ins in the same way as its own Firefly-powered tools. That shows it's really only testing the waters with this part of the Premiere Pro preview. Also, the integration sits a little uneasily with Adobe's current stance on generative AI tools.

Adobe has sought to set itself apart from the likes of Midjourney and Stable Diffusion by highlighting that Adobe Firefly is trained only on the Adobe Stock image library, which is apparently free of commercial, branded and trademarked imagery. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” Adobe's VP of Generative AI, Alexandru Costin, told us earlier this year.

Yet a new report from Bloomberg claims that Firefly was partially trained on images generated by Midjourney (with Adobe suggesting that could account for 5% of Firefly's training data). And these previews of new alliances with generative video AI models, which are similarly opaque when it comes to their training data, again sit uneasily with Adobe's stance.

Adobe's potential get-out here is Content Credentials, a kind of nutrition label that's also coming to Premiere Pro and will add watermarks to clarify when AI was used in a video, and with which model. Whether this is enough for Adobe to balance making a commercially friendly pro video editor with keeping up in the AI race remains to be seen.

Google Bard AI’s addition to Messages could change the way we text forever

Google’s experimental AI chatbot Bard may be coming to the Google Messages app in the near future – and it promises to bring some major upgrades to your phone-based chats. 

Tipster Assembler Debug uncovered the feature in the beta code of the Google Messages app. The AI-enhanced features aren't yet available, and Assembler Debug says they don't appear to work yet. However, according to leaked images, you'll be able to use Bard to help you write text messages – for example, arranging a date or crafting a message calling in sick to your boss, alongside other difficult conversations.

Bard in Google Messages could also help to translate conversations and identify images, as well as explore interests. The code suggests it could provide book recommendations and recipe ideas, too.

According to an examination of its code, the app is believed to use your location data and past chat information to help generate accurate replies. You can also give feedback on Bard's responses with a thumbs up or down by long-pressing, as well as copy, forward, and favorite its answers, helping the AI learn whether its reply was appropriate.

The project codename “Penpal” was noted in a beta version (20240111_04_RC00) of the Google Messages app. According to 9to5Google's analysis of the beta code, Bard can be accessed via the “New conversation” option, where it appears as a stand-alone chat.

You must be eighteen years old to use it, and conversations with Bard in the Messages app aren't end-to-end encrypted or treated as private, unlike messages exchanged with your contacts. So you might want to avoid sending personal or sensitive messages through the app when Bard is enabled.

Google states that chat histories are kept for eighteen months to help enhance Bard and could be reviewed by a human, although no information is associated with your account beyond three years. Google recommends not saying anything to Bard that you wouldn't want others to see. Conversations with Bard could be reviewed by Google but aren't accessible to other users, and you can delete your chat history with Bard at any time, though it takes 72 hours for the data to be removed.

Echoes of Allo

Bard AI's inclusion in the Messages app is somewhat reminiscent of Google Allo, a past project that incorporated Google Assistant into both stand-alone requests and chats. That service was shut down in 2019, but it could live on in some way through this Bard integration.

When asked directly, Bard said: “While I can't say for certain right now, there are strong indications that I might become available with Google RCS messages in the future.”

Bard then went on to say that integration with Google Messages was being tested in March 2023, and that the functionality aligns with Bard's capabilities to process language, generate text, answer questions and summarize information, making it a natural fit for enhancing messages.

The integration of AI into messaging apps reflects many companies' eagerness to infuse AI technologies into their upcoming smartphones, with Samsung’s Galaxy AI features being a recent example. Google, however, is no stranger to AI tools in its phones, with features like Magic Eraser, Photo Unblur and Live Translate all being staples of Pixel devices.

The implications of AI being added to messages are also intriguing: you may never know whether that thoughtful reply or fantastic date idea was thought up by a human or their AI assistant.

Bard’s inclusion in Google's messaging app isn't yet available and no release date has been announced, so Google could still decide not to continue with the project. Alternatively, it could go the Samsung route and make the functionality a subscription-based feature. All of this is speculation right now, though, and we'll have to wait to see exactly how much Bard changes the Messages app in the future.

Apple may be working on a way to let LLMs run on-device and change your iPhones forever

Apple researchers have apparently discovered a method that’ll allow iPhones to host and run their own large language models (LLMs).

With this tech, future iPhone models may finally have the generative AI features people have been eagerly waiting for. This information comes from a pair of papers published on arXiv, a research-sharing platform owned by Cornell University. The documents are pretty dense and can be tricky to read, so we’re going to break things down for you. But if you're interested in reading them yourself, the papers are free for everyone to check out.

One of the main problems with putting an LLM on a mobile device is the limited amount of memory on the hardware. As VentureBeat explains in its coverage, recent AI models like GPT-4 “contain hundreds of billions of parameters”, a quantity smartphones have difficulty handling. To address this issue, the Apple researchers propose two techniques. The first is called windowing, a method where the on-board AI recycles already-processed data instead of loading new data, taking some of the load off the hardware.
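Neither paper includes production code, but the windowing idea can be pictured as a small cache that keeps recently loaded weights in memory and reuses them instead of re-reading them from flash storage. Here's a minimal, purely illustrative Python sketch of that pattern – the `SlidingWindowCache` class and the `load_from_flash` callable are hypothetical names, not Apple's implementation:

```python
from collections import OrderedDict


class SlidingWindowCache:
    """Illustrative sketch of 'windowing': keep data for the most
    recently used neurons in memory so it can be recycled instead of
    being re-read from slow flash storage. (Hypothetical code, not
    taken from Apple's paper.)"""

    def __init__(self, window_size: int):
        self.window_size = window_size
        self.cache: "OrderedDict[int, bytes]" = OrderedDict()

    def get(self, neuron_id: int, load_from_flash) -> bytes:
        if neuron_id in self.cache:
            # Fast path: this data was loaded recently, so recycle it.
            self.cache.move_to_end(neuron_id)
            return self.cache[neuron_id]
        # Slow path: read from flash, then remember it for next time.
        weights = load_from_flash(neuron_id)
        self.cache[neuron_id] = weights
        if len(self.cache) > self.window_size:
            self.cache.popitem(last=False)  # evict the oldest entry
        return weights
```

The effect is that consecutive inference steps, which tend to touch overlapping sets of weights, mostly hit the in-memory cache rather than the far slower flash storage.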

The second is called row-column bundling. This packs related data into bigger chunks for the AI to read in one go, a method that boosts the LLM’s ability to “understand and generate language”, according to MacRumors. The paper goes on to say these two techniques will let iPhones run AI models “up to twice the size of the available [memory]”. It’s a technology Apple must nail down if it wants to deploy advanced models “in resource-limited environments”; without it, the researchers' plans can't take off.
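Row-column bundling is easier to see with a toy example. The rough idea, per the paper's description, is to store the column of one projection matrix directly next to the corresponding row of another, so a single contiguous read fetches both at once. The sketch below uses NumPy and made-up layer sizes purely for illustration:

```python
import numpy as np

# Hypothetical MLP projection matrices (made-up sizes for illustration).
d_model, d_hidden = 4096, 11008
up_proj = np.zeros((d_model, d_hidden), dtype=np.float16)
down_proj = np.zeros((d_hidden, d_model), dtype=np.float16)

# Bundle: store the i-th column of up_proj right next to the i-th row
# of down_proj, since they are always needed together for neuron i.
bundled = np.concatenate([up_proj.T, down_proj], axis=1)  # (11008, 8192)


def read_bundle(i: int):
    """One contiguous read returns both pieces needed for neuron i."""
    chunk = bundled[i]  # a single sequential read
    return chunk[:d_model], chunk[d_model:]  # (up column, down row)
```

Doubling the size of each read matters because flash storage delivers far better throughput on large sequential reads than on many small, scattered ones.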

On-device avatars

The second paper centers on iPhones potentially gaining the ability to create animated 3D avatars, made from videos taken by the rear cameras through a process called HUGS (Human Gaussian Splats). Similar tech has existed before, but Apple’s version is said to render avatars 100 times faster than older methods while capturing finer details like clothing and hair.

It’s unknown exactly what Apple intends to do with HUGS or any of the techniques mentioned earlier. However, this research could open the door for a variety of possibilities, including a more powerful version of Siri, “real-time language translation”, new photography features, and chatbots. 

Powered-up Siri

These upgrades may be closer to reality than some might think.

Back in October, rumors surfaced claiming Apple is working on a smarter version of Siri that'll be boosted by artificial intelligence and sport some generative capabilities. One potential use case would be an integration with the Messages app, letting users ask it tough questions or have it finish up sentences “more effectively.” As for chatbots, there have been other rumors of the tech giant developing a conversational AI called Ajax, and some people have also thrown around “Apple GPT” as a potential name.

There's no word on when Apple’s AI projects will see the light of day, though there has been speculation that something could roll out in late 2024 alongside the launch of iOS 18 and iPadOS 18.
