Adobe’s next big project is an AI that can upscale low-res video to 8x its original resolution

A group of Adobe researchers recently published a paper on a new generative AI model called VideoGigaGAN, and we believe it may find its way into a future product. The model upscales low-quality videos to up to eight times their original resolution without sacrificing temporal stability or important details of the source material. Several demo clips on the project’s website show off its abilities: one turns a blurry 128×128 video of a waterfall into footage running at a crisp 1,024×1,024.

What’s noteworthy about the AI is that it doesn’t skimp on the finer details. Skin texture, wrinkles, strands of hair, and more are visible on the faces of human subjects, and the other demos show a similar level of quality: you can better make out a swan swimming in a pond and the blossom on a tree. It may seem bizarre to focus so much on skin wrinkles or feathers, but it is exactly this level of detail that companies like Adobe must nail down if they aim to deploy image-enhancing AI on a wide scale.

Improving AI

You probably have a couple of questions about Adobe’s latest project, chief among them: how does it work? Well, it’s complicated.

The “GAN” in VideoGigaGAN stands for generative adversarial network, a type of AI model that learns to create realistic images by pitting two neural networks against each other: a generator that produces images and a discriminator that tries to tell the fakes from real samples. Adobe’s version is specifically based on GigaGAN, which specializes in upscaling generated content as well as real photos. The problem with this tech, as The Verge points out, is that it can’t improve the quality of videos without multiple problems cropping up, like weird flickering artifacts. To solve this, Adobe’s researchers used a variety of techniques.
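
To give a flavor of the adversarial idea, here’s a minimal, generic sketch of a single GAN training step in PyTorch. To be clear, this is textbook illustration code, not GigaGAN or anything from Adobe, and the network sizes are arbitrary placeholders.

```python
# Generic GAN training step for illustration only -- not Adobe's model.
import torch
import torch.nn as nn

# Toy generator and discriminator; layer sizes are arbitrary.
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(8, 784)    # stand-in for a batch of real images
noise = torch.randn(8, 16)   # random latent vectors

# Discriminator step: learn to score real samples as 1 and fakes as 0.
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: learn to produce fakes the discriminator scores as real.
loss_g = bce(D(G(noise)), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Repeated over millions of batches, that tug-of-war between the two networks is what pushes a GAN’s output toward photorealism.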

The research paper explains the whole process, and you can read it yourself to get the full picture, although it is dense material. In short, the researchers introduced a “flow-guided propagation module” to ensure consistency across a video’s frames, anti-aliasing to reduce artifacts, and a “high-frequency feature shuttle” to compensate for sudden drops in fine detail. There is more to VideoGigaGAN than what we just described, but that’s the gist of it.
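
For the curious, here’s a heavily simplified toy reconstruction of how those pieces could fit together in an upscaling loop. This is our own illustrative sketch based on the paper’s description, not Adobe’s code: the layer shapes, the simple feature blend, and the blur-based high-frequency residual are all assumptions, and the paper’s anti-aliased sampling blocks are omitted entirely.

```python
# Toy flow-guided 8x video upscaler -- an illustrative sketch, not VideoGigaGAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

def flow_warp(feat, flow):
    """Warp a (C, H, W) feature map along a (2, H, W) optical-flow field."""
    _, h, w = feat.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs + flow[0], ys + flow[1]), dim=-1)
    grid[..., 0] = 2 * grid[..., 0] / (w - 1) - 1  # normalize x to [-1, 1]
    grid[..., 1] = 2 * grid[..., 1] / (h - 1) - 1  # normalize y to [-1, 1]
    return F.grid_sample(feat[None], grid[None], align_corners=True)[0]

class ToyVideoUpscaler(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.encode = nn.Conv2d(3, ch, 3, padding=1)
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)  # current + warped previous
        self.decode = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, frames, flows):
        # frames: (T, 3, H, W); flows[t]: motion from frame t-1 to t, (2, H, W)
        out, prev = [], None
        for t, frame in enumerate(frames):
            feat = F.relu(self.encode(frame[None]))[0]
            if prev is not None:
                # "Flow-guided propagation": carry the previous frame's
                # features forward along the motion for temporal consistency.
                warped = flow_warp(prev, flows[t])
                feat = F.relu(self.fuse(torch.cat([feat, warped])[None]))[0]
            prev = feat
            # Crude stand-in for the "high-frequency feature shuttle": keep a
            # residual of fine detail and add it back before upsampling.
            hf = feat - F.avg_pool2d(feat[None], 3, stride=1, padding=1)[0]
            up = F.interpolate((feat + hf)[None], scale_factor=8, mode="bilinear")
            out.append(self.decode(up)[0])
        return torch.stack(out)

frames = torch.rand(4, 3, 16, 16)  # four tiny low-res frames
flows = torch.zeros(4, 2, 16, 16)  # dummy zero-motion flow
print(ToyVideoUpscaler()(frames, flows).shape)  # torch.Size([4, 3, 128, 128])
```

The real model is far more sophisticated, with a GigaGAN backbone and a proper flow estimator, but the skeleton above shows roughly where temporal propagation and a detail-preserving skip connection would slot in.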

Potential inclusion

Will we see this tech in an upcoming Adobe product, or will it roll out as a standalone app? We think a release in some form is likely.

In the past year, the company has been focusing heavily on building artificial intelligence into its software, from the launch of Firefly to Acrobat’s new AI assistant. A few months ago, at Adobe MAX 2023, the company previewed a video upscaler referred to as Project Res Up, and its performance resembles what we see in the VideoGigaGAN demos. An old movie from the 1940s goes from running at 480×360 to a crisp 1,280×960. Blurry footage of an elephant in a river becomes crystal clear. The presenter even mentions that the software can upscale a clip to four times its original resolution.

Admittedly, this is conjecture, but it’s entirely possible VideoGigaGAN is the engine behind Project Res Up. A future Adobe product could give people a way to turn old family videos or other low-quality footage into the crisp movie we envision in our minds. Perhaps the recent preview is a hint at an imminent release.

VideoGigaGAN is still deep in development, so it’s unknown when, or if, it’ll come out, and there are several obstacles in the way: the AI can’t properly process videos longer than 200 frames or render small objects well. Still, we’ll definitely be keeping an eye on it.

In the meantime, check out TechRadar's list of the best AI image upscalers for 2024.


Adobe’s new beta Express app gives you Firefly AI image generation for free

Adobe has released a new beta version of its Express app, letting users try out its Firefly generative AI on mobile for the first time.

The AI functions much like Firefly on the web since it has a lot of the same features. You can have the AI engine create images from a single text prompt, insert or remove objects from images, and add words with special effects. The service also offers resources like background music tracks, stock videos, and a content scheduler for posting to social media platforms. It’s important to mention that all these features and more normally require a subscription to Adobe Express Premium, but according to the announcement, everything will be available for free while the beta is ongoing. Once it’s over, you’ll have to pay the $10-a-month subscription to keep using the tools.

Adobe Express with Firefly features

(Image credit: Adobe)

Art projects from the current Express app will not be found in the beta – at least not right now. Ian Wang, vice president of product for Adobe Express, told The Verge that once Express with Firefly exits beta, all the “historical data from the old app” will carry over to the new one.

The new replacement

Adobe is planning on making Express with Firefly the main platform moving forward. It’s unknown when the beta will end. A company representative couldn’t give us an exact date, but they told us the company is currently collecting feedback for the eventual launch. When the trial period ends, the representative stated, “All eligible devices will be automatically updated to the new [app]”.

We managed to gain access to the beta, and the way it works is pretty simple. Upon installation, you’ll see a revolving carousel of the AI tools at the top. For this quick demo, we’ll have Firefly make an image from a text prompt: tap the option, then enter whatever you want to see from the AI.

Adobe Express with Firefly demo

(Image credit: Future)

Give it a few seconds to generate the content, and you’ll be presented with multiple pictures to choose from. From there, you can edit the image to your liking. Once you’re done, you can publish the finished product on social media or share it with someone.

Availability

Android users can download the beta directly from the Google Play Store. iPhone owners, on the other hand, will have a harder time. Apple has restrictions on how many testers can have access to beta software at a time. iOS users will instead have to join Adobe’s waitlist first and wait to get chosen. If you’re one of the lucky few, the company will guide you through the process of installing the app on your iPhone.

There is a system requirements page listing all of the smartphones eligible for the beta; however, it doesn’t appear to be a particularly strict list. The device we used was a OnePlus Nord N20, and it ran the app just fine. Adobe’s website also lists all the supported languages, which include English, French, Korean, and Brazilian Portuguese.

Check out TechRadar's list of the best photo editors for 2024 if you want more robust tools.


Adobe’s new AI music tool could make you a text-to-musical genius

Adobe is getting into the music business, previewing a new experimental generative AI capable of making background tracks.

It doesn’t have an official name yet; for now, the tech is referred to as Project Music GenAI. The way it works, according to Adobe, is that you enter a text prompt describing what you want to hear, be it “powerful rock,” “happy dance,” or “sad jazz.” Additionally, users will be able to upload music files to the generative engine for further manipulation, and there will even be some editing tools in the workflow for on-the-fly adjustments.

If any of this sounds familiar, that’s because we’ve seen this type of technology several times before. Last year, Meta launched MusicGen for creating short instrumentals, and Google opened the doors to its experimental audio engine, Instrument Playground. What’s different about Adobe’s tool is that it offers easier yet still robust editing – as far as we can tell.

Project Music GenAI isn’t publicly available. However, Adobe did recently publish a video on its official YouTube channel showing off the experiment in detail. 

Adobe in concert

The clip primarily follows an Adobe researcher demonstrating what the AI can do. He starts by uploading Habanera from Georges Bizet’s opera Carmen and then proceeds to change the melody via a prompt. In one instance, he instructed Project Music to make Habanera sound like an inspirational film score. Sure enough, the output became less playful and more uplifting. In another example, he gave the song a hip-hop-style accompaniment.

When it comes to generating fresh content, Project Music can make songs with different tempos and structures, with a clear delineation between the intro, the verse, the chorus, and other parts of the track. It can even create indefinitely looping music for videos, as well as fade-outs for the outro.

No experience necessary

These editing abilities may make Adobe’s Project Music better than Instrument Playground. Google’s engine has its own editing tools; however, they’re difficult to use, and it seems you need some production experience to get the most out of them. Project Music, on the other hand, aims to be more intuitive.

And if you're curious to know, Meta's MusicGen has no editing tools. To make changes, you have to remake the song from scratch.

In a report by The Verge, Adobe states the current demo utilizes “public domain content” for generation, and it’s not totally clear whether people will be able to upload their own files to the final release. A launch date for Project Music has yet to be revealed, although Adobe will be holding its Summit event in Las Vegas beginning March 26. We’ve reached out to the company for more information and will update this story if we hear back.

In the meantime, check out TechRadar's list of the best audio editors for 2024.


Adobe’s new photo editor looks even more powerful than Google’s Magic Editor

Adobe MAX 2023 is less than a week away, and to promote the event, the company recently published a video teasing its new “object-aware editing engine” called Project Stardust.

According to the trailer, the feature can identify individual objects in a photograph and instantly separate them into their own layers. Those same objects can then be moved around on-screen or deleted. Selection can be done either manually or automatically via the Remove Distractions tool; the software appears to understand the difference between the main subjects in an image and the background people you might want to get rid of.

What’s interesting is that moving or deleting something doesn’t leave behind a hole: the empty space is filled in, most likely by a generative AI model. Plus, you can clean up any leftover evidence of a deleted item. In its sample image, Adobe erases a suitcase held by a female model and then edits her hand so that she’s holding a bouquet of flowers instead.

Project Stardust editing

(Image credit: Adobe)

Project Stardust generative AI

(Image credit: Adobe)

The same tech can also be used to change articles of clothing in pictures. A yellow down jacket can be turned into a black leather jacket or a pair of khakis into black jeans. To do this, users will have to highlight the piece of clothing and then enter what they want to see into a text prompt. 

Stardust replacement tool

(Image credit: Adobe)

AI editor

Functionally, Project Stardust operates similarly to Google’s Magic Editor, a generative AI tool present on the Pixel 8 series. That tool lets users highlight objects in a photograph and reposition them however they please, and it, too, can fill gaps in images by creating new pixels. However, Stardust feels much more capable. The Pixel 8 Pro’s Magic Eraser can fill in gaps, but neither it nor Magic Editor can generate brand-new content from a prompt. Additionally, Google’s version requires manual input, whereas Adobe’s software doesn’t need it.

Seeing these two side by side, we can’t help but wonder whether Stardust is actually powered by Google’s AI tech. Very recently, the two companies announced they were entering a partnership and offering a free three-month trial of Photoshop on the web to people who buy a Chromebook Plus device. Perhaps this “partnership” runs a lot deeper than free Photoshop, considering how similar Stardust is to Magic Editor.

Impending reveal

We should mention that Stardust isn't perfect. If you look at the trailer, you'll notice some errors, like random holes in the leather jacket and strange warping around the flower model's hands. But this may simply be Stardust at an early stage.

There is still a lot we don’t know, like whether it will be a standalone app or be housed in, say, Photoshop, and whether Stardust will release in beta first or arrive as a final version. All will presumably be answered on October 10 when Adobe MAX 2023 kicks off. What’s more, the company will be showing off other “AI features” coming to “Firefly, Creative Cloud, Express, and more.”

Be sure to check out TechRadar’s list of the best Photoshop courses online for 2023 if you’re thinking of learning the software, but don’t know where to start. 


Adobe’s free version of Firefly finally exits beta – here’s how to access it

Adobe has announced it is expanding the general availability of its Firefly generative AI tool on the company’s free Express platform. 

More specifically, the Text to Image and Text Effects tools are finally exiting their months-long beta. The former, as the name suggests, allows users to create unique images just by entering a text prompt, such as horses galloping or a monster in a forest. The latter lets people create floating text bubbles with fonts sporting special effects. The two are mainly used to create compelling content for a variety of use cases, from enhancing plain-looking resumes to making marketing material. Apparently, the tools were a huge hit with users during the beta.

Firefly’s text features are available in over 100 languages, including Spanish, French, Japanese, and, of course, English. What’s interesting is that Adobe tells us the AI is “safe for commercial use.” Presumably, this means the model won’t generate anything inappropriate or totally random; what it does generate will fit the prompt you entered.

How to use Firefly

Using the generative AI tools is very easy and honestly takes no time at all. First, head over to the Adobe Express website and create an account if you haven’t done so already. Scroll down a little on the front page, and you’ll see the creation tools primed and ready to go.

Adobe Express website

(Image credit: Future)

Enter whatever text prompt you have in mind, give Adobe Express a few seconds to generate the content, and you’re set. You can then edit the image further if you’d like via the kit on the left-hand side.

Adobe Firefly

(Image credit: Future)

Future updates

The rest of the Firefly update is mainly geared towards an entrepreneurial audience. Subscribers to either Adobe Creative Cloud or Express Premium will begin to receive Generative Credits that can be used to have Firefly create content. Additionally, the AI is being integrated into an Adobe asset library for businesses. There aren’t any new features for everyday, casual users – at least not right now. 

Adobe states it plans to expand its Express platform in the coming months. Most notably, it wants to bring the “latest version” to mobile devices, so we might see the Firefly AI on smartphones by the end of the year. We’ve reached out to Adobe for clarification and will update this story when we learn more.

While we have you, be sure to check out TechRadar’s list of the best AI art generators for 2023. Any one of these is a good alternative to Firefly.
