Adobe’s new AI music tool could make you a text-to-musical genius

Adobe is getting into the music business: the company is previewing a new experimental generative AI capable of making background tracks.

The tech doesn’t have an official name yet; for now, Adobe refers to it as Project Music GenAI. The way it works, according to the company, is that you enter a text prompt describing what you want to hear, be it “powerful rock,” “happy dance,” or “sad jazz.” Users will also be able to upload music files to the generative engine for further manipulation, and the workflow even includes editing tools for on-the-fly adjustments.

If any of this sounds familiar, that’s because we’ve seen this type of technology before. Last year, Meta launched MusicGen for creating short instrumentals, and Google opened the doors to its experimental audio engine, Instrument Playground. What sets Adobe’s tool apart, as far as we can tell, is that it offers easier yet still robust editing.
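Adobe hasn’t published an API for Project Music GenAI, but Meta’s openly released MusicGen gives a feel for what text-to-music prompting looks like in practice. Here’s a minimal sketch using Hugging Face’s transformers library and the public facebook/musicgen-small checkpoint; the prompt and output settings are purely illustrative and don’t reflect anything about Adobe’s tool:

```python
# Text-to-music with Meta's open MusicGen via Hugging Face transformers.
# pip install transformers scipy torch
import scipy.io.wavfile
from transformers import AutoProcessor, MusicgenForConditionalGeneration

processor = AutoProcessor.from_pretrained("facebook/musicgen-small")
model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small")

# The same kind of mood-based prompt Adobe describes ("happy dance", etc.)
inputs = processor(text=["happy dance"], padding=True, return_tensors="pt")

# ~256 audio tokens works out to roughly five seconds of audio.
audio = model.generate(**inputs, do_sample=True, guidance_scale=3.0, max_new_tokens=256)

rate = model.config.audio_encoder.sampling_rate  # 32 kHz for MusicGen
scipy.io.wavfile.write("happy_dance.wav", rate=rate, data=audio[0, 0].numpy())
```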

Project Music GenAI isn’t publicly available. However, Adobe did recently publish a video on its official YouTube channel showing off the experiment in detail. 

Adobe in concert

The clip primarily follows a researcher at Adobe demonstrating what the AI can do. He starts by uploading Habanera, from Georges Bizet’s opera Carmen, and then proceeds to change the melody via a prompt. In one instance, the researcher instructed Project Music to make Habanera sound like an inspirational film score; sure enough, the output became less playful and more uplifting. In another example, he gave the song a hip-hop-style accompaniment.

When it comes to generating fresh content, Project Music can make songs with different tempos and structures, with a clear delineation between the intro, the verse, the chorus, and other parts of the track. It can even create indefinitely looping music for videos, as well as fade-outs for the outro.

No experience necessary

These editing abilities may make Adobe’s Project Music better than Instrument Playground. Google’s engine has its own editing tools, but they’re difficult to use; it seems you need some production experience to get the most out of Instrument Playground. Project Music, on the other hand, aims to be more intuitive.

Meta’s MusicGen, for its part, has no editing tools at all: to make changes, you have to regenerate the song from scratch.

In a report by The Verge, Adobe states the current demo utilizes “public domain content” for generation. It’s not totally clear whether people will be able to upload their own files in the final release. A launch date for Project Music has yet to be revealed, although Adobe will be holding its Summit event in Las Vegas beginning March 26. We’ve reached out to the company for more information and will update this story accordingly.

In the meantime, check out TechRadar’s list of the best audio editors for 2024.


Google isn’t done trying to demonstrate Gemini’s genius and is working on integrating it directly into Android devices

Google’s newly reworked and rebranded family of generative AI models, Gemini, may still be at the very beginning of its development journey, but Google is making big plans for it. The company intends to integrate Gemini into Android software for phones, and users are expected to be able to access it offline in 2025, according to Brian Rakowski, a top executive in Google’s Pixel division.

Gemini is a series of large language models designed to understand and generate human-like text and more. The most compact and efficient of these is Gemini Nano, intended for on-device tasks; it’s the model currently built and adapted to run on Pixel phones and other capable Android devices. According to Rakowski, Gemini Nano’s larger sibling models, which require an internet connection because they live only in Google’s data centers, are the ones expected to be integrated into new Android phones starting next year.
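For a sense of that split, the data-center models are already reachable from code today. Below is a minimal sketch using Google’s public google-generativeai Python SDK with the gemini-pro model (the API key is a placeholder); on-device access to Nano goes through Android system services instead and isn’t shown here:

```python
# Calling one of Google's server-side Gemini models over the network.
# pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder; requires a real key

# gemini-pro lives in Google's data centers, so this call needs a connection.
# That dependency is exactly what on-device Nano-class models would remove.
model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize today's calendar in one sentence.")
print(response.text)
```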

Google has been able to do this thanks to recent breakthroughs in engineers’ ability to compress these bigger, more complex models to a size that’s feasible for smaller devices. One of these larger siblings is Gemini Ultra, considered a key competitor to OpenAI’s premium GPT-4 model, and the compressed version of it will be able to run on an Android phone with no extra assistance.
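Google hasn’t detailed which compression techniques it used. Quantization, which stores weights at lower numeric precision, is one standard way to shrink a model for phone-sized hardware; here’s a generic PyTorch sketch on a toy network that reflects the general idea rather than anything about Gemini’s internals:

```python
# Shrinking a model by quantizing weights from 32-bit floats to 8-bit ints.
# A generic PyTorch technique for illustration, not Google's actual method.
import os
import torch
import torch.nn as nn

# A tiny stand-in for a much larger language model.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))

# Dynamic quantization stores Linear weights as int8 and quantizes
# activations on the fly during inference, cutting weight memory ~4x.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of a model's weights, in megabytes."""
    torch.save(m.state_dict(), "/tmp/model.pt")
    return os.path.getsize("/tmp/model.pt") / 1e6

print(f"fp32: {size_mb(model):.1f} MB -> int8: {size_mb(quantized):.1f} MB")
```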

This would mean users could tap into the processing power Google is offering with Gemini whether or not they’re connected to the internet, potentially improving their day-to-day experience with it. It also means that, if Google designs it that way, whatever you enter into Gemini wouldn’t have to leave your phone to be processed, making it easier to keep your entries and information private; cloud-based AI tools have been criticized in the past for inferior digital security compared to locally run models. Rakowski told CNBC that what users will experience on their devices will be “instantaneous without requiring a connection or subscription.”

[Image: Three Android phones on an orange background showing the Google Gemini Android app (Image credit: Future)]

A potential play to win users' favor 

MSPowerUser points out that the smartphone market has cooled down of late, and some manufacturers might be trying to capture potential buyers’ attention by offering devices capable of utilizing what modern AI has to offer. But while AI is an incredibly rich and intriguing area of research, it might not be enough to convince people to swap their old phone (which may already be capable of running something like Gemini or ChatGPT) for a new one. For now, the AI makers hoping to raise trillions of dollars in funding are likely to offer versions that run on existing devices so people can try them for themselves, and my guess is that this satisfies most people’s AI appetites.

Google, Microsoft, Amazon, and others are all racing to develop their own AI models and assistants and be the first to reap the rewards. Today’s AI models are extremely impressive and often surprising, and they can help you at work (although caution should be heavily exercised if you use them that way), but their novelty is still the biggest draw they have.

These tools will have to demonstrate continuous quality-of-life improvements to make the kind of impression they’re aiming for. I do believe that making these models widely available on users’ devices, and giving users the option and capability to use them offline, is a step that could pay off for Google in the long run, and I would like to see other tech giants follow its path.
