YouTube may be planning to give us new AI song generators this year – and this time the music labels could let it happen

The battle between the music industry and the rampant, often copyright-infringing use of AI to compile training data sets has been heating up for quite some time. But now YouTube is reportedly negotiating with record labels to pay for that privilege instead.

It seems that Sony Music Entertainment, Universal Music Group, and Warner Records are in talks with the Google-owned platform about paying to license their songs for AI training, according to an article from the Financial Times (and reported on by Engadget). However, if this deal goes through, the individual artists, not the record companies, will most likely have the last word on their participation.

It’s no coincidence that YouTube is focusing on these giants, either. AI music makers Suno and Udio have recently been hit with major copyright-infringement lawsuits filed by the Recording Industry Association of America (RIAA), backed by the likes of Sony Music Entertainment, UMG Recordings, Inc., and Warner Records, Inc.

Furthermore, this isn’t even the first time YouTube has reportedly explored ways to properly compensate music artists for generative AI use. In August 2023, the video platform announced a partnership with Universal Music Group to create YouTube’s Music AI Incubator, a program that works with music industry talent, including artists, songwriters, and producers, to decide how to proceed with the advent of AI music.

Artists have been quite outspoken about generative AI use and music 

Judging from artists' past responses on the subject of AI, many of them have been very outspoken about its dangers and how it devalues their music. In April 2024, over 200 artists signed an open letter calling for protections against the predatory use of AI.

In a statement by the Artist Rights Alliance, those artists wrote: “This assault on human creativity must be stopped. We must protect against the predatory use of AI to steal professional artists' voices and likenesses, violate creators' rights, and destroy the music ecosystem.”

Even artists who are more open to generative AI in music, and have even benefited from it, ask to be properly included in any decision-making regarding such use, as asserted in an open letter from Creative Commons released in September 2023.

According to said letter: “Sen. Schumer and Members of Congress, we appreciate…that your goal is to be inclusive, pulling from a range of ‘scientists, advocates, and community leaders’ who are actively engaged with the field. Ultimately, that must mean including artists like us.”

The general consensus from creatives in the music industry is that, whether for or against generative AI use, artists must be included in conversations and policy-making and that their works must be properly protected. And considering that artists are the ones with the most to lose, this is by far the best and most ethical way to approach this issue.

You might also like

TechRadar – All the latest technology news

Read More

AI music makers face recording industry legal battle of the bands that could spell trouble for your AI-generated tunes

Artificial intelligence music makers Suno and Udio have been hit with major lawsuits filed by the Recording Industry Association of America (RIAA) and major music labels for copyright infringement. The suits mark the latest battle over generative AI and synthetic media and the debate over whether they represent original creations or infringement of intellectual property rights.

The RIAA was joined by Sony Music Entertainment, UMG Recordings, Inc., and Warner Records, Inc. in the lawsuits. Suno was sued in the United States District Court for the District of Massachusetts, while Udio developer Uncharted Labs, Inc., was sued in the United States District Court for the Southern District of New York. The complaints allege that both companies have copied and exploited copyrighted sound recordings without permission.

Both Suno and Udio translate text prompts into music, much like other tools can create images or videos based on a user’s suggestion. While there are plenty of other music AI developers, Suno and Udio were likely picked because of their relatively successful products. Suno AI is part of the Microsoft Copilot generative AI assistant, while Udio went viral for the creation of “BBL Drizzy.” The recording agencies say the music generated by the AI models is not original but just a reworking of copyrighted material. Notably, the groups suing are making an effort to sound like they aren’t against the tech, just how it’s used by those companies. 

“The music community has embraced AI and we are already partnering and collaborating with responsible developers to build sustainable AI tools centered on human creativity that put artists and songwriters in charge,” RIAA Chairman and CEO Mitch Glazier said in a statement. “But we can only succeed if developers are willing to work together with us. Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all.”

Press pause

This could be pivotal in the fight over music AI, which has been escalating for a while. The viral deepfakes from Ghostwriter and his multiple synthetic songs featuring voice clones of real artists attest to the growing interest in this technology – and, to the RIAA, its danger.

TikTok and YouTube have also been drawn into the fray. Earlier this year, music by UMG artists, including Taylor Swift, was temporarily removed from TikTok due to unresolved licensing issues, partly driven by concerns over AI-generated content. In response to similar issues, YouTube introduced a system last fall to remove AI-generated music upon the request of rights holders. In May, Sony Music issued warnings to hundreds of tech companies about the unauthorized use of copyrighted material, signaling the industry’s proactive stance against unlicensed AI-generated music.

The RIAA wants the courts to rule that Suno and Udio infringed its members’ copyrights, make them pay for it, and stop them from continuing to do so. Unsurprisingly, the companies being sued disagree.

“Our technology is transformative, it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content,” Suno CEO Mikey Shulman said in a statement. “We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.” 

The lawsuit won’t immediately affect Suno and Udio or their customers, barring some unlikely early ruling from the courts. But a legal battle at this level suggests any easy compromise is off the table. It may, however, speed up the timetable for the creation of a regulatory framework and accompanying laws to back it up.

Depending on how that goes, people using Suno, Udio, and other AI audio makers may have to remove the music from anything they have published. I wouldn’t stake everything on the current AI music scene staying the same, but the technology will almost certainly still be around regardless of the lawsuit, just perhaps with new controls and official approval of any songs for training AI models.


Ray-Ban Meta smart glasses get new Amazon Music and mental health update

In a sign that they could follow the roughly monthly software update schedule we’re used to from the Meta Quest 3, the Ray-Ban Meta smart glasses have received several new tools, such as Calm integration and improved Instagram support, just 29 days after their massive April 23 upgrade.

While none of these improvements appears to include wider access to the still-in-beta, still-US-and-Canada-exclusive Meta AI, they do include some helpful hands-free features that users can enjoy right now.

The first is a set of new Meta AI prompts that allow you to enjoy guided meditation, mindfulness exercises, and self-care content through your smart specs by simply saying “Hey Meta, play the Daily Calm.” New Calm users will also be able to access a three-month free subscription through prompts in the Meta View app.

Beyond this, your smart specs can now stream tunes directly from Amazon Music using voice-only controls (joining Apple Music, which added hands-free support in April) – you just need to connect your account via the Meta View app. There are new Instagram Story sharing options, too.

Simply say something like, “Hey Meta, post a photo to Instagram” to snap a pic that’ll be shared automatically to your connected Instagram account.

As the Meta blog post sharing details of the update explains, these new Ray-Ban Meta smart glasses features are rolling out gradually. So if you don’t see the update in the Meta View app, don’t panic – you should get the update soon enough.

Three new styles

The Skyler in Shiny Chalky Gray (above) are one of three new versions of the Ray-Ban Meta smart glasses (Image credit: Meta / Ray-Ban)

If you don’t like waiting for software changes, there are also some hardware updates – which are available now.

The internal specs are exactly the same as the original models’, but Meta and Ray-Ban have launched three new styles, which are available in the US, Canada, Australia, and “throughout Europe.” They are:

  • Skyler in Shiny Chalky Gray with Gradient Cinnamon Pink Lenses
  • Skyler in Shiny Black with Transitions® Cerulean Blue Lenses
  • Headliner Low Bridge Fit in Shiny Black with Polar G15 Lenses

Hopefully this monthly software schedule will continue, and if it does, those of us outside the US might not have to wait too much longer for Meta AI to hit our devices in a future update.


OpenAI’s Sora just made another brain-melting music video and we’re starting to see a theme

OpenAI's text-to-video tool has been a busy bee recently, helping to make a short film about a man with a balloon for a head and giving us a glimpse of the future of TED Talks – and now it's rustled up its first official music video for the synth-pop artist Washed Out (below).

This isn't the first music video we've seen from Sora – earlier this month we saw this one for independent musician August Kamp – but it is the first official commissioned example from an established music video director and artist.

That director is Paul Trillo, an artist who's previously made videos for the likes of The Shins and shared this new one on X (formerly Twitter). He said the video, which flies through a tunnel-like collage of high school scenes, was “an idea I had almost 10 years ago and then abandoned”, but that he was “finally able to bring it to life” with Sora.

It isn't clear exactly why Sora was an essential component for executing a fairly simple concept, but it helped make the process much simpler and quicker. Trillo points to one of his earlier music videos, The Great Divide for The Shins, which uses a similar effect but was “entirely 3D animated”.

As for how this new Washed Out video was made, it required less non-Sora help than the Shy Kids' Air Head video, which involved some lengthy post-production to create the necessary camera effects and consistency. For this one, Trillo said he used text-to-video prompts in Sora, then cut the resulting 55 clips together in Premiere Pro with only “very minor touch-ups”.

The result is a video that, like Sora's TED Talks creation (which was also created by Trillo), hints at the tool's strengths and weaknesses. While it does show that digital special effects are going to be democratized for visual projects with tight budgets, it also reveals Sora's issues with coherency across frames (as characters morph and change) and its persistent sense of uncanny valley.

Like the TED Talks video, this one gets around those limitations with a dreamy fly-through style, which ensures that characters are only on-screen fleetingly and that any weird morphing reads as part of the look rather than a jarring mistake. While it works for this video, it could quickly become a trope if it's overused.

A music video tradition

Two people sitting on the top deck of a bus

(Image credit: OpenAI / Washed Out)

Music videos have long been pioneers of new digital technology – the Dire Straits video for Money For Nothing in 1985, for example, gave us an early taste of 3D animation, while Michael Jackson's Black Or White showed off the digital morphing trick that quickly became ubiquitous in the early 90s (see Terminator 2: Judgment Day).

While music videos lack the cultural influence they once did, it looks like they'll again be a playground for AI-powered effects like the ones in this Washed Out creation. That makes sense because Sora, which OpenAI expects to release to the public “later this year”, is still well short of being good enough to be used in full-blown movies.

We can expect to see these kinds of effects everywhere by the end of the year, from adverts to TikTok promos. But like those landmark effects in earlier music videos, they will also likely date pretty quickly and become visual cliches that go out of fashion.

If Sora can develop at the same rate as OpenAI's flagship tool, ChatGPT, it could evolve into something more reliable, flexible, and mainstream – with Adobe recently hinting that the tool could soon be a plug-in for Adobe Premiere Pro. Until then, expect to see a lot more psychedelic Sora videos that look like a mashup of your dreams (or nightmares) from last night.


Ray-Ban’s Meta glasses now let you listen to Apple Music with voice controls for maximum nerd points

The Ray-Ban Meta smart glasses are still waiting on their big AI update – which is set to take features like ‘Look and Ask’ out of the exclusive beta and make them available to everyone – but while we wait, a useful upgrade has just rolled out to the specs.

The big feature for many will be native Apple Music controls (via 9to5Mac). Previously you could play Apple Music through the Ray-Ban Meta glasses by using the app on your phone and the touch controls on the glasses’ arms, but this update allows you to use Meta AI voice controls to play songs, playlists, albums, and stations from your music library for a hands-free experience.

The update also brings new touch controls: touch and hold the side of the glasses to have Apple Music automatically play tracks based on your listening history.

The Apple Music app icon against a red background on an iPhone.

(Image credit: Brett Jordan / Unsplash)

Beyond Apple Music integration, the new update also allows you to use the glasses as a video source for WhatsApp and Messenger calls. This improves on the pre-existing interoperability that lets you send messages, as well as images or videos captured using the glasses, to contacts in these apps using Meta AI.

You can also access a new command, “Hey Meta, what song is this?”, to have your glasses tell you what song is playing through your smart specs. This isn’t quite as useful as recognizing tracks playing in public as you walk around, but it could be handy if you like building playlists of new and unfamiliar artists.

To update your glasses to the latest version, simply go to the Meta View App, go to Settings, open the Your Glasses menu option, then Updates. You’ll also want to have your glasses to hand and make sure they’re turned on and connected to your phone via Bluetooth. If you can’t see the update – and your phone says it isn’t already on version 4.0 – then check the Play Store or App Store to see if the Meta View app itself needs an update.


Google’s Gemini AI app could soon let you sync and control your favorite music streaming service

Google's latest AI experiment, Gemini, is about to get a whole lot more useful thanks to apparent support for third-party music streaming services like Spotify and Apple Music. The new development was spotted in Gemini’s settings, which suggest users will be able to pick a preferred streaming service to use within Gemini.

Gemini has been taking shifts all around different Google products, working as a digital assistant sometimes in place of and sometimes in tandem with Google Assistant.

It’s still somewhat limited compared to Assistant, and it’s not yet at the stage where it can fully replace the Google staple. One of these limitations is that it can’t enlist a streaming service of the user’s choice to play a song or other audio recording, as many popular digital assistants (including Google Assistant) can. This might not be the case for long, however.

The tech blog PiunikaWeb and X user @AssembleDebug claim that Gemini is getting the feature, and they have screenshots to back up their claim. 

Screenshots from PiunikaWeb’s tipster show that the Gemini app’s settings now have a new “Music” option, with text reading “Select preferred services used to play music” underneath. This will presumably allow users to choose from whatever streaming services Google deems compatible.

Once you choose a streaming service, Gemini will hopefully work seamlessly with that service and enable you to control it using voice commands. PiunikaWeb suggests that users will be able to use Gemini for song identification, possibly by letting Gemini listen to the song, and then interact with a streaming app to try and find the song that’s playing in their surroundings, similar to the way Shazam works. If that’s the case, that’s one fewer separate app you’ll need.

What we don't know yet, but hope to soon

Woman listening music on her headphones while resting on couch and holding her phone and looking out in the distance

(Image credit: Shutterstock/Dean Drobot)

This is all very exciting, and from the screenshots it looks like the feature is well into development.

It’s not clear if PiunikaWeb’s tipster could get the feature to actually work or which streaming services will work in sync with Gemini, and we don’t know when Google will roll this feature out. 

Still, it’s a highly requested feature and a must if Google plans for Gemini to take Assistant’s place, so it’ll probably be rolled out in a future Gemini update. It also suggests to me that Google is pretty committed to expanding Gemini’s repertoire so that it joins the ranks of Google’s other popular products and services.


Spotify’s rumored remix feature could completely change how we listen to music

Spotify is reportedly working on adding remixing tools to its streaming service, giving users a way to reimagine their favorite tracks. 

The news comes from The Wall Street Journal (WSJ), whose sources state people will be able to “speed up, mash-up, and otherwise edit songs” however they want. The article explains that one of the purported additions is a playback feature for controlling how fast or slow a track plays. When you’re finished with a remix, you can then share it with other Spotify users, but not to third-party platforms or social media; licensing agreements will prevent people from sharing their creations outside the service.

The availability of these tools will differ depending on the type of Spotify subscription you have. The “more basic features”, such as speed control, will be on the basic plan; however, the “advanced song modification features” will be on the company’s long-rumored Supremium tier.

Imminent launch

Several lines of code discovered by Reddit user Hypixely on the Spotify subreddit reveal that the company plans to introduce the remix feature as a “Music Pro” add-on. Accompanying text also mentions lossless audio arriving on the platform, which could be referring to Supremium; the name of the plan isn’t explicitly stated, but the clues are there. The fact that lossless was mentioned alongside the remix update could hint at an imminent release for both, although it may still be a while before we see either one.

According to The Wall Street Journal, the platform is currently hashing out the details with music rights holders. Development is still in the early stages, but once everything comes out, it could upend the way we enjoy music.

Analysis: if you can't beat them…

Arguably, some of the more popular versions of songs are remixes. Fan reinterpretations can alter the meaning of the original and even serve as an introduction to a new generation. As the WSJ points out, people like to add their own unique twists on a classic or edit them for dance challenges or memes. That type of content can be a very effective way of discovering new music. How many times have you seen people in the comments section asking for the source of a song or movie or whatever? It’s quite common.

As great as fan remixes may be, they’ve apparently become a bit of a problem: musicians and labels don’t get paid for the content utilizing their work. The WSJ mentions how a “sped-up cover version” of the song “Somewhere Only We Know” by the rock band Keane has racked up over 33 million streams on Spotify. Record executives see this and are pressuring these platforms to do something.

There are different solutions to this problem, and Spotify has chosen the path of “if you can’t beat ’em, join ’em.” Rather than ban the content, the company is choosing to embrace the remixes: people can be creative, and artists can get paid. It’s a win-win scenario for everyone involved.

If you want to flex that creative muscle, check out TechRadar's list of the best free music-making software for 2024.


OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel, and it’s pretty trippy, to say the least. The video consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used in Sora; Kamp didn’t share that information. But she did explain the inspiration behind the video in its description. She states that when she created the track, she imagined what a video representing Worldweight would look like, but she lacked a way to share her vision. Thanks to Sora, this is no longer an issue, as the footage displays what she had always envisioned. It's “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head”, which was also made with the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios, whose executives are apparently excited at the prospect of AI saving time and money on production.

August Kamp herself is a proponent of the technology stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do; they embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can produce “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.


YouTube Music will finally let you look up tracks just by singing into your phone

It took a little while, but YouTube Music is, at long last, giving users the ability to search for songs just by singing a tune into a smartphone’s microphone.

The general YouTube app has had this feature since mid-October 2023, and judging from images recently found on Reddit, the version in YouTube Music functions in exactly the same way. In the upper-right corner, next to the search bar, is an audio chart icon. Tapping it activates song search, where you then either play, sing, or hum a tune into your device.

Using the power of artificial intelligence, the app will quickly bring up a track that, according to 9To5Google, matches “the sound to the original recording.” The tool’s accuracy may depend entirely on your karaoke skills. 

Missing details

Because there hasn't been an official announcement yet, there are a lot of missing details. For starters, it’s unknown how long you're supposed to sing or hum. The original tool required a three-second input before it could perform a search. Presumably this one will take the same amount of time, but without official word from the platform, it’s hard to say with total confidence.

Online reports claim the update is already available on YouTube Music for iOS. However, 9To5Google states it couldn’t find the feature on either its iPhones or Android devices. Our Android phone didn’t receive the patch either, so it’s probably seeing a limited release at the moment.

We reached out to Google asking if it would like to share official info about YouTube Music’s song search tool, along with a couple of other questions. More specifically, we wanted to know whether the feature is rolling out to everyone or will require a YouTube Music Premium plan. We will update this story if we get answers.

You can't listen to music without a good pair of headphones. For recommendations, check out TechRadar's list of the best wireless headphones for 2024.


Adobe’s new AI music tool could make you a text-to-musical genius

Adobe is getting into the music business as the company is previewing its new experimental generative AI capable of making background tracks.

It doesn’t have an official name yet; for now, the tech is referred to as Project Music GenAI. The way it works, according to Adobe, is that you enter a text prompt describing what you want to hear, be it “powerful rock,” “happy dance,” or “sad jazz”. Additionally, users will be able to upload music files to the generative engine for further manipulation. There will even be some editing tools in the workflow for on-the-fly adjustments.

If any of this sounds familiar to you, that’s because we’ve seen this type of technology multiple times before. Last year, Meta launched MusicGen for creating short instrumentals, and Google opened the doors to its experimental audio engine, Instrument Playground. But what’s different about Adobe’s tool is that it offers easier, yet still robust, editing – as far as we can tell.

Project Music GenAI isn’t publicly available. However, Adobe did recently publish a video on its official YouTube channel showing off the experiment in detail. 

Adobe in concert

The clip primarily follows a researcher at Adobe demonstrating what the AI can do. He starts by uploading the song Habanera from Georges Bizet’s opera Carmen and then proceeds to change the melody via a prompt. In one instance, the researcher instructed Project Music to make Habanera sound like an inspirational film score. Sure enough, the output became less playful and more uplifting. In another example, they gave the song a hip-hop-style accompaniment. 

When it comes to generating fresh content, Project Music can make songs with different tempos and structures, with a clear delineation between the intro, the verse, the chorus, and other parts of the track. It can even create indefinitely looping music for videos, as well as fade-outs for the outro.

No experience necessary

These editing abilities may make Adobe’s Project Music better than Instrument Playground. Google’s engine has its own editing tools, however they’re difficult to use. It seems you need some production experience to get the most out of Instrument Playground. Project Music, on the other hand, aims to be more intuitive.

And if you're curious to know, Meta's MusicGen has no editing tools. To make changes, you have to remake the song from scratch.

In a report by The Verge, Adobe states the current demo utilizes “public domain content” for generation. It’s not totally clear whether people will be able to upload their own files in the final release. Speaking of which, a launch date for Project Music has yet to be revealed, although Adobe will be holding its Summit event in Las Vegas beginning March 26. We’ve reached out to the company asking for more information, and will update this story when we hear back.

In the meantime, check out TechRadar's list of the best audio editors for 2024.
