YouTube’s new AI tool will let you create your dream song with a famous singer’s voice

YouTube is testing a pair of experimental AI tools giving users a way to create short songs either via a text prompt or their own vocal sample.

The first one is called Dream Track, a feature harnessing the voices of a group of nine mainstream artists to generate 30-second music tracks for YouTube Shorts. The way it works is you enter a text prompt describing what you want to hear and then select a singer from the tool’s carousel. Participating musicians include John Legend, Sia, and T-Pain, all of whom gave their consent to be a part of the program. Back in late October, a Bloomberg report made the rounds stating YouTube was working on AI tech allowing content creators “to produce songs using the voices of famous singers”, but couldn’t launch it due to ongoing negotiations with record labels. Dream Track appears to be that self-same AI tool.

YouTube's Dream Track on mobile

(Image credit: YouTube)

For the initial rollout, Dream Track will be available to a small group of American content creators on mobile devices. There’s no word on whether or when it’ll see a wider release or a desktop version.

The announcement post has a couple of videos demonstrating the feature. One of them simulates a user asking the AI to create a song about “a sunny morning in Florida” using T-Pain’s voice. In our opinion, it does a pretty good job of emulating his style and coming up with lyrics on the fly, although the performance does sound like it’s been through an Auto-Tune filter.

Voices into music

The second experiment is called Music AI Tools which, as we alluded to earlier, can generate bite-sized tracks by transforming an uploaded vocal sample. For example, a short clip of you humming can be turned into a guitar riff. It even works in reverse: chords from a MIDI keyboard can be morphed into a choir.

An image on Google’s DeepMind website reveals what the user interface for the Music AI Tools desktop app may look like. At first, we figured the layout would be relatively simple, like Dream Track’s, but it’s a lot more involved.

YouTube's Music AI Tool

(Image credit: Google)

The interface resembles a music editing program with a timeline at the top highlighting the input alongside several editing tools. These presumably would allow users a way to tweak certain elements in a generated track. Perhaps a producer wants to tone down the distortion on a guitar riff or bump up the piano section.

Google says it is currently testing this feature with those in YouTube’s Music AI Incubator program, which is an exclusive group consisting of “artists, songwriters, and producers” from across the music industry. No word on when it’ll see a wide release.

Analysis: Treading new waters

YouTube is pitching this recent foray as a new way for creative users to express themselves, and as a way to empower fledgling musicians who may lack the resources to grow. From the artists’ perspective, however, the attitude is not so positive. The platform compiled a series of quotes from the group of nine singers regarding Dream Track. Several mention the inevitability of generative AI in music and the need to be a part of it, with a few stating they will remain cautious toward the tech.

We may be reading too much into this, but we get the vibe that some aren’t totally on board with this tech. To quote one of our earlier reports, musicians see generative AI as something “they’ll have to deal with or they risk getting left behind.” 

YouTube says it’s approaching the situation with the utmost respect, ensuring “the broader music community” benefits. Hopefully, the platform will maintain its integrity moving forward.

While we have you, be sure to check out TechRadar's list of the best free music-making software for 2023.

TechRadar – All the latest technology news


The Spotify HiFi dream is still alive, as platform plans to do something “unique” someday

No one can blame you if you've given up on Spotify HiFi ever becoming a thing. It’s been two years since the initial announcement. However, all hope is not lost as the streaming service recently confirmed that it’s still working on the high-res audio tier.

This news comes from Spotify co-president Gustav Söderström, who sat down for an interview on The Verge’s podcast, Decoder. Confirming HiFi’s existence was pretty much the only straight answer he gave, as the rest of his responses were vague at best. According to Söderström, the tier is taking so long because the “industry changed and [Spotify] had to adapt”, but he didn’t elaborate any further. He did hint at the cost of HiFi and deals with music labels as two major factors in the delay, and again, didn’t elaborate.

Söderström goes on to say Spotify wants to do something “unique” with HiFi and not “unnecessarily commoditize” itself by “[doing] what everyone else does”. When asked about an expected launch date and support for spatial audio, Söderström remained tight-lipped. There will be a “Spotify HiFi lossless-type experience at some point” in the future, however, that’s all the co-president was willing to divulge.

Söderström’s comment on needing to adapt to a changing industry is arguably the most telling in that whole exchange because it’s emblematic of the company’s recent moves. Pinning the delay of Spotify HiFi on not wanting to copy other platforms is rather ironic if you think about it. For starters, the streaming service is currently rolling out a redesign for its mobile app taking clear inspiration from TikTok. It now sports a vertical discovery feed as a way to encourage people to check out the latest songs or popular podcasts. You even have Spotify incorporating tech from OpenAI in its new DJ feature to simulate a real-life radio DJ. While these additions are great and everything, do users really want the TikTok experience and generative AI? From what we’ve seen, not really.

It appears the platform is more interested in growing its media library than in providing HiFi. Spotify has expanded its podcast content exponentially and added real-time transcriptions. Also, the audiobook feed has a new preview feature that lets users listen to a book for five minutes before purchasing. All this, and still no high-res audio – at least not any time soon. We asked Spotify if it could tell us more about its HiFi tier – anything at all – and will update this story if we hear back.

If you want to get high-res audio, there’s a way to do it with the right set of devices. Be sure to check out TechRadar’s guide on how to buy into high-res audio without the high prices.


ChatGPT touches down on smartwatches – and it looks like a sci-fi dream

ChatGPT continues its march across the tech industry as it reaches a new frontier: smartwatches. Fitness brand Amazfit has revealed it’s going to be adding the generative AI as a feature on its GTR4 device.

According to a recently posted demo on LinkedIn, ChatGPT will be listed as ChatGenius in the GTR4’s menu, and from there, you can ask it whatever you want. The video shows someone asking how they can improve their running performance. In just a few seconds, ChatGenius responds with a several-paragraph answer, which you can read in its entirety by turning the watch’s crown. Tapping the screen erases the previous response so you can ask a new question. You can even ask ChatGenius how your day was, and it’ll tell you how many steps you took plus your current heart rate.

Beyond the demo, there’s very little information out there on how ChatGPT will work on the Amazfit GTR4. Other reports claim you can ask it generic questions, like the weather forecast or traffic conditions, just as on any other smartwatch. It’s also unknown which other Amazfit devices will even get the feature. The video alludes to ChatGPT support depending on the watch model and your location, with the United States being the only confirmed region at the time of this writing.

We reached out to Amazfit about the availability of ChatGPT support as well as what else it can do. Can it, for instance, show different types of data or is it limited to just a few things? This story will be updated if we hear back. 

First-party support

The fact that Amazfit beat the tech giants to adding first-party generative AI support to a smartwatch is a big accomplishment. The closest thing to ChatGPT on something like the Apple Watch is a third-party app called watchGPT. It works in much the same way: you open the app, ask a question, and get a several-paragraph response. However, there are some notable differences.

For starters, you have to pay $3.99 to use watchGPT, whereas Amazfit’s feature is free. However, watchGPT lets you “share the outcome of your interaction” with other people through text, email, or social media messages. It’s unknown whether or not the GTR4 can do the same at this point. Either way, Amazfit has managed to break boundaries before anyone else, and we think it’s only a matter of time before the likes of Apple or Google add first-party generative AI support to their own smartwatches. The tech is already in browsers and search engines, after all.

Be sure to check out TechRadar’s recently updated list of the best cheap smartwatches for the year if you’re in the market for one. 


This Microsoft Edge update is a dream for all you clumsy typists

Spelling errors may soon be a thing of the past for Microsoft Edge users thanks to a new update coming to the software.

The company has revealed it is working on bringing a new “text predictions” feature to its browser that uses Microsoft's own in-house AI and ML technology to offer word suggestions to users.

This feature will initially be available to Windows 10 and Windows 11 users in the Edge Canary Channel, but should be coming to a wider audience soon.

Microsoft Edge text predictions

The change will see Microsoft Edge utilizing a similar process seen in the company's Outlook platform and Microsoft Editor service.

Predictions will be displayed in a greyed-out suggestion box as the user types in Microsoft Edge. Users can accept a suggestion by pressing Tab or the right arrow key – to ignore one, just continue typing and the preview will disappear.

Users can try out the new addition now, but will need to be members of the Edge Canary Channel to do so. There’s no news on a wider release date just yet, but given Microsoft’s track record, the tool should come to market soon.

It's the latest in a series of recent upgrades for Microsoft Edge as the company looks to keep users engaged and away from competitors such as Google Chrome.

This includes the launch of a new “Games” panel in the browser, along with a new twist on the RSS-style Followable Web feature that lets users follow their favorite YouTube creators with the press of a button.

Although Chrome only offers text suggestions in the URL search bar, several other Google tools provide predictive text tools for users.

Autocorrect came to Google Docs back in February 2020, while the company’s Smart Compose tool has been helping users stamp out spelling and grammar mistakes since its launch on Gmail all the way back in 2018.

Smart Compose automatically suggests the next few words of a sentence based on what you've already typed, learning from your writing habits to become more accurate over time.

Via WindowsLatest
