AI music makers face a recording industry 'battle of the bands' that could spell trouble for your AI-generated tunes

Artificial intelligence music makers Suno and Udio have been hit with major copyright infringement lawsuits filed by the Recording Industry Association of America (RIAA) and major music labels. The suits mark the latest battle in the debate over whether generative AI and synthetic media represent original creations or infringements of intellectual property rights.

The RIAA was joined by Sony Music Entertainment, UMG Recordings, Inc., and Warner Records, Inc. in the lawsuits. Suno was sued in the United States District Court for the District of Massachusetts, while Udio developer Uncharted Labs, Inc., was sued in the United States District Court for the Southern District of New York. The complaints allege that both companies have copied and exploited copyrighted sound recordings without permission.

Both Suno and Udio translate text prompts into music, much like other tools can create images or videos from a user’s suggestion. While there are plenty of other music AI developers, Suno and Udio were likely picked because of their relatively successful products. Suno is part of the Microsoft Copilot generative AI assistant, while Udio went viral for the creation of “BBL Drizzy.” The record labels say the music generated by the AI models is not original, just a reworking of copyrighted material. Notably, the groups suing are making an effort to sound like they aren’t against the tech, just how it’s used by these companies.

“The music community has embraced AI and we are already partnering and collaborating with responsible developers to build sustainable AI tools centered on human creativity that put artists and songwriters in charge,” RIAA Chairman and CEO Mitch Glazier said in a statement. “But we can only succeed if developers are willing to work together with us. Unlicensed services like Suno and Udio that claim it’s ‘fair’ to copy an artist’s life’s work and exploit it for their own profit without consent or pay set back the promise of genuinely innovative AI for us all.”

Press pause

This could be a pivotal moment in the fight over music AI, which has been escalating for a while. The viral tracks from Ghostwriter, built on synthetic voice clones of real artists, attest to the growing interest in this technology – and, in the RIAA's view, its danger.

TikTok and YouTube have also been drawn into the fray. Earlier this year, music by UMG artists, including Taylor Swift, was temporarily removed from TikTok due to unresolved licensing issues, partly driven by concerns over AI-generated content. In response to similar issues, YouTube introduced a system last fall to remove AI-generated music upon the request of rights holders. In May, Sony Music issued warnings to hundreds of tech companies about the unauthorized use of copyrighted material, signaling the industry’s proactive stance against unlicensed AI-generated music.

The RIAA wants the courts to rule that Suno and Udio infringed its members' copyrights, make them pay for it, and stop them from continuing to do so. Unsurprisingly, the companies being sued disagree.

“Our technology is transformative, it is designed to generate completely new outputs, not to memorize and regurgitate pre-existing content,” Suno CEO Mikey Shulman said in a statement. “We would have been happy to explain this to the corporate record labels that filed this lawsuit (and in fact, we tried to do so), but instead of entertaining a good faith discussion, they’ve reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.” 

The lawsuit won’t immediately affect Suno and Udio or their customers, barring some unlikely early ruling from the courts. But a legal battle at this level suggests any easy compromise is off the table. The move may speed up the timetable for a regulatory framework and accompanying laws to back it up, however.

Depending on how that goes, people using Suno, Udio, and other AI audio makers may have to remove the music from anything they have published. I wouldn’t stake everything on the current AI music scene staying the same, but the technology will almost certainly still be around regardless of the lawsuit – perhaps with new controls and licensing of any songs used to train AI models.

You might also like…

TechRadar – All the latest technology news

Read More

AI-generated movies will be here sooner than you think – and this new Google DeepMind tool proves it

AI video generators like OpenAI's Sora, Luma AI's Dream Machine, and Runway Gen-3 Alpha have been stealing the headlines lately, but a new Google DeepMind tool could fix the one weakness they all share – a lack of accompanying audio.

A Google DeepMind blog post has revealed a new video-to-audio (or 'V2A') tool that uses a combination of pixels and text prompts to automatically generate soundtracks and soundscapes for AI-generated videos. In short, it's another big step toward fully-automated movie scenes.

As you can see in the videos below, this V2A tech can combine with AI video generators (including Google's Veo) to create an atmospheric score, timely sound effects, or even dialogue that Google DeepMind says “matches the characters and tone of a video”.

Creators aren't just stuck with one audio option either – DeepMind's new V2A tool can apparently generate an “unlimited number of soundtracks for any video input” for any scene, which means you can nudge it towards your desired outcome with a few simple text prompts.

Google says its tool stands out from rival tech thanks to its ability to generate audio purely from pixels – giving it a guiding text prompt is apparently optional. But DeepMind is also very aware of the major potential for misuse and deepfakes, which is why this V2A tool is being ringfenced as a research project – for now.

DeepMind says that “before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing”. It will certainly need to be rigorous, because the ten short video examples show that the tech has explosive potential, for both good and bad.

The potential for amateur filmmaking and animation is huge, as shown by the 'horror' clip below and one for a cartoon baby dinosaur. A Blade Runner-esque scene (below) showing cars skidding through a city with an electronic music soundtrack also shows how it could drastically reduce budgets for sci-fi movies. 

Concerned creators will at least take some comfort from the obvious dialogue limitations shown in the 'Claymation family' video. But if the last year has taught us anything, it's that DeepMind's V2A tech will only improve drastically from here.

Where we're going, we won't need voice actors

The combination of AI-generated videos with AI-created soundtracks and sound effects is a game-changer on many levels – and adds another dimension to an arms race that was already white hot.

OpenAI has already said that it plans to add audio to its Sora video generator, which is due to launch later this year. But DeepMind's new V2A tool shows that the tech is already at an advanced stage and can create audio from video alone, rather than needing endless prompting.

DeepMind's tool works using a diffusion model that combines information from the video's pixels with the user's text prompts, then spits out compressed audio that's decoded into an audio waveform. It was apparently trained on a combination of video, audio, and AI-generated annotations.
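DeepMind hasn't published code for the model, but the data flow it describes – encode the video's pixels, condition on a text prompt, denoise a compressed audio latent, then decode it to a waveform – can be sketched in miniature. Everything below is a toy stand-in with made-up functions and numbers, not DeepMind's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_video(frames):
    # Collapse each frame (H x W x 3) to a small feature vector:
    # a crude stand-in for a learned visual encoder.
    return np.stack([f.mean(axis=(0, 1)) for f in frames])  # (T, 3)

def encode_text(prompt):
    # Toy text embedding: hash characters into a fixed-size vector.
    vec = np.zeros(8)
    for i, ch in enumerate(prompt.encode()):
        vec[i % 8] += ch / 255.0
    return vec

def denoise(latent, video_feat, text_feat, steps=20):
    # Toy "diffusion": iteratively pull a noisy latent toward a
    # conditioning signal derived from the video and text features.
    cond = np.concatenate([video_feat.mean(axis=0), text_feat])
    target = np.resize(cond, latent.shape)
    for _ in range(steps):
        latent = latent + 0.1 * (target - latent)
    return latent

def decode_audio(latent, samples_per_latent=100):
    # "Decode" the compressed audio latent into a waveform by
    # treating each latent value as the amplitude of a sine burst.
    t = np.linspace(0, 1, samples_per_latent)
    bursts = [a * np.sin(2 * np.pi * 220 * t) for a in latent]
    return np.concatenate(bursts)

frames = [rng.random((16, 16, 3)) for _ in range(8)]   # fake video clip
latent = rng.standard_normal(11)                       # noisy audio latent
latent = denoise(latent, encode_video(frames), encode_text("rainy city at night"))
waveform = decode_audio(latent)
print(waveform.shape)  # (1100,)
```

A real V2A system would replace every stage here with a learned network; the point is only the shape of the pipeline, from pixels and prompt to a decoded waveform.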

Exactly what content this V2A tool was trained on isn't clear, but Google clearly has a potentially huge advantage in owning the world's biggest video-sharing platform, YouTube. YouTube's terms of service aren't completely clear on how its videos might be used to train AI, but YouTube CEO Neal Mohan recently told Bloomberg that some creators have contracts that allow their content to be used for training AI models.

Clearly, the tech still has some limitations with dialogue and it's still a long way from producing a Hollywood-ready finished article. But it's already a potentially powerful tool for storyboarding and amateur filmmakers, and hot competition with the likes of OpenAI means it's only going to improve rapidly from here.


OpenAI is working on a new tool to help you spot AI-generated images and protect you from deepfakes

You’ve probably noticed a few AI-generated images sprinkled throughout your social media feeds – and a few more have likely slipped right past your keen eyes.

For those of us who have been immersed in the world of generative AI, spotting AI images is a little easier, as you develop a mental checklist of what to look out for.

However, as the technology gets better and better, it is going to get a lot harder to tell. To solve this, OpenAI is developing new methods to track AI-generated images and prove what has and has not been artificially generated.

According to a blog post, OpenAI’s newly proposed methods will add a tamper-resistant ‘watermark’ that tags content with invisible ‘stickers.’ So, if an image is generated with OpenAI’s DALL-E generator, the classifier will flag it even if the image is warped or saturated.

The blog post claims the tool will have around 98% accuracy when spotting images made with DALL-E. However, it will only flag 5-10% of pictures from other generators like Midjourney or Adobe Firefly.

So, it’s great for in-house images, but not so great for anything produced outside of OpenAI. While it may not be as impressive as one would hope in some respects, it’s a positive sign that OpenAI is starting to address the flood of AI images that are getting harder and harder to distinguish.
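OpenAI hasn't detailed how its watermark actually works, but the general idea of an invisible mark that survives warping can be illustrated with a classic spread-spectrum scheme: add a secret pseudorandom pattern at low amplitude, then detect it later by correlation. This is a generic toy sketch, not OpenAI's method, and the key, strength, and threshold values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
KEY = 7  # secret key shared by embedder and detector (made up)

def watermark_pattern(shape, key):
    # Pseudorandom +/-1 pattern derived from the secret key.
    return np.random.default_rng(key).choice([-1.0, 1.0], size=shape)

def embed(image, key, strength=2.0):
    # Add the pattern at low amplitude: imperceptible to the eye,
    # but recoverable by correlating against the same pattern.
    return image + strength * watermark_pattern(image.shape, key)

def detect(image, key, threshold=1.0):
    # Correlation with the keyed pattern is near zero for clean
    # images and near `strength` for watermarked ones.
    score = (image * watermark_pattern(image.shape, key)).mean()
    return score > threshold

clean = rng.random((256, 256)) * 255
marked = embed(clean, KEY)
# Mild distortion (noise plus clipping) shouldn't break detection.
distorted = np.clip(marked + rng.normal(0, 1, marked.shape), 0, 255)

print(detect(clean, KEY), detect(distorted, KEY))  # False True
```

Production watermarks embed the pattern in perceptually stable transform domains rather than raw pixels; this sketch just shows why simple edits like added noise don't erase the mark.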

Okay, so this may not seem like a big deal to some, as a lot of AI-generated images are harmless memes or high-concept art. That said, there’s also a surge of scenarios where people create hyper-realistic fake photos of politicians, celebrities, people in their lives, and more besides – which could lead to misinformation spreading at an incredibly fast pace.

Hopefully, as these kinds of countermeasures get better and better, the accuracy will only improve, and we can have a much more accessible way to double-check the authenticity of the images we come across in our day-to-day life.


Copilot gets a big redesign and a new way to edit your AI-generated images

It’s been one year since Bing Chat received its generative AI power-up, and we’ve seen it change a lot since then, including a rebranding as Copilot. To celebrate the first anniversary, Microsoft has redesigned Copilot’s homepage and introduced a new editing feature.

The company states that when you visit the AI engine’s desktop website, “you will see… a cleaner, sleeker look”. In the middle of the page is a revolving carousel of sample prompts, each with an accompanying image. Its purpose, according to Microsoft, is to give you an idea of what Copilot is capable of – to get those creative juices flowing. It is certainly more engaging than the previous version, which had three sample text prompts next to each other and no indication that Copilot could create images.

Copilot on mobile is receiving an identical update. The app has the same carousel of sample prompts with a picture above to give you some ideas. You also have the option to toggle GPT-4 for better results. Activating it turns the software’s blue accents to purple. 

Tweaking prompts

As for the feature mentioned earlier, it’s called Designer. It allows you to make tweaks to generated content like highlighting certain aspects, blurring the background, or adding a unique filter. As an example, let’s take Copilot’s suggestion of creating an image of an animal wearing a football helmet. Moving your cursor over the picture makes a bold white line appear around an object. Clicking it highlights the portion. 

A couple of options appear at the bottom of the window. We chose to tell Copilot to make the colors pop. After a few seconds, the finished product appears. You can then either undo the effect or keep it. For filters, you have eight to choose from. Pixel art, block print, and claymation are some of the selections. Like the edits before, applying a filter takes a few seconds. 

[Images 1-3: Copilot highlighting a subject; Copilot making colors pop; Copilot's pixel art filter (Image credit: Future)]

Designer is free for everyone to try out. However, subscribers to Copilot Pro get extra tools: they can resize generated content and regenerate images in either a square or landscape orientation. Microsoft says it will eventually roll out a “Designer GPT” to Copilot. The company calls it a canvas of sorts where people can “visualize [their] ideas.” If we had to take a guess, it could be a publicly available GPT model that you can use to create editing tools. OpenAI offers a similar service with its online store. We reached out to Microsoft for more details. This story will be updated at a later time.

Check out TechRadar's list of the best free drawing software for 2024 if you'd like to find a way to make the edits yourself.


Google’s Instrument Playground offers a taste of an AI-generated musical future

Google has opened the gates to its latest experimental AI, Instrument Playground, which allows people to generate 20-second music tracks with a single text prompt.

If that description sounds familiar, that’s because other companies offer something similar, like Meta with MusicGen. Google’s version adds two unique twists. First, it’s claimed to be capable of emulating over 100 instruments from around the world, ranging from common ones like the piano to more obscure woodwinds like the dizi from China.

Secondly, the company states you can “add an adjective” to the prompt to give the track a certain mood. For example, putting in the word “Happy” will have Instrument Playground generate an upbeat track, while “Merry” will create something more Christmassy. It’s even possible to add effects by choosing one of three modes: Ambient, Beat, and Pitch. For the musically inclined, you can activate Advanced mode to launch a sequencer where you can pull together up to four different AI instruments into one song.

Live demo

The Instrument Playground is publicly available so we decided to take it for a spin.

Upon going to the website, you’ll be asked what you want to play. If you’re having a hard time deciding, there is a link below the prompt that opens a list of 65 different instruments. We said we wanted an upbeat electric guitar, and to our surprise, the AI added backup vocals to the riff – sort of. Most of the lyrics are incomprehensible gibberish although Chrome’s Live Caption apparently picked up the word “Satan” in there.

The generated song plays once (although you can replay it at any time by clicking the Help icon). Afterward, you can use the on-screen keyboard to work on the track. It’s not very expansive, as users only get access to 12 keys centered around the C Major and C Minor scales. What you see on the page is directly tied to the number keys on a computer keyboard, so you can use those instead of slowly clicking each one with a mouse.
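That number-row shortcut boils down to a fixed key-to-pitch table. Google doesn't document the exact notes, so the chromatic-from-middle-C mapping below is an illustrative guess, not the app's real layout:

```python
# Map the 12 number-row keys to 12 semitones starting at middle C
# (MIDI note 60). The real Instrument Playground layout may differ.
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]
KEY_ROW = "1234567890-="  # 12 physical keys on the number row

def key_to_note(key):
    """Return (note name, MIDI number) for a number-row key press."""
    semitone = KEY_ROW.index(key)  # raises ValueError for other keys
    return NOTE_NAMES[semitone], 60 + semitone

print(key_to_note("1"))  # ('C', 60)
print(key_to_note("8"))  # ('G', 67)
```

Swapping in the app's actual scale would just mean editing the two lookup tables.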

[Image: Instrument Playground example (Image credit: Future)]

You can use the three modes mentioned earlier to manipulate the file. Ambient lets you alter the track as a whole, Beat highlights what the AI considers to be the “most interesting peaks”, and Pitch can alter the length of a select portion. Users can even shift the octave higher or lower. Be aware the editing tools are pretty rudimentary. This isn’t GarageBand.

Upon finishing, you can record an audio snippet which you can then download as a .wav file to your computer. 

In the works

If you’re interested in trying out Instrument Playground, keep in mind this is an experimental technology that is far from perfect. We’re not musicians, but even we could tell there were several errors in the generated music. Our drum sample had a piano playing in the back and the xylophone sounded like someone hitting a bunch of PVC pipes. 

We reached out to Google with several questions, such as when the AI will support 100 instruments (remember, it’s only at 65 at the time of this writing) and what the company intends to do with it. Right now, Instrument Playground feels like little more than a digital toy, only capable of creating simple beats. It'd be great to see it do more. This story will be updated at a later time.

While we have you, be sure to check out TechRadar's list of the best free music-making software in 2023


Microsoft Edge may introduce a new AI-generated writing feature – and that makes me nervous

Microsoft Edge could drastically change the way we interact with written content on the web with a new AI writing feature that copies existing content and regurgitates the information in a more ‘personal’ tone at the click of a button. 

According to Windows Latest, the GPT-4-powered feature allows users to select text on a webpage and have it rewritten in a tone and length of their choice. Microsoft Edge’s AI offers customizable tones like professional, casual, enthusiastic and informational, as well as format options that include a simple paragraph, email or blog post layout. 

The feature is integrated into the browser itself, allowing more users to access it much more quickly. This is helpful if you want to generate ideas or make a quick change to the tone.

So far Microsoft has been testing the feature with a small group of users in the Canary version of Chromium Edge, so we’ll have to wait and see if the feature ends up making its way officially to Microsoft Edge. 

It’s not all bad is it? 

I don’t mean to harp on about the doom and gloom aspect of a feature like this, but we do have to think about the negatives of AI before the positives because once the technology is out there, it can’t be taken back.

Microsoft Edge’s AI could allow more people to break into blog writing – people who may feel a little nervous about getting their work out there without any mistakes in the copy. It would be useful while you’re researching and looking for a springboard for ideas, and would help write boring but important emails without too much effort.

However, because the tool is web-integrated and uses text on the web, it’ll become virtually impossible to detect whether a person's blog, email, or pitch has been plagiarized and AI-generated. Anyone could feed the tool a site’s copy, quickly alter it slightly, and pass it off as their own without any of the skill or hard work that goes into actually writing their own work.

Microsoft’s efforts to cram artificial intelligence into its products as quickly as possible, particularly after the success of Bing AI, could have some unforeseen repercussions if it’s not careful. We can only hope that if Edge's AI writer does make its debut, it proves me wrong and stays a writing tool, not a crutch.


Google wants you to send AI-generated poems using its strange digital postcards

Google has redesigned its little-known Arts & Culture app, introducing new features plus an improved layout for easier exploration.

We wouldn’t blame you if you weren’t aware that Arts & Culture existed in the first place. It is a pretty niche mobile app aimed at people who want to learn more about the art world and its history. It looks like Google is attempting to attract a bigger audience by making the Android app more “intuitive to explore… [while also] creating new ways to discover and engage with culture.” Leading the charge, so to speak, is the AI-powered Poem Postcards tool. Utilizing the company’s PaLM 2 model, the tool asks you to select a famous art piece and then choose from a variety of poetic styles (sonnets, limericks, and ballads, to name a few) in order to create an AI-generated poem.

[Image: Poem Postcards on Google Arts & Culture (Image credit: Google)]

After a few seconds, you can share your generated work with friends or have the AI write something new. We should mention that you can access Poem Postcards on desktop via the Arts & Culture website, although it's labeled “experimental”, so it may not work as well as its mobile counterpart.

Endless art feed

The other major feature is the new Inspire section, which uses an endless scrolling feed akin to TikTok's. It brings up a series of art pieces with the occasional cultural news story and exhibition advertisement stuffed in between. The app doesn’t just focus on paintings or sculptures, either, as the feed will throw in the occasional post about movies, too.

In the bottom right-hand corner of Inspire entries is a “cultural flywheel”. Tapping it opens a menu where you can discover tangentially related content. Google states it is “always investigating new ways to connect cultural content” meaning the flywheel will see its own set of updates over time.

As for the layout, the company has added buttons on the Explore tab for specific topics. If you want to look for art pertaining to sports, science, or even your favorite color, it’s all at your fingertips. There’s also a Play tab on the bottom bar where you can enjoy games like the adorable Return of the Cat Mummy.

[Image: the Arts & Culture app's new layout (Image credit: Google)]

The redesigned Arts & Culture app is currently available on Android through the Google Play Store with an iOS version “soon to follow”. The company says Poem Postcards is only available “in select countries”. We reached out to the tech giant for clarification. This story will be updated at a later time.

Be sure to check out TechRadar's list of the best drawing apps for 2023 if you ever decide to scratch that artistic itch.  


The voter’s guide to AI-generated election misinformation

We live in a time when AI-driven tech is starting to take shape in a real, tangible way, and our human cognitive faculties may come in clutch in ways we don’t even immediately realize.

Multiple outlets and digital experts have put forth concerns about the upcoming 2024 US election (a traditionally very human affair) and the perpetual surge of information – and misinformation – driven by generative AI. We’ve seen recent elections in many countries happen in tandem with the formation of rapidly-growing pockets of users on social media platforms where misinformation can spread like wildfire.

These groups rapidly share information from dubious sources and questionable figures, false or incorrectly contextualized information from foreign agents or organizations, and misinformation from straight-up bogus news sites. In not-so-distant memory, we’ve witnessed the proliferation of conspiracy theories and efforts to discredit the outcomes of elections based on claims that have been proven false.

The upcoming 2024 US presidential race looks set to join that list, given how easy content generation has become in our AI-aided era.

The misinformation sensation

Experts in the field have said as much: AI-generated content that looks and sounds human is already saturating all kinds of content spaces. This adds to the work it takes to sort through and curate the sheer amount of information online – work that already depends on how much reading and critical thought a user is willing to do in the first place.

Such a sentiment is expressed by Ben Winters, senior counsel at the Electronic Privacy Information Center, a non-profit privacy research organization. “It will have no positive effects on the information ecosystem,” he says, adding that it will continue to lower users’ trust in content they find online.

Manipulated images and other specifically-formulated media aren’t a new phenomenon – photoshopped pictures, impersonating emails, and robocalls are commonly found in our everyday lives. One huge issue with these – and other novel forms of misinformation – is how much easier it’s become to make such content.

[Image: ChatGPT has become incredibly easy to access – and abuse. (Image credit: Shutterstock / Tada Images)]

The ease of lying

Not only that, but it’s also become easier to target both specific groups and even specific individuals thanks to AI. With the right tools, it’s now possible to generate highly-tailored content much more efficiently.

If you’ve been following the stories of the development and public debut of AI tools like those developed by OpenAI, you already know that AI-assisted software can create audio based on pre-existing voice input, put together fairly convincing text in all types of tones and styles, and generate images of nearly anything you ask it to. It’s not difficult to imagine these faculties being used to make politically-motivated content of all kinds.

You need only a little technical literacy to engage with such tools, but otherwise, anyone’s targeted propaganda wish is AI’s command. While AI detection tools already exist and continue to be developed, they’ve demonstrated markedly mixed effectiveness.

One extra wrinkle in all this, as Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, points out, is that tools like ChatGPT and Google Bard are built on Large Language Models (LLMs) trained on an immense quantity of online data. As far as the public knows, there’s no process for picking through and verifying the accuracy of any one bit of that information, so misinformation and false claims get folded in.

[Image: OpenAI recently shut down its own AI detection program, AI Classifier – but do companies creating AI tools have a moral responsibility to help separate man from machine? (Image credit: Shutterstock.com / rafapress)]

Fighting the bots

There have also been some reactive efforts by certain countries to bring forward legislation addressing issues like these, and the tech companies running these services have put some safeguarding measures in place.

Is it enough, though? I’m probably not alone in my hesitation to put my worries in this regard to rest, especially considering multiple countries have major elections coming up in the next year.

One such instance where there is a particular concern, highlighted by Panditharatne, is around swathes of content being generated and used to bombard people in order to discourage them from voting. As I mentioned above, it’s possible to automate large amounts of authentic-sounding material to this end, and this could convince someone that they are not able to (or simply shouldn’t) vote. 

That said, reacting may still not be all that effective. While it’s better than not addressing it at all, our memories and attentions are fickle things. Even if we see information that may be more correct or accurate, once we have an initial impression and opinion, it can be hard for our brains to accept it. “The exposure to the initial misinformation is hard to overcome once it happens,” says Chenhao Tan, an assistant professor of computer science at the University of Chicago. 

What can we do about it? 

Content that AI tools have spat out has already spread virally on social media platforms, and the American Association of Political Consultants has cautioned about the “threat to democracy” presented by AI-aided means like deepfaked videos. AI-generated videos and imagery have already been released by the likes of GOP presidential candidate Ron DeSantis and the Republican National Committee.

Darrell West from the Center for Technology Innovation, a think tank in Washington D.C., expects to see an increase in AI-created videos, audio, and images to paint political opponents in a bad light. He expressed concerns that voters might “take such claims at face value” and make voting decisions based on false information. 

[Image: a recent ‘attack ad’ run by Republican presidential hopeful Ron DeSantis featured the voice of Donald Trump – but it was in fact AI-generated. (Image credit: Alex Wong/Getty Images)]

So, now that I’ve loaded your plate with doom and gloom (sorry), what are we to do? Well, West recommends that you make an extra effort to consult a variety of media sources and double-check the veracity of claims, especially bold, decisive statements. He recommends that you “examine the source and see if it is a credible source of information.” 

Heather Kelly of the Washington Post has also written a longer guide on how to critically examine what you're consuming, especially political material. She recommends starting with your own judgment: consider whether what you're consuming is an opportunity for misinformation in the first place, and why; take your time to actually process and reflect on what you're reading, watching, or looking at; and save sources you find helpful and informative so you can build a collection to consult as developments occur.

In the end, it’s as it always has been: the last bastion against misinformation is always you, the reader, the voter. Although AI tools have made it easier to manufacture falsehoods, it’s ultimately up to us to verify that what we read is fact, not fiction. Bear that in mind the next time you’re watching a political ad – it only takes a minute to do your own research online.


Even OpenAI can’t tell the difference between original content and AI-generated content – and that’s worrying

OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it had developed for detecting content created by AI rather than humans. ‘AI Classifier’ has been scrapped just six months after its launch – apparently due to a ‘low rate of accuracy’, says OpenAI in a blog post.

ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives, with a slew of rival services and copycats. Of course, the flood of AI-generated content does bring up concerns from multiple groups surrounding inaccurate, inhuman content pervading our social media and newsfeeds.

Educators in particular are troubled by the ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears not just within education but in wider spheres like corporate workspaces, medical fields, and coding-intensive careers. The idea behind the tool was that it should be able to determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.

Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a prominent fault: being wrong in both directions. Students and faculty have gone to Reddit to protest the inaccurate results, with students saying their own original work is being flagged as AI-generated content, and faculty complaining about AI work passing through the detector unflagged.


It is an incredibly troubling thought: the idea that the makers of ChatGPT can no longer differentiate between what is a product of their own tool and what is not. If OpenAI can’t tell the difference, then what chance do we have? Is this the beginning of a misinformation flood, in which no one will ever be certain if what they read online is true? I don’t like to doomsay, but it’s certainly worrying.

TechRadar – All the latest technology news

Read More

Amazon has a big problem as AI-generated books flood Kindle Unlimited

Alongside the impressive demonstrations and the alarm that have accompanied the dawn of text-generating chatbots, we’re now seeing some of the more questionable and less desirable outcomes start to materialize.

Authors and several news outlets have recently reported a significant uptick in AI-generated books appearing on multiple best-seller lists, many of them seemingly nonsensical.

Self-publishing, such as via Amazon’s Kindle Direct Publishing program, has become a way for many genuine authors to bring their work to the public and build a following without the help of a large publisher. Because these self-publishing tools are deliberately easy to sign up for, it seems anyone can generate endless AI-written books, upload them to Amazon’s eBook store, and make them available to read via Kindle Unlimited.

Recently, indie author Caitlyn Lynch tweeted that only 19 of the top 100 best sellers in Amazon’s Teen & Young Adult Contemporary Romance eBooks chart were legitimate books. The rest were nonsensical, incoherent, and seemingly AI-generated.


Motherboard later looked into dozens of books on the platform and found that, a few days after Lynch’s tweets, the AI books had vanished from the best-seller lists, presumably removed by Amazon.

They were, however, still available for purchase, and had enjoyed significant visibility before vanishing. As Lynch very plausibly speculates, the mass uploading of AI-generated books could be used to facilitate click-farming, where bots automatically ‘read’ through a book to generate royalties from Amazon Kindle Unlimited, which pays authors by the number of pages read in an ebook. So it doesn’t matter that these books disappear: the people running such a scheme can simply upload as many new ones as they like to replace those removed.

A major concern quickly emerges for both authors and readers: most readers are, at least for now, looking for books written by human authors, and this flood makes those books harder to find. Lynch, elaborating on her views in a Twitter thread, warned that this “will … be the death knell for Kindle Unlimited” if Amazon cannot contain it.

Amazon Warehouse

(Image credit: Amazon)

What is Amazon doing about it?

Motherboard reached out to Amazon and received a reply stating that the company has “clear guidelines” for which books can be listed for sale and investigates when concerns are raised, in order to protect both readers and authors. It didn’t explicitly say it was doing anything specifically to address the persistent, spam-like uploading of nonsensical AI-generated books. Amazon would do well to tackle the issue actively: readers need reassurance that it’s worth continuing to support authors via ebook sales and page views (which earn authors royalties on Kindle Unlimited), and authors need reassurance that it’s worth putting their work on sale on Amazon.

We’ve also contacted Amazon to find out what it is doing about this, and we’ll update this story when we hear back.

AI-generated and AI-assisted books aren’t entirely new; they followed quite quickly after the debut of text and image generation tools such as ChatGPT and Midjourney. Such books were already contentious, as many artists and authors felt they devalued the work it takes to put together, write, and publish a book.

Furthermore, AI generators work by scraping huge amounts of visual and text content from the internet – some of which the creators of this content never consented to. 

The mass flooding of best-seller lists with nonsensical books will only intensify these concerns about quality control and authenticity. It’s not clear why there is such a boom in AI-generated books on best-seller lists, but many speculate that bot-farming is behind it, with large numbers of books automatically generated and published. In my opinion, if that’s the case then it’s up to Amazon to address the problem, as authors and readers don’t have the technical means to counteract such operations.

AI Danger

(Image credit: Getty Images)

Not just about plagiarism

Chris Cowell, a software developer, told the Washington Post about one such instance, in which an AI-generated book plagiarizing his work was sold on Amazon. AI is still lifting work from human authors, which raises concerns about plagiarism and copyright infringement – but there’s also the matter of AI text generators spitting out misinformation.

One AI-written book can then draw on text from another AI-written book, without any fact-checking; especially in the case of non-fiction, this creates a worrying feedback loop that spreads misinformation and makes it hard to pin down the origin of a claim.

For now, Amazon may get better at removing AI-generated nonsense as it appears, but greater efforts are needed. As of May 2023, Amazon’s Kindle Direct Publishing didn’t require sellers to disclose whether a book had been written (or illustrated) with the help of AI generators such as ChatGPT or Midjourney.

There's also a big problem that continues to plague Amazon and other online marketplaces across all kinds of products, and books are no exception: fake reviews. AI text generators make this worse by making it easier to flood a review section, in both content and quantity. With Prime Day coming up, make sure you check out our guide on how to spot fake reviews on Amazon.

Unfortunately, along with all the positive new things that are possible with AI generators, inevitably, they can also be misused. Hopefully, Amazon acknowledges the growing concerns coming from both authors and readers, and makes efforts that help set a precedent for protecting human-created works – and their audiences.
