Google has opened the gates to its latest experimental AI, Instrument Playground, which lets people generate 20-second music tracks from a single text prompt.
If that description sounds familiar, that’s because other companies offer something similar, like Meta with MusicGen. Google’s version adds two unique twists. First, it’s claimed to be capable of emulating over 100 instruments from around the world, from common ones like the piano to more obscure woodwinds like the dizi from China.
Secondly, the company states you can “add an adjective” to the prompt to give the track a certain mood. For example, entering the word “Happy” will have Instrument Playground generate an upbeat track, while “Merry” will create something more Christmassy. It’s even possible to apply sound effects by choosing one of three modes: Ambient, Beat, and Pitch. For the musically inclined, you can activate Advanced mode to launch a sequencer where you can pull together up to four different AI instruments into one song.
Upon going to the website, you’ll be asked what you want to play. If you’re having a hard time deciding, there is a link below the prompt that opens a list of 65 different instruments. We said we wanted an upbeat electric guitar, and to our surprise, the AI added backup vocals to the riff – sort of. Most of the lyrics are incomprehensible gibberish although Chrome’s Live Caption apparently picked up the word “Satan” in there.
The generated song plays once (although you can replay it at any time by clicking the Help icon). Afterward, you can use the on-screen keyboard to work on the track. It’s not very expansive: users only get access to 12 keys centered around the C major and C minor scales. The keys on the page map directly to the number keys on a computer keyboard, so you can use those instead of slowly clicking each one with a mouse.
You can use the three modes mentioned earlier to manipulate the file. Ambient lets you alter the track as a whole, Beat highlights what the AI considers to be the “most interesting peaks”, and Pitch can alter the length of a selected portion. Users can even shift the octave higher or lower. Be aware that the editing tools are pretty rudimentary. This isn’t GarageBand.
Upon finishing, you can record an audio snippet which you can then download as a .wav file to your computer.
In the works
If you’re interested in trying out Instrument Playground, keep in mind this is an experimental technology that is far from perfect. We’re not musicians, but even we could tell there were several errors in the generated music. Our drum sample had a piano playing in the back and the xylophone sounded like someone hitting a bunch of PVC pipes.
We reached out to Google with several questions, such as when the AI will support 100 instruments (remember, it’s only at 65 at the time of this writing) and what the company intends to do with it. Right now, Instrument Playground feels like little more than a digital toy, only capable of creating simple beats. It'd be great to see it do more. This story will be updated at a later time.
Microsoft Edge could drastically change the way we interact with written content on the web, thanks to a new AI writing feature that copies existing content and regurgitates it in a more ‘personal’ tone at the click of a button.
According to Windows Latest, the GPT-4-powered feature allows users to select text on a webpage and have it rewritten in a tone and length of their choice. Microsoft Edge’s AI offers customizable tones like professional, casual, enthusiastic and informational, as well as format options that include a simple paragraph, email or blog post layout.
The feature is integrated into the browser itself, letting more users access it more quickly. This is helpful if you want to generate ideas or make a quick change to the tone.
So far Microsoft has been testing the feature with a small group of users in the Canary version of Chromium Edge, so we’ll have to wait and see if the feature ends up making its way officially to Microsoft Edge.
It’s not all bad, is it?
I don’t mean to harp on about the doom and gloom aspect of a feature like this, but we do have to think about the negatives of AI before the positives because once the technology is out there, it can’t be taken back.
Microsoft Edge’s AI could help people break into blog writing who might otherwise feel nervous about getting their work out there with mistakes in the copy. It would be useful while you’re researching and looking for a springboard for ideas, and would help write boring but important emails without too much effort.
However, because the tool is web-integrated and uses text on the web, it’ll become virtually impossible to detect whether a person's blog, email, or pitch has been plagiarized and AI-generated. Anyone could feed the tool a site’s copy, alter it slightly in seconds, and pass it off as their own without any of the skill or hard work that goes into actually writing their own work.
Microsoft’s efforts to cram artificial intelligence into its products as quickly as possible, particularly after the success of Bing AI, could have some unforeseen repercussions if it’s not careful. We can only hope that if the Edge AI writer does make its debut, it proves me wrong and stays a writing tool, not a crutch.
Google has redesigned its little-known Arts & Culture app, introducing new features plus an improved layout for easier exploration.
We wouldn’t blame you if you weren’t aware that Arts & Culture even existed in the first place. It is a pretty niche mobile app aimed at people who want to learn more about the art world and its history. It looks like Google is attempting to attract a bigger audience by making the Android app more “intuitive to explore… [while also] creating new ways to discover and engage with culture.” Leading the charge, so to speak, is the AI-powered Poem Postcards tool. Utilizing the company’s PaLM 2 model, the tool asks you to select a famous art piece and then choose from a variety of poetic styles (sonnets, limericks, and ballads, just to name a few) to create an AI-generated poem.
After a few seconds, you can share your generated work with friends or have the AI write up something new. We should mention you can access Poem Postcards on your desktop via the Arts & Culture website, although it appears to be “experimental”, so it may not work as well as its mobile counterpart.
Endless art feed
The other major feature is the new Inspire section, which offers an endless scrolling feed akin to TikTok. It serves up a series of art pieces with the occasional cultural news story and exhibition advertisement stuffed in between. The app doesn’t just focus on paintings and sculptures either, as the feed will throw in the occasional post about movies, too.
In the bottom right-hand corner of Inspire entries is a “cultural flywheel”. Tapping it opens a menu where you can discover tangentially related content. Google states it is “always investigating new ways to connect cultural content” meaning the flywheel will see its own set of updates over time.
As for the layout, the company has added buttons on the Explore tab for specific topics. If you want to look for art pertaining to sports, science, or even your favorite color, it’s all at your fingertips. There’s also a Play tab on the bottom bar where you can enjoy games like the adorable Return of the Cat Mummy.
The redesigned Arts & Culture app is currently available on Android through the Google Play Store with an iOS version “soon to follow”. The company says Poem Postcards is only available “in select countries”. We reached out to the tech giant for clarification. This story will be updated at a later time.
We live in a time when AI-driven tech is starting to take shape in a real, tangible way and our human cognitive faculties may come in clutch in many ways we don’t even immediately realize.
Multiple outlets and digital experts have put forth concerns about the upcoming 2024 US election (a traditionally very human affair) and the perpetual surge of information – and misinformation – driven by generative AI. We’ve seen recent elections in many countries happen in tandem with the formation of rapidly-growing pockets of users on social media platforms where misinformation can spread like wildfire.
These groups rapidly share information from dubious sources and questionable figures, false or incorrectly contextualized information from foreign agents or organizations, and misinformation from straight-up bogus news sites. In not-so-distant memory, we’ve witnessed the proliferation of conspiracy theories and efforts to discredit the outcomes of elections based on claims that have been proven false.
The upcoming 2024 US presidential race looks set to follow suit in this respect, given the ease of content generation in our present AI-aided content era.
The misinformation sensation
Experts in the field have said as much: AI-generated content that looks and sounds human is already saturating all kinds of content spaces. This adds to the work it takes to sort through and curate the sheer amount of information and data online, a task that also depends on how much or how little reading and understanding a user is willing to do in the first place.
Such a sentiment is expressed by Ben Winters, senior counsel at the Electronic Privacy Information Center, a non-profit privacy research organization. “It will have no positive effects on the information ecosystem,” he says, adding that it will continue to lower users’ trust in content they find online.
Manipulated images and other specifically-formulated media aren’t a new phenomenon – photoshopped pictures, impersonating emails, and robocalls are commonly found in our everyday lives. One huge issue with these – and other novel forms of misinformation – is how much easier it’s become to make such content.
The ease of lying
Not only that, but it’s also become easier to target both specific groups and even specific individuals thanks to AI. With the right tools, it’s now possible to generate highly-tailored content much more efficiently.
If you’ve been following the stories of the development and public debut of AI tools like those developed by OpenAI, you already know that AI-assisted software can create audio based on pre-existing voice input, put together fairly convincing text in all types of tones and styles, and generate images of nearly anything you ask it to. It’s not difficult to imagine these faculties being used to make politically-motivated content of all kinds.
You need only a little technical literacy to engage with such tools, but otherwise, anyone’s targeted propaganda wish is AI’s command. While AI detection tools already exist and continue to be developed, they’ve demonstrated markedly mixed effectiveness.
One extra wrinkle in all this, as Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, points out, is that large language models (LLMs) such as ChatGPT and Google Bard are trained on an immense quantity of online data. As far as the public knows, there’s no process for picking through and verifying the accuracy of any one piece of information, so misinformation and false claims get folded in.
Fighting the bots
Some countries have made reactive efforts to bring forth legislation addressing issues like these, and the tech companies running these services have put some safeguarding measures in place.
Is it enough, though? I’m probably not alone in my hesitation to put my worries in this regard to rest, especially considering multiple countries have major elections coming up in the next year.
One particular concern, highlighted by Panditharatne, is swathes of content being generated and used to bombard people in order to discourage them from voting. As I mentioned above, it’s possible to automate large amounts of authentic-sounding material to this end, and this could convince someone that they are not able to (or simply shouldn’t) vote.
That said, reacting may still not be all that effective. While it’s better than not addressing the problem at all, our memories and attention are fickle things. Even if we later see information that is more correct or accurate, once we have formed an initial impression and opinion, it can be hard for our brains to let go of it. “The exposure to the initial misinformation is hard to overcome once it happens,” says Chenhao Tan, an assistant professor of computer science at the University of Chicago.
Darrell West from the Center for Technology Innovation, a think tank in Washington D.C., expects to see an increase in AI-created videos, audio, and images to paint political opponents in a bad light. He expressed concerns that voters might “take such claims at face value” and make voting decisions based on false information.
So, now that I’ve loaded your plate with doom and gloom (sorry), what are we to do? Well, West recommends that you make an extra effort to consult a variety of media sources and double-check the veracity of claims, especially bold, decisive statements. He recommends that you “examine the source and see if it is a credible source of information.”
Heather Kelly of the Washington Post has also written a longer guide on how to critically examine what you are consuming, especially with respect to political material. She recommends starting with your own judgment, considering whether what you’re consuming is an opportunity for misinformation in the first place (and why), taking your time to actually process and reflect on what you’re reading, watching, or looking at, and saving sources you find helpful and informative to build up a collection you can consult as developments occur.
In the end, it’s as it always has been: the last bastion against misinformation is always you, the reader, the voter. Although AI tools have made it easier to manufacture falsehoods, it’s ultimately up to us to verify that what we read is fact, not fiction. Bear that in mind the next time you’re watching a political ad – it only takes a minute to do your own research online.
OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it had developed for detecting content created by AI rather than humans. The ‘AI Classifier’ has been scrapped just six months after its launch, apparently due to a ‘low rate of accuracy’, says OpenAI in a blog post.
ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives, with a slew of rival services and copycats. Of course, the flood of AI-generated content does bring up concerns from multiple groups surrounding inaccurate, inhuman content pervading our social media and newsfeeds.
Educators in particular are troubled by the different ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears not just within education but in wider spheres like corporate workspaces, medical fields, and coding-intensive careers. The idea behind the tool was that it should be able to determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.
Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a prominent fault: being wrong in both directions. Students and faculty have taken to Reddit to protest the inaccurate results, with students stating their own original work is being flagged as AI-generated content, and faculty complaining about AI work passing through these detectors unflagged.
It is an incredibly troubling thought: the idea that the makers of ChatGPT can no longer differentiate between what is a product of their own tool and what is not. If OpenAI can’t tell the difference, then what chance do we have? Is this the beginning of a misinformation flood, in which no one will ever be certain if what they read online is true? I don’t like to doomsay, but it’s certainly worrying.
Along with the impressive demonstrations and sounds of alarm that have come with the dawn of generative-text chatbots, we’re also now seeing some of the more questionable and perhaps less desirable outcomes starting to materialize.
Authors and several news outlets have recently reported a significant uptick in AI-generated books showing up in multiple best-seller lists, many of them seemingly nonsense.
Self-publishing, such as via Amazon’s Kindle Direct Publishing program, has become a way for many genuine authors to bring their work to the public and build a following without the help of a large publisher. Because these self-publishing capabilities are purposely easy to sign up for, it seems anyone can generate endless AI-written books and upload them to be sold on Amazon’s eBook store and make them available for reading via Kindle Unlimited.
Recently, an indie author, Caitlyn Lynch, tweeted about noticing that only 19 of the best sellers in the Teen & Young Adult Contemporary Romance eBooks top 100 chart on Amazon were real, legit books. The rest were nonsensical and incoherent, and seemingly AI-generated.
“The AI bots have broken Amazon. Take a look at the Best Sellers in Teen & Young Adult Contemporary Romance eBooks top 100 chart. I can see 19 actual legit books. The rest are AI nonsense clearly there to click farm. @AmazonKDP what are you doing about it?” – June 26, 2023
The Motherboard website later looked into dozens of books on the platform and saw that, a few days after Lynch’s tweets, the AI books had vanished from the best-seller lists, probably removed by Amazon.
They were, however, still available for purchase, and had enjoyed a significant amount of visibility before vanishing. Also, as Lynch very understandably speculates, the mass uploading of AI-generated books could be used to facilitate click-farming, where 'bots' click through a book automatically, generating royalties from Amazon Kindle Unlimited, which pays authors by the number of pages read in an ebook. So it doesn’t matter that these books disappear: the people running such a scheme could simply upload as many as they like to replace the removed ones.
A major concern quickly emerges for both authors and readers – most of us readers are seeking out books that, at least for now, are written by human authors, and this flood makes those books harder to find. Lynch, elaborating on her views in a Twitter thread, emphasized that this “will … be the death knell for Kindle Unlimited” if Amazon cannot contain it.
What is Amazon doing about it?
Motherboard reached out to Amazon and received a reply stating that it had “clear guidelines” for which books can be listed for sale, and that it would investigate when concerns are raised in order to protect both readers and authors. It didn’t explicitly state that it was making an effort to address the apparent spam-like persistent uploading of nonsensical, incoherent AI-generated books. It would be worth Amazon taking an active approach to rectify this issue: to reassure readers that it’s worth continuing to support authors via ebook sales and page views (which result in royalties for authors on Kindle Unlimited), and to reassure authors that it’s worth putting their work on sale on Amazon.
We’ve also contacted Amazon to find out what it is doing about this, and we’ll update this story when we hear back.
AI-generated and AI-assisted books aren’t totally new, and followed quite quickly after the debut of text- and image-generation AI tools such as ChatGPT and Midjourney. These books were already contentious, as many artists and authors felt that such generated books denigrated the work it takes to put together, write, and publish a book.
Furthermore, AI generators work by scraping huge amounts of visual and text content from the internet – some of which the creators of this content never consented to.
Mass-flooding of best-seller lists with nonsensical books will only intensify these concerns about quality control and authenticity. It’s not clear why there is such a boom in AI-generated books appearing in best-seller lists, but many speculate that it’s due to bot-farming, where large numbers of books are automatically generated and published. In my opinion, if this is the case, then it’s definitely up to Amazon to address the problem, as authors and readers don’t have the technical capabilities to counteract such operations.
This scraping can then lead to one AI-written book using text from another AI-written book without any fact-checking, and (especially in the case of non-fiction books) a worrying feedback loop is created that spreads misinformation and makes it hard to pin down the origin of statements.
For now, maybe Amazon will optimize its process of removing AI-generated nonsensical content as it appears, but greater efforts are needed. As of May 2023, Amazon’s Kindle Publishing didn’t require sellers to disclose if the book had been written (or illustrated) with the help of AI generators such as ChatGPT or Midjourney.
There's also a big problem that continues to plague Amazon and other online marketplaces across a multitude of products, and books are no exception: fake reviews. AI text generators make this worse by making it easier to flood a review section with both convincing content and sheer quantity. With Prime Day coming up, make sure you check out our guide on how to spot fake reviews on Amazon.
Unfortunately, along with all the positive new things that are possible with AI generators, inevitably, they can also be misused. Hopefully, Amazon acknowledges the growing concerns coming from both authors and readers, and makes efforts that help set a precedent for protecting human-created works – and their audiences.