OpenAI is working on a new tool to help you spot AI-generated images and protect you from deep fakes

You’ve probably noticed a few AI-generated images sprinkled throughout your social media feeds – and there are likely a few that slipped right past your keen eyes.

For those of us who are immersed in the world of generative AI, spotting AI images is a little easier: you develop a mental checklist of what to look out for.

However, as the technology gets better, it’s going to get a lot harder to tell the difference. To address this, OpenAI is developing new methods to track AI-generated images and prove what has and hasn’t been artificially generated.

According to a blog post, OpenAI’s proposed methods will add a tamper-resistant ‘watermark’, tagging content with invisible ‘stickers’. So, if an image is generated with OpenAI’s DALL-E generator, the classifier will flag it even if the image is warped or saturated.

The blog post claims the tool spots images made with DALL-E with around 98% accuracy. However, it will only flag 5-10% of pictures from other generators like Midjourney or Adobe Firefly.
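To make those numbers concrete, here’s a minimal, hypothetical sketch of how a provenance check built on such a classifier might be wrapped. OpenAI hasn’t published an API for this tool, so the `score_image` stub, function names, and threshold below are all assumptions for illustration only.

```python
# Hypothetical sketch only – OpenAI hasn't released this classifier,
# so score_image() is a stand-in, not a real API.

def score_image(image_bytes: bytes) -> float:
    """Stand-in for the classifier: returns a confidence in [0, 1] that
    the image carries a DALL-E provenance watermark. A real detector
    would decode an invisible, tamper-resistant signal embedded at
    generation time, one designed to survive warping or saturation."""
    return 0.0  # placeholder score

def flag_as_ai_generated(image_bytes: bytes, threshold: float = 0.5) -> bool:
    """Flag an image when the classifier's confidence clears a threshold.

    Per the reported figures, a detector like this catches ~98% of
    DALL-E images but only 5-10% of images from other generators, so a
    negative result says very little about non-DALL-E content."""
    return score_image(image_bytes) >= threshold

print(flag_as_ai_generated(b"\x89PNG..."))  # dummy bytes, prints False
```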

So, it’s great for in-house images, but not so great for anything produced outside of OpenAI. While it may not be as impressive as one would hope in some respects, it’s a positive sign that OpenAI is starting to address the flood of AI images that are getting harder and harder to distinguish.

Okay, so this may not seem like a big deal to some, as a lot of AI-generated images are either memes or high-concept art that’s pretty harmless. That said, there’s also a surge of scenarios where people are creating hyper-realistic fake photos of politicians, celebrities, people in their lives, and more, which could lead to misinformation spreading at an incredibly fast pace.

Hopefully, as these kinds of countermeasures mature, their accuracy will only improve, giving us a much more accessible way to double-check the authenticity of the images we come across in our day-to-day lives.


Copilot gets a big redesign and a new way to edit your AI-generated images

It’s been one year since Bing Chat received its generative AI power-up, and we’ve seen it change a lot since then – including a rebranding to Copilot. To celebrate the first anniversary, Microsoft has decided to redesign Copilot’s homepage as well as introduce a new editing feature.

The company states that when you visit the AI engine’s desktop website, “you will see… a cleaner, sleeker look”. In the middle of the page is a revolving carousel of sample prompts, each with an accompanying image. Its purpose, according to Microsoft, is to give you an idea of what Copilot is capable of – to get those creative juices flowing. It’s certainly more engaging than the previous version, whose page had three sample text prompts next to each other with no indication that Copilot could create images.

Copilot on mobile is receiving an identical update. The app has the same carousel of sample prompts with a picture above to give you some ideas. You also have the option to toggle GPT-4 for better results. Activating it turns the software’s blue accents to purple. 

Tweaking prompts

As for the feature mentioned earlier, it’s called Designer. It allows you to make tweaks to generated content, like highlighting certain aspects, blurring the background, or adding a unique filter. As an example, let’s take Copilot’s suggestion of creating an image of an animal wearing a football helmet. Moving your cursor over the picture makes a bold white outline appear around an object, and clicking it selects that portion.

A couple of options appear at the bottom of the window. We chose to tell Copilot to make the colors pop. After a few seconds, the finished product appears. You can then either undo the effect or keep it. For filters, you have eight to choose from. Pixel art, block print, and claymation are some of the selections. Like the edits before, applying a filter takes a few seconds. 

[Images: Copilot highlighting a subject; Copilot making the colors pop; Copilot’s pixel art filter (Image credit: Future)]

Designer is free for everyone to try out. However, subscribers to Copilot Pro will be given extra tools: they can resize generated content and regenerate images in either a square or landscape orientation. Microsoft says it will eventually roll out a “Designer GPT” to Copilot. The company calls it a canvas of sorts where people can “visualize [their] ideas.” If we had to take a guess, it could be a publicly available GPT model that you can use to create editing tools. OpenAI offers a similar service with its online store. We reached out to Microsoft for more details. This story will be updated at a later time.

Check out TechRadar's list of the best free drawing software for 2024 if you'd like to find a way to make the edits yourself.


Google’s Instrument Playground offers a taste of an AI-generated musical future

Google has opened the gates to its latest experimental AI, Instrument Playground, which allows people to generate 20-second music tracks with a single text prompt.

If that description sounds familiar, that’s because other companies offer something similar, like Meta with MusicGen. Google’s version adds two unique twists. First, it’s claimed to be capable of emulating over 100 instruments from around the world, ranging from common ones like the piano to more obscure woodwinds like the Chinese dizi.

Secondly, the company states you can “add an adjective” to the prompt to give it a certain mood. For example, putting in the word “Happy” will have Instrument Playground generate an upbeat track while “Merry” will create something more Christmassy. It’s even possible to implement sound effects by choosing one of three modes: Ambient, Beat, and Pitch. For the musically inclined, you can activate Advanced mode to launch a sequencer where you can pull together up to four different AI instruments into one song.
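Going by Google’s description, a request boils down to an instrument name, an optional mood adjective, and one of the three sound-effect modes. Here’s a small, purely illustrative sketch of how such a request might be composed – Google hasn’t published an API for the experiment, so every name and field below is an assumption:

```python
# Illustrative only: Instrument Playground is a web experiment, not an API,
# so these names and fields are assumptions.

VALID_MODES = {"Ambient", "Beat", "Pitch"}  # the three sound-effect modes

def build_request(instrument: str, adjective: str | None = None,
                  mode: str | None = None) -> dict:
    """Combine an instrument with an optional mood adjective (e.g. 'Happy'
    or 'Merry') and an optional mode, mirroring the prompts described above."""
    if mode is not None and mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {VALID_MODES}")
    prompt = f"{adjective} {instrument}" if adjective else instrument
    return {"prompt": prompt, "mode": mode, "duration_seconds": 20}

# Advanced mode's sequencer can layer up to four AI instruments into one song.
song = [build_request("electric guitar", "Happy", "Beat"),
        build_request("dizi", "Merry")]
assert len(song) <= 4
print(song)
```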

Live demo

Instrument Playground is publicly available, so we decided to take it for a spin.

Upon going to the website, you’ll be asked what you want to play. If you’re having a hard time deciding, there’s a link below the prompt that opens a list of 65 different instruments. We said we wanted an upbeat electric guitar, and to our surprise, the AI added backup vocals to the riff – sort of. Most of the lyrics are incomprehensible gibberish, although Chrome’s Live Caption apparently picked up the word “Satan” in there.

The generated song plays once (although you can replay it at any time by clicking the Help icon). Afterward, you can use the on-screen keyboard to work on the track. It’s not very expansive, as users are only given access to 12 keys centered around the C major and C minor scales. The keys on the page are tied directly to the number keys on a computer keyboard, so you can use those instead of slowly clicking each one with a mouse.

[Image: Instrument Playground example (Image credit: Future)]

You can use the three modes mentioned earlier to manipulate the file. Ambient lets you alter the track as a whole, Beat highlights what the AI considers to be the “most interesting peaks”, and Pitch can alter the length of a selected portion. Users can even shift the octave higher or lower. Be aware that the editing tools are pretty rudimentary – this isn’t GarageBand.

Upon finishing, you can record an audio snippet, which you can then download to your computer as a .wav file.

In the works

If you’re interested in trying out Instrument Playground, keep in mind that this is an experimental technology that is far from perfect. We’re not musicians, but even we could tell there were several errors in the generated music: our drum sample had a piano playing in the background, and the xylophone sounded like someone hitting a bunch of PVC pipes.

We reached out to Google with several questions, such as when the AI will support the claimed 100-plus instruments (remember, it’s only at 65 at the time of writing) and what the company intends to do with it. Right now, Instrument Playground feels like little more than a digital toy, only capable of creating simple beats. It'd be great to see it do more. This story will be updated at a later time.

While we have you, be sure to check out TechRadar's list of the best free music-making software in 2023.


Microsoft Edge may introduce a new AI-generated writing feature – and that makes me nervous

Microsoft Edge could drastically change the way we interact with written content on the web, thanks to a new AI writing feature that copies existing content and regurgitates the information in a more ‘personal’ tone at the click of a button.

According to Windows Latest, the GPT-4-powered feature allows users to select text on a webpage and have it rewritten in a tone and length of their choice. Microsoft Edge’s AI offers customizable tones like professional, casual, enthusiastic and informational, as well as format options that include a simple paragraph, email or blog post layout. 

The feature is integrated into the browser itself, so more users can access it more quickly. This is helpful if you want to generate ideas or make a quick change of tone.
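Under the hood, a feature like this presumably just wraps your selected text in a GPT-4 prompt that specifies the chosen tone, format, and length. Here’s a rough sketch of what that composition could look like – the tone and format options come from the report, but the template and function are my own assumptions, not Microsoft’s implementation:

```python
# Speculative sketch of how a 'rewrite selection' request might be built.
# Tones and formats are from the report; the prompt template is an assumption.

TONES = {"professional", "casual", "enthusiastic", "informational"}
FORMATS = {"paragraph", "email", "blog post"}

def build_rewrite_prompt(selected_text: str, tone: str, fmt: str,
                         length: str = "similar in length") -> str:
    """Build an instruction a GPT-4-class model could follow to rewrite
    the user's selected text in the chosen tone and format."""
    if tone not in TONES or fmt not in FORMATS:
        raise ValueError("unsupported tone or format")
    return (f"Rewrite the following text in a {tone} tone, formatted as a "
            f"{fmt} and {length} to the original:\n\n{selected_text}")

print(build_rewrite_prompt("Our Q3 numbers were good.", "enthusiastic", "email"))
```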

So far, Microsoft has been testing the feature with a small group of users in the Canary version of Chromium-based Edge, so we’ll have to wait and see if it ends up making its way officially into Microsoft Edge.

It’s not all bad, is it?

I don’t mean to harp on about the doom and gloom aspect of a feature like this, but we do have to think about the negatives of AI before the positives, because once the technology is out there, it can’t be taken back.

Microsoft Edge’s AI could allow more people to break into blog writing who might otherwise feel a little nervous about getting their work out there without any mistakes in the copy. It would be useful while you’re researching and looking for a springboard for ideas, and it would help you write boring but important emails without too much effort.

However, because the tool is web-integrated and uses text on the web, it’ll become virtually impossible to detect whether or not a person's blog, email, or pitch has been plagiarized and AI-generated. Anyone could feed the tool a site’s copy, quickly make slight alterations, and pass it off as their own without any of the skill or hard work that goes into actually writing.

Microsoft’s efforts to cram artificial intelligence into its own products as quickly as possible, particularly after the success of Bing AI, could have some unforeseen repercussions if it’s not careful. We can only hope that if the Edge AI writer does make its debut, it proves me wrong and stays a writing tool, not a crutch.


Google wants you to send AI-generated poems using its strange digital postcards

Google has redesigned its little-known Arts & Culture app, introducing new features plus an improved layout for easier exploration.

We wouldn’t blame you if you weren’t aware that Arts & Culture even existed in the first place; it’s a pretty niche mobile app aimed at people who want to learn more about the art world and its history. It looks like Google is attempting to attract a bigger audience by making the Android app more “intuitive to explore… [while also] creating new ways to discover and engage with culture.” Leading the charge, so to speak, is the AI-powered Poem Postcards tool. Utilizing the company’s PaLM 2 model, the tool asks you to select a famous art piece and then choose from a variety of poetic styles (sonnets, limericks, and ballads, to name a few) in order to create an AI-generated poem.

[Image: Poem Postcards on Google Arts & Culture (Image credit: Google)]

After a few seconds, you can share your generated work with friends or have the AI write up something new. We should mention that you can also access Poem Postcards on your desktop via the Arts & Culture website, although it appears to be “experimental”, so it may not work as well as its mobile counterpart.

Endless art feed

The other major feature is the new Inspire section, which offers an endless scrolling feed akin to TikTok. It brings up a series of art pieces with the occasional cultural news story and exhibition advertisement stuffed in between. The app doesn’t just focus on paintings or sculptures, either, as the feed will throw in the occasional post about movies, too.

In the bottom right-hand corner of Inspire entries is a “cultural flywheel”. Tapping it opens a menu where you can discover tangentially related content. Google states it is “always investigating new ways to connect cultural content”, meaning the flywheel will see its own set of updates over time.

As for the layout, the company has added buttons on the Explore tab for specific topics. If you want to look for art pertaining to sports, science, or even your favorite color, it’s all at your fingertips. There’s also a Play tab on the bottom bar where you can enjoy games like the adorable Return of the Cat Mummy.

[Image: The Arts & Culture app's new layout (Image credit: Google)]

The redesigned Arts & Culture app is currently available on Android through the Google Play Store with an iOS version “soon to follow”. The company says Poem Postcards is only available “in select countries”. We reached out to the tech giant for clarification. This story will be updated at a later time.

Be sure to check out TechRadar's list of the best drawing apps for 2023 if you ever decide to scratch that artistic itch.  


The voter’s guide to AI-generated election misinformation

We live in a time when AI-driven tech is starting to take shape in a real, tangible way, and our human cognitive faculties may prove crucial in ways we don’t even immediately realize.

Multiple outlets and digital experts have raised concerns about the upcoming 2024 US election (a traditionally very human affair) and the perpetual surge of information – and misinformation – driven by generative AI. We’ve seen recent elections in many countries happen in tandem with the formation of rapidly-growing pockets of users on social media platforms where misinformation can spread like wildfire.

These groups rapidly share information from dubious sources and questionable figures, false or incorrectly contextualized information from foreign agents or organizations, and misinformation from straight-up bogus news sites. In not-so-distant memory, we’ve witnessed the proliferation of conspiracy theories and efforts to discredit the outcomes of elections based on claims that have been proven false.

The upcoming 2024 US presidential race looks set to join that list, given how easy content generation has become in our present AI-aided era.

The misinformation sensation

Experts in the field have said as much: AI-generated content that looks and sounds human is already saturating all kinds of content spaces. This adds to the work it takes to sort through and curate the sheer amount of information and data online – work that also depends on how much or how little reading and scrutiny a user is willing to do in the first place.

Such a sentiment is expressed by Ben Winters, senior counsel at the Electronic Privacy Information Center, a non-profit privacy research organization. “It will have no positive effects on the information ecosystem,” he says, adding that it will continue to lower users’ trust in content they find online.

Manipulated images and other specifically-formulated media aren’t a new phenomenon – photoshopped pictures, impersonating emails, and robocalls are commonly found in our everyday lives. One huge issue with these – and other novel forms of misinformation – is how much easier it’s become to make such content.

[Image: ChatGPT has become incredibly easy to access – and abuse. (Image credit: Shutterstock / Tada Images)]

The ease of lying

Not only that, but it’s also become easier to target both specific groups and even specific individuals thanks to AI. With the right tools, it’s now possible to generate highly-tailored content much more efficiently.

If you’ve been following the stories of the development and public debut of AI tools like those developed by OpenAI, you already know that AI-assisted software can create audio based on pre-existing voice input, put together fairly convincing text in all types of tones and styles, and generate images of nearly anything you ask it to. It’s not difficult to imagine these faculties being used to make politically-motivated content of all kinds.

You need only a little technical literacy to engage with such tools; otherwise, anyone’s targeted propaganda wish is AI’s command. And while AI detection tools already exist and continue to be developed, they’ve demonstrated markedly mixed effectiveness.

One extra wrinkle in all this, as Mekela Panditharatne, counsel for the democracy program at the Brennan Center for Justice at New York University School of Law, points out, is that large language models (LLMs) such as ChatGPT and Google Bard are trained on an immense quantity of online data. As far as the public knows, there’s no process to pick through and verify the accuracy of any one bit of that information, so misinformation and false claims get folded in.

[Image: OpenAI recently shut down its own AI detection program, AI Classifier – but do companies creating AI tools have a moral responsibility to help separate man from machine? (Image credit: Shutterstock.com / rafapress)]

Fighting the bots

Some countries have made reactive efforts to bring forward legislation that starts to address issues like these, and the tech companies running these services have put some safeguarding measures in place.

Is it enough, though? I’m probably not alone in my hesitation to put my worries in this regard to rest, especially considering multiple countries have major elections coming up in the next year.

One scenario of particular concern, highlighted by Panditharatne, involves swathes of content being generated and used to bombard people in order to discourage them from voting. As I mentioned above, it’s possible to automate large amounts of authentic-sounding material to this end, and this could convince someone that they are not able to (or simply shouldn’t) vote.

That said, reacting may still not be all that effective. While it’s better than not addressing the problem at all, our memories and attention spans are fickle things. Even if we later see more correct or accurate information, once we have an initial impression and opinion, it can be hard for our brains to let go of it. “The exposure to the initial misinformation is hard to overcome once it happens,” says Chenhao Tan, an assistant professor of computer science at the University of Chicago.

What can we do about it? 

Content that AI tools have spat out has already spread virally on social media platforms, and the American Association of Political Consultants has cautioned about the “threat to democracy” presented by AI-aided means like deepfaked videos. AI-generated videos and imagery have already been released by the likes of GOP presidential candidate Ron DeSantis and the Republican National Committee.

Darrell West from the Center for Technology Innovation, a think tank in Washington D.C., expects to see an increase in AI-created videos, audio, and images to paint political opponents in a bad light. He expressed concerns that voters might “take such claims at face value” and make voting decisions based on false information. 

[Image: A recent ‘attack ad’ run by Republican presidential hopeful Ron DeSantis featured the voice of Donald Trump – but it was in fact AI-generated. (Image credit: Alex Wong/Getty Images)]

So, now that I’ve loaded your plate with doom and gloom (sorry), what are we to do? Well, West recommends that you make an extra effort to consult a variety of media sources and double-check the veracity of claims, especially bold, decisive statements. He advises that you “examine the source and see if it is a credible source of information.”

Heather Kelly of the Washington Post has also written a longer guide on how to critically examine what you’re consuming, especially with respect to political material. She recommends starting with your own judgment and considering whether what you’re consuming could be a vehicle for misinformation, and why; taking your time to actually process and reflect on what you’re reading, watching, or looking at; and saving sources you find helpful and informative to build up a collection you can consult as developments occur.

In the end, it’s as it always has been: the last bastion against misinformation is always you, the reader, the voter. Although AI tools have made it easier to manufacture falsehoods, it’s ultimately up to us to verify that what we read is fact, not fiction. Bear that in mind the next time you’re watching a political ad – it only takes a minute to do your own research online.


Even OpenAI can’t tell the difference between original content and AI-generated content – and that’s worrying

OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it had developed for detecting content created by AI rather than by humans. ‘AI Classifier’ has been scrapped just six months after its launch – apparently due to a ‘low rate of accuracy’, says OpenAI in a blog post.

ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives, with a slew of rival services and copycats. Of course, the flood of AI-generated content does bring up concerns from multiple groups surrounding inaccurate, inhuman content pervading our social media and newsfeeds.

Educators in particular are troubled by the different ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears, not just within education but in wider spheres like corporate workspaces, medical fields, and coding-intensive careers. The idea behind the tool was that it should be able to determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.

Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a very prominent fault: it gets things wrong in both directions. Students and faculty have taken to Reddit to protest the inaccurate results, with students stating that their own original work is being flagged as AI-generated content, and faculty complaining about AI work passing through these detectors unflagged.

[Embedded Reddit post: Turnitin’s “AI Detection Tool” strikes (wrong) again, from r/ChatGPT]
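Back-of-the-envelope arithmetic shows why even a ‘small’ error rate is so damaging in a classroom, where human-written essays vastly outnumber AI-written ones. The rates below are illustrative assumptions, not Turnitin’s published figures:

```python
# Illustrative base-rate arithmetic; these rates are assumptions,
# not Turnitin's published numbers.

essays = 500                 # essays submitted across a course
ai_share = 0.10              # suppose 10% were actually chatbot-written
false_positive_rate = 0.05   # honest essays wrongly flagged
true_positive_rate = 0.80    # AI essays correctly flagged

honest = essays * (1 - ai_share)
ai_written = essays * ai_share

wrongly_accused = honest * false_positive_rate   # 450 * 0.05 = 22.5
caught = ai_written * true_positive_rate         # 50 * 0.80 = 40
missed = ai_written - caught                     # 10 slip through unflagged

print(f"Wrongly accused: {wrongly_accused:.1f}, caught: {caught:.1f}, "
      f"missed: {missed:.1f}")
```

Even with generous detection rates, dozens of honest students get flagged while a chunk of AI-written work sails through – exactly the double failure being reported.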

It is an incredibly troubling thought: the idea that the makers of ChatGPT can no longer differentiate between what is a product of their own tool and what is not. If OpenAI can’t tell the difference, then what chance do we have? Is this the beginning of a misinformation flood, in which no one will ever be certain if what they read online is true? I don’t like to doomsay, but it’s certainly worrying.


Amazon has a big problem as AI-generated books flood Kindle Unlimited

Along with the impressive demonstrations and sounds of alarm that have come with the dawn of text-generating chatbots, we’re now seeing some of the more questionable and less desirable outcomes starting to materialize.

Authors and several news outlets have recently reported a significant uptick in AI-generated books showing up in multiple best-seller lists, many of them seemingly nonsense.

Self-publishing, such as via Amazon’s Kindle Direct Publishing program, has become a way for many genuine authors to bring their work to the public and build a following without the help of a large publisher. Because these self-publishing capabilities are purposely easy to sign up for, it seems anyone can generate endless AI-written books and upload them to be sold on Amazon’s ebook store and made available for reading via Kindle Unlimited.

Recently, indie author Caitlyn Lynch tweeted that only 19 of the top 100 best sellers in Amazon’s Teen & Young Adult Contemporary Romance ebook chart were real, legitimate books. The rest were nonsensical, incoherent, and seemingly AI-generated.


Motherboard later looked into dozens of books on the platform and saw that, a few days after Lynch’s tweets, the AI books had vanished from the best-seller lists, probably removed by Amazon.

They were, however, still available for purchase, and had enjoyed a significant amount of visibility before vanishing. Also, as Lynch very understandably speculates, the mass uploading of AI-generated books could be used to facilitate click-farming, where 'bots' click through a book automatically, generating royalties from Kindle Unlimited, which pays authors by the number of pages read in an ebook. So it doesn’t matter that these books disappear: the people running such a scheme can simply upload as many as they like to replace the removed ones.
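The economics of such a scheme are easy to sketch. Kindle Unlimited pays per page read; the per-page rate below is an assumption of roughly half a US cent, in line with historically reported KENP payouts, not an official Amazon figure:

```python
# Back-of-the-envelope click-farming economics. The per-page rate is an
# assumption based on historically reported KENP payouts, not Amazon's figure.

per_page_rate = 0.0045    # USD per Kindle Unlimited page read (assumed)
pages_per_book = 200      # length of a generated book (assumed)
books_uploaded = 100
bot_readthroughs = 50     # full bot 'reads' per book before removal (assumed)

pages_read = pages_per_book * books_uploaded * bot_readthroughs
payout = pages_read * per_page_rate
print(f"{pages_read:,} pages read -> ${payout:,.2f} in royalties")
# 1,000,000 pages -> $4,500.00 – and removed books can simply be re-uploaded.
```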

A major concern quickly emerges for both authors and readers: most of us are seeking out books that, at least for now, are written by human authors, and this flood makes those books harder to find. Lynch, elaborating on her views in a Twitter thread, emphasized that this “will … be the death knell for Kindle Unlimited” if Amazon cannot contain it.

[Image: An Amazon warehouse (Image credit: Amazon)]

What is Amazon doing about it?

Motherboard reached out to Amazon and received a reply stating that it has “clear guidelines” for which books can be listed for sale, and that it investigates when concerns are raised in order to protect both readers and authors. It didn’t explicitly state that it was making a specific effort to address the apparent spam-like uploading of nonsensical, incoherent AI-generated books. It’s worth Amazon taking an active approach to rectifying this issue: both to reassure readers that it’s worth continuing to support authors via ebook sales and page views (which result in royalties for authors on Kindle Unlimited), and to reassure authors that it’s worth putting their work on sale on Amazon.

We’ve also contacted Amazon to find out what it is doing about this, and we’ll update this story when we hear back.

AI-generated and AI-assisted books aren’t totally new; they followed quite quickly after the debut of text- and image-generating AI tools such as ChatGPT and Midjourney. These books were contentious from the start, as many artists and authors felt that such generated books denigrated the work it takes to put together, write, and publish a book.

Furthermore, AI generators work by scraping huge amounts of visual and text content from the internet – some of which the creators of this content never consented to. 

Mass-flooding of best-seller lists with nonsensical books will only intensify these concerns about quality control and authenticity. It’s not clear why there’s such a boom in AI-generated books appearing in best-seller lists, but many speculate that it’s due to bot-farming, where large numbers of books are automatically generated and published. In my opinion, if this is the case, then it’s definitely up to Amazon to address this problem, as authors and readers don’t have the technical capabilities to counteract such operations.

[Image: AI danger (Image credit: Getty Images)]

Not just about plagiarism

Chris Cowell, a software developer, told the Washington Post about one such instance, in which an AI plagiarized his work and the result was sold on Amazon. AI is still taking work from human authors, which raises concerns about plagiarism and copyright infringement – but there’s also the matter of AI text generators spitting out misinformation.

That can then lead to one AI-written book using text from another AI-written book, without any fact-checking; especially in the case of non-fiction books, this creates a worrying feedback loop that spreads misinformation and makes it hard to pin down the origin of statements.

For now, maybe Amazon will streamline its process of removing AI-generated nonsensical content as it appears, but greater efforts are needed. As of May 2023, Amazon’s Kindle Direct Publishing didn’t require sellers to disclose whether a book had been written (or illustrated) with the help of AI generators such as ChatGPT or Midjourney.

There's also a big problem that continues to plague Amazon and other online marketplaces across a multitude of products, and books are no exception: fake reviews. AI text generators make this worse by making it easier to flood a review section in both content and quantity. With Prime Day coming up, make sure you check out our guide on how to spot fake reviews on Amazon.

Unfortunately, along with all the positive new things that are possible with AI generators, inevitably, they can also be misused. Hopefully, Amazon acknowledges the growing concerns coming from both authors and readers, and makes efforts that help set a precedent for protecting human-created works – and their audiences.
