The AI backlash begins: artists could protect their work against plagiarism with this powerful tool

A team of researchers at the University of Chicago has created a tool aimed at helping online artists “fight back against AI companies” by inserting, in essence, poison pills into their original work.

Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels to digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion work by scouring the internet and picking up as many images as they can to use as training data; Nightshade exploits this “security vulnerability”. As explained by the MIT Technology Review, these “poisoned data samples can manipulate models into learning” the wrong thing. For example, a poisoned model could see a picture of a dog as a cat, or a car as a cow.
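To make the mechanics concrete, here is a minimal, purely illustrative sketch of how a poisoned training sample might be built. This is not Nightshade's actual algorithm, which computes carefully optimized, near-invisible pixel changes; the poison_image helper, the simple blending step, and the toy data below are all assumptions for demonstration only.

```python
import numpy as np

def poison_image(image: np.ndarray, target_features: np.ndarray,
                 strength: float = 0.04) -> np.ndarray:
    """Blend a faint trace of the attack target into an image.

    Hypothetical stand-in for Nightshade's optimized perturbation:
    this sketch just linearly mixes in features from the target
    concept, keeping pixel values in the valid [0, 1] range.
    """
    poisoned = (1.0 - strength) * image + strength * target_features
    return np.clip(poisoned, 0.0, 1.0)

# Toy data: a 64x64 RGB "dog" photo and stand-in "cat" features.
rng = np.random.default_rng(0)
dog_photo = rng.random((64, 64, 3))
cat_features = rng.random((64, 64, 3))

sample = {
    "image": poison_image(dog_photo, cat_features),  # still looks like a dog
    "caption": "a photo of a dog",                   # still labelled "dog"
}
# Each such sample quietly nudges the model's notion of "dog" toward
# cat-like features once enough of them are scraped into a training set.
```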

Poison tactics

As part of the testing phase, the team fed Stable Diffusion infected content and “then prompted it to create images of dogs”. After ingesting 50 poisoned samples, the AI generated pictures of misshapen dogs with six legs. After 100, the output began to resemble a cat, and once it had taken in 300, the dogs became full-fledged cats. Below, you'll see the other trials.

(Image: Nightshade tests – credit: University of Chicago/MIT Technology Review)

The report goes on to say Nightshade also affects “tangentially related” ideas, because generative AIs are good “at making connections between words”. Poisoning the concept of “dog” jumbles similar concepts like puppy, husky, and wolf, and the effect extends to art styles as well.

(Image: Nightshade's tangentially related samples – credit: University of Chicago/MIT Technology Review)

AI companies could, in theory, remove the toxic pixels. However, as the MIT post points out, it is “very difficult to remove them”: developers would have to “find and delete each corrupted sample.” To give you an idea of how tough this would be, a 1080p image has over two million pixels, and these models “are trained on billions of data samples.” So imagine looking through a sea of pixels to find the handful messing with the AI engine.
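For a rough sense of that scale, here is a back-of-the-envelope calculation; the five-billion sample count is an assumption standing in for the article's “billions of data samples”.

```python
# Back-of-the-envelope scale of the cleanup problem.
# Assumption: 5 billion training images stands in for "billions of samples".
pixels_per_image = 1920 * 1080          # one 1080p image: 2,073,600 pixels
training_samples = 5_000_000_000        # assumed dataset size
total_pixels = pixels_per_image * training_samples

print(f"{pixels_per_image:,} pixels per image")          # 2,073,600
print(f"{total_pixels:,} pixels in the whole dataset")   # 10,368,000,000,000,000
```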

At least, that’s the idea. Nightshade is still in the early stages. Currently, the tech “has been submitted for peer review at [the] computer security conference Usenix.” MIT Technology Review managed to get a sneak peek.

Future endeavors

We reached out to team lead Professor Ben Y. Zhao at the University of Chicago with several questions.

He told us they do have plans to “implement and release Nightshade for public use.” It’ll be part of Glaze as an “optional feature”. Glaze, if you’re not familiar, is another tool Zhao’s team created that gives artists the ability to “mask their own personal style” and stop it from being adopted by artificial intelligence. He also hopes to make Nightshade open source, allowing others to make their own venom.

Additionally, we asked Professor Zhao if there are plans to create a Nightshade for video and literature. Right now, multiple literary authors are suing OpenAI, claiming the program is “using their copyrighted works without permission.” He says developing poisoning software for other media will be a big endeavor “since those domains are quite different from static images.” The team has “no plans to tackle those, yet.” Hopefully someday soon.

So far, initial reactions to Nightshade are positive. Junfeng Yang, a computer science professor at Columbia University, told Technology Review this could make AI developers “respect artists’ rights more”, and perhaps even make them willing to pay out royalties.

If you're interested in picking up illustration as a hobby, be sure to check out TechRadar's list of the best digital art and drawing software in 2023.


Twitter begins to expand its downvote feature – but is it needed?

Twitter has been working on a way to let users downvote tweets, without making those votes public, since early 2020. Now, however, the company is expanding the feature to more users around the world, not just in the US.

Downvoting was previously confined to web users, but this wider test of the feature will also reach some iOS and Android users, who may start to see a downward-facing arrow on some tweets.

Downvoting won't hide the tweet or let the tweeter, or your followers, know that you've downvoted. Instead, it's meant to help Twitter refine the algorithm that curates the tweets you see. However, users aren't convinced.


A 'hide tweet' option instead?

If you use Reddit, you'll see the downvote button everywhere. It's a major part of the site's design, as it shows other users how a post has been received.

But Twitter has gone down another avenue here: the downvotes seemingly exist just to help the company improve its service, which feels over-engineered.


Other companies, such as YouTube, have changed how they display dislikes, with the option remaining but the number of dislikes hidden, and that has also proved controversial so far. YouTube co-founder Jawed Karim spoke of his frustration at how the platform may decline after this change.

But with Twitter, it feels as though this is a feature that doesn't need to exist. While the company explains that it's there to improve the content you see, there's still the bigger problem of the harassment and bullying that many users have been subjected to.

Rolling out a downvote button purely for Twitter's algorithm is backward; instead, there should be new features, and beefed-up existing ones, to help curb the harassment.

An 'I don't like this tweet' option would be beneficial, alongside more streamlined ways to report abuse on the platform. A downvote button that benefits Twitter, not the user, may be reason enough for the company to put the feature on pause for now and look at how it can better serve users, rather than the algorithm.
