OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won’t be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. A new report, the latest in this saga, suggests that OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is near the front of the AI race, striving to be the first to deliver a software tool that comes as close as possible to communicating the way humans do: able to talk to us using sound as well as text, and capable of recognizing images and objects.

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of these new capabilities. They claim that the incoming model has better logical reasoning than the models currently available to the public, and that it can convert text to speech. None of this is new for OpenAI as such, but what is new is all this functionality being unified in the rumored multimodal model.

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it’s still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google’s AI endeavor, Gemini.


TOPS explained – exactly how powerful is Apple’s new M4 iPad chip?

Apple announced the M4 chip, a powerful new upgrade that will arrive in the new iPad Pro (and, further down the line, the best MacBooks and Macs). You can check out our beat-by-beat coverage of the Apple event, but one element of the presentation has left some users confused: what exactly does TOPS mean?

TOPS is an acronym for 'trillion operations per second', and is essentially a hardware-specific measure of AI capability. More TOPS means faster on-chip AI performance – in this case, from the Neural Engine found in the Apple M4 chip.

The M4 chip is capable of 38 TOPS – that's 38,000,000,000,000 operations per second. If that sounds like a staggeringly massive number, well, it is! Modern neural processing units (NPUs) like Apple's Neural Engine are advancing at an incredibly rapid rate; for example, Apple's own A16 Bionic chip, which debuted in the iPhone 14 Pro less than two years ago, offered 17 TOPS.

Apple's new chip isn't even the most powerful AI chip about to hit the market – Qualcomm's upcoming Snapdragon X Elite purportedly offers 45 TOPS, and is expected to land in Windows laptops later this year.

How is TOPS calculated?

The processes by which we measure AI performance are still in their relative infancy, but TOPS provides a useful and user-accessible metric for gauging how 'good' a given processor is at handling AI tools.

I'm about to get technical, so if you don't care about the mathematics, feel free to skip ahead to the next section! The current industry standard for calculating TOPS is TOPS = 2 × MAC unit count × Frequency / 1 trillion. 'MAC' stands for multiply-accumulate; a MAC operation is basically a pair of calculations (a multiplication and an addition) that are run by each MAC unit on the processor once every clock cycle, powering the formulas that make AI models function. Every NPU has a set number of MAC units determined by the NPU's microarchitecture.

'Frequency' here is defined by the clock speed of the processor in question – specifically, how many cycles it can process per second. It's a common metric also used in CPUs, GPUs, and other components, essentially denoting how 'fast' the component is. 

So, to calculate how many operations per second an NPU can handle, we simply multiply the MAC unit count by 2 for our number of operations, then multiply that by the frequency. This gives us an 'OPS' figure, which we then divide by a trillion to make it a bit more palatable (and kinder on your zero key when typing it out).
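
To make that arithmetic concrete, here's a minimal Python sketch of the formula described above. Apple doesn't publish the Neural Engine's internals, so the MAC-unit count and clock frequency below are purely hypothetical placeholders, chosen only so the output lands near the quoted 38 TOPS figure.

```python
# Minimal sketch of the industry-standard formula:
# TOPS = 2 x MAC unit count x frequency / 1 trillion

def tops(mac_units: int, frequency_hz: float) -> float:
    """Each MAC unit performs two operations (a multiply and an add) per clock cycle."""
    ops_per_second = 2 * mac_units * frequency_hz
    return ops_per_second / 1e12  # divide by a trillion to get TOPS

# Hypothetical NPU: 13,056 MAC units clocked at ~1.455GHz (illustrative values only)
print(f"{tops(13_056, 1.455e9):.1f} TOPS")  # prints roughly 38.0 TOPS
```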

Simply put, more TOPS means better, faster AI performance.

Adobe Premiere Pro's Firefly Video AI tools in action

Adobe’s Firefly generative AI tool can be hardware-accelerated by your device’s NPU. (Image credit: Adobe)

Why is TOPS important?

TOPS is, in the simplest possible terms, our current best way to judge the performance of a device for running local AI workloads. This applies both to the industry and the wider public; it's a straightforward number that lets professionals and consumers immediately compare the baseline AI performance of different devices.

TOPS is only applicable for on-device AI, meaning that cloud-based AI tools (like the internet's favorite AI bot, ChatGPT) don't typically benefit from better TOPS. However, local AI is becoming more and more prevalent, with popular professional software like the Adobe Creative Cloud suite starting to implement more AI-powered features that depend on the capabilities of your device.

It should be noted that TOPS is by no means a perfect metric. At the end of the day, it's a theoretical figure derived from hardware statistics and can differ greatly from real-world performance. Factors such as power availability, thermal systems, and overclocking can impact the actual speed at which an NPU can run AI workloads.

To that end, though, we're now starting to see AI benchmarks crop up, such as Procyon AI from UL Benchmarks (makers of the popular 3DMark and PCMark benchmarking programs). These can provide a much more realistic idea of how well a given device handles real-world AI workloads. You can expect to see TechRadar running AI performance tests as part of our review benchmarking in the near future!


OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given Sora access to “visual artists, designers, creative directors, and filmmakers” and revealed their efforts in a “first impressions” blog post.

While all of the films, ranging in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist in Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


Google Lens just got a powerful AI upgrade – here’s how to use it

We've just seen the Samsung Galaxy S24 series unveiled with plenty of AI features packed inside, but Google isn't slowing down when it comes to upgrading its own AI tools – and Google Lens is the latest to get a new feature.

The new feature is actually an update to the existing multisearch feature in Google Lens, which lets you tweak searches you run using an image: as Google explains, those queries can now be more wide-ranging and detailed.

For example, Google Lens already lets you take a photo of a pair of red shoes, and append the word “blue” to the search so that the results turn up the same style of shoes, only in a blue color – that's the way that multisearch works right now.

The new and improved multisearch lets you add more complicated modifiers to an image search. So, in Google's own example, you might search with a photo of a board game (above), and ask “what is this game and how is it played?” at the same time. You'd get instructions for playing it from Google, rather than just matches to the image.

All in on AI

Two phones on an orange background showing Google Lens

(Image credit: Google)

As you would expect, Google says this upgrade is “AI-powered”, in the sense that image recognition technology is being applied to the photo you're using to search with. There's also some AI magic applied when it comes to parsing your text prompt and correctly summarizing information found on the web.

Google says the multisearch improvements are rolling out to all Google Lens users in the US this week: you can find it by opening up the Google app for Android or iOS, and then tapping the camera icon to the right of the main search box (above).

If you're outside the US, you can try out the upgraded functionality, but only if you're signed up for the Search Generative Experience (SGE) trial that Google is running – that's where you get AI answers to your searches rather than the familiar blue links.

Also just announced by Samsung and Google is a new Circle to Search feature, which means you can just circle (or scribble on) anything on screen to run a search for it on Google, making it even easier to look up information visually on the web.


Google Gemini is its most powerful AI brain so far – and it’ll change the way you use Google

Google has announced the new Gemini artificial intelligence (AI) model, an AI system that will power a host of the company’s products, from the Google Bard chatbot to its Pixel phones. The company calls Gemini “the most capable and general model we’ve ever built,” claiming it would make AI “more helpful for everyone.”

Gemini will come in three 'sizes': Ultra, Pro and Nano, with each one designed for different uses. All of them will be multimodal, meaning they’ll be able to handle a wide range of inputs, with Google saying that Gemini can take text, code, audio, images and video as prompts.

While Gemini Ultra is designed for extremely demanding use cases such as in data centers, Gemini Nano will fit in your smartphone, raising the prospect of the best Android smartphones gaining a significant AI advantage.

With all of this new power, Google insists that it conducted “rigorous testing” to identify and prevent harmful results arising from people’s use of Gemini. That was challenging, the company said, because the multimodal nature of Gemini means two seemingly innocuous inputs (such as text and an image) can be combined to create something offensive or dangerous.

Coming to all your services and devices

Google has been under pressure to catch up with OpenAI’s ChatGPT and its advanced AI capabilities. Just a few days ago, in fact, news was circulating that Google had delayed its Gemini announcement until next year due to its apparent poor performance in a variety of languages. 

Now, it turns out that news was either wrong or Google is pressing ahead despite Gemini’s rumored imperfections. On this point, it’s notable that Gemini will only work in English at first.

What does Gemini mean for you? Well, if you use a Pixel 8 Pro phone, Google says it can now run Gemini Nano, bringing all of its AI capabilities to your pocket. According to a Google blog post, Gemini is found in two new Pixel 8 Pro features: Smart Reply in Gboard, which suggests message replies to you, and Summarize in Recorder, which can sum up your recorded conversations and presentations.

The Google Bard chatbot has also been updated to run Gemini, which the company says is “the biggest upgrade to Bard since it launched.” As well as that, Google says that “Gemini will be available in more of our products and services like Search, Ads, Chrome and Duet AI” in the coming months.

As part of the announcement, Google revealed a slate of Gemini demonstrations. These show the AI guessing what a user was drawing, playing music to match a drawing, and more.

Gemini vs ChatGPT

Google Gemini revealed at Google I/O 2023

(Image credit: Google)

It’s no secret that OpenAI’s ChatGPT has been the most dominant AI tool for months now, and Google wants to end that with Gemini. The company has made some pretty bold claims about its abilities, too.

For instance, Google says that Gemini Ultra’s performance exceeds current state-of-the-art results in “30 of the 32 widely-used academic benchmarks” used in large language model (LLM) research and development. In other words, Google thinks it eclipses GPT-4 in nearly every way.

Compared to the GPT-4 LLM that powers ChatGPT, Gemini came out on top in seven out of eight text-based benchmarks, Google claims. As for multimodal tests, Gemini won in all 10 benchmarks, as per Google’s comparison.

Does this mean there’s a new AI champion? That remains to be seen, and we’ll have to wait for more real-world testing from independent users. Still, what is clear is that Google is taking the AI fight very seriously. The ball is very much in OpenAI’s (and Microsoft's) court now.


Microsoft’s AI tinkering continues with powerful new GPT-4 Turbo upgrade for Copilot in Windows 11

Bing AI, which Microsoft recently renamed from Bing Chat to Copilot – yes, even the web-based version is now officially called Copilot, just to confuse everyone a bit more – should get GPT-4 Turbo soon enough, but there are still issues to resolve around the implementation.

Currently, Bing AI runs GPT-4, but GPT-4 Turbo will allow for various benefits including more accurate responses to queries and other important advancements.

We found out more about how progress was coming with the move to GPT-4 Turbo thanks to an exchange on X (formerly Twitter) between a Bing AI user and Mikhail Parakhin, Microsoft’s head of Advertising and Web Services.

As MS Power User spotted, Ricardo, a denizen of X, noted that they just got access to Bing’s redesigned layout and plug-ins, and asked: “Does Bing now use GPT-4 Turbo?”

Parakhin responded on X to say that GPT-4 Turbo is not yet working in Copilot, as a few kinks still need to be ironed out.


Of course, as well as Copilot on the web (formerly Bing Chat), this enhancement will also come to Copilot in Windows 11 (which is essentially Bing AI – just with bells and whistles added in terms of controls for Windows and manipulating settings).


Analysis: Turbo mode

We’re taking the comment that a ‘few’ kinks are still to be resolved as a suggestion that much of the work around implementing GPT-4 Turbo has been carried out, meaning GPT-4 Turbo could soon arrive in Copilot – or we can certainly keep our fingers crossed that this is the case.

Expect it to bring in more accurate and relevant responses to queries as noted, and it’ll be faster too (as the name suggests). As Microsoft observes, it “has the latest training data with knowledge up to April 2023” – though it’s still in preview. OpenAI only announced GPT-4 Turbo earlier this month, and said that it’s also going to be cheaper to run (for developers paying for GPT-4, that is).

In theory, it should represent a sizeable step forward for Bing AI, and that’s something to look forward to hopefully in the near future.


Google Photos update could make it a powerful new reminders app

Google Photos continues to get smarter and it could soon gain the ability to let you set reminders for certain tasks and events, all from within the photo management app. 

There’s already a myriad of smart AI-powered options within Google Photos, from extracting text from an image and translating languages, to using the Google Lens feature to pick out even more information in photos and search Google for highlighted items. But this forthcoming reminder function, spotted by The SpAndroid, continues to build out the Photos app into more than just a place to store, edit and peruse shots.

Much like the “Copy text”, “Search” and “Listen” ‘chips’ (aka prompts) that pop up to offer you various options, an incoming Google Photos update could soon serve up the option to “Set reminder”.

Tapping this effectively lets you create an entry in the Google Calendar app. So, let's say you snapped a photo of a restaurant board offering specials on certain days; you could then use the new feature to set a calendar reminder to check out the restaurant on a particular day.

As someone who snaps photos on his phone to serve as reminders and reference points, this new feature seems particularly handy. Sure, it’s not hard to bounce into a calendar app and set your own reminder, but being able to do things with fewer taps or swipes through app menus is certainly appealing to me. And it also means the information you’re after is right in front of you, rather than forcing you to bounce between apps.

Unfortunately, this reminder feature doesn't appear to have rolled out widely yet, with it not popping up in Google Photos on my iPhone 13 Pro or Google Pixel 7 Pro. However, such updates can take time to roll out worldwide. I’m running Google Photos version 6.60, so I may need to wait until version 6.61, as that was the version The SpAndroid used to test the reminder feature.

Ever smarter software 

Given Google is pushing AI-powered tools into its software, as well as Pixel phones with the Pixel 8 Pro at the top of the pile, it’s no surprise to see it bolster Photos with AI-centric features. 

It might seem creepy that Google could extract all manner of information from your smartphone snaps, but these tools can be very handy at times, letting you do more with less back and forth between apps. 

I’m actually keen to see Google do more in embracing interoperability between its app ecosystem. I’d like Google Maps to pull Google Photos into my timeline so I can better retrace my steps when trying to remember where I went and when; you can manually add photos to Maps and map locations can be automatically added to photos, but it doesn't quite feel like there’s perfect harmony between the apps. 

Nevertheless, it’s neat to see how Google Photos continues to evolve. I only hope it sticks to the side of being handy and doesn’t fall into the realms of creepiness.


Bing AI could soon be much more versatile and powerful thanks to plug-ins

Microsoft’s Bing AI may be close to finally getting plug-ins, a feature that has been experimented with before, and will make the chatbot considerably more versatile and powerful (in theory, anyway).

Windows Latest reports that the update to add plug-ins has rolled out to a ‘small’ number of Bing Chat users over the weekend, and the tech site was one of those to get access.

Note that it appears the rollout is only happening for those using the Canary version of Microsoft’s Edge browser (and Windows Latest only got the feature in that preview release, not in the finished version of Edge).

We’re told that the AI currently offers five plug-ins to testers and you can pick any three of those to use in a session. If you want to change plug-ins, you’ll need to start a new Bing Chat session.

Windows Latest carried out some testing with a couple of those plug-ins, and the results seemed useful, with the OpenTable add-on providing some restaurant recommendations in a query.

Other plug-ins available in testing include Kayak, Klarna, and a shopping add-on for buying suggestions – we’ve already got you covered there, of course, especially for the imminent Black Friday sale – but it may be the case that different plug-ins appear for different users.


Analysis: Faster and better

Eventually, of course, there will be a whole load of plug-ins for the Bing AI, or that’s certainly Microsoft’s plan, although they’ll doubtless be rolled out in stages over time. One of those will be the much-awaited ‘no search’ function that was switched to be implemented via a plug-in not so long ago. (This allows the user to specify that the AI can’t use search content scraped from the web in its responses).

We’ve seen plug-ins in a limited test rollout before (in August), but they were pulled, so this is effectively a return of the feature – hinting it might arrive sooner rather than later.

Fingers crossed, and the good news is that Windows Latest observes that these new plug-ins seem to be more responsive and work better than the old efforts (performance-related concerns are likely one of the reasons that the test plug-ins got pulled earlier this year).


ChatGPT Plus gets big upgrade that makes it more powerful and easier to use

ChatGPT is undoubtedly one of the best artificial intelligence (AI) tools you can use right now, but a new update could make it even better by increasing the range of file types it can work with, as well as making it a little more independent when it comes to switching modes.

The changes are currently being tested in beta and are expected to come to ChatGPT Plus, the paid-for version of OpenAI’s chatbot that costs $20 / £16 / AU$28 a month. As detailed by ChatGPT user luokai on Threads (via The Verge), these changes could make a big difference to how you use the AI tool.

Specifically, ChatGPT Plus members are now able to upload various files that the chatbot can use to generate results. For instance, luokai demonstrated how ChatGPT can analyze a PDF that a user uploads, then answer a question based on the contents of that PDF.

Elsewhere, the beta version of ChatGPT can create images based on a picture uploaded by a user. That could make the chatbot much more able to generate the type of content you’re after, without just having to solely rely on your prompt or description.

Automatic mode switching

ChatGPT responding to the prompt 'is there life after death?'

(Image credit: Shutterstock / Ascannio)

That’s not all this beta update brings with it. As well as file analysis, ChatGPT could soon be able to switch modes without any user input, in a move that might make the tool much less cumbersome to use.

Right now, you need to tell ChatGPT exactly what mode you want to use, such as Browse with Bing. In the current beta, though, ChatGPT is able to determine the mode automatically based on your conversation with the chatbot.

That can extend to generating Python code or opting to use DALL-E to generate an image too, meaning you should be able to get results much closer to what you wanted without having to make an educated guess as to the best mode to use.

All of these changes could make OpenAI’s chatbot much easier to use if you’re a ChatGPT Plus subscriber. There’s no word yet on when the features will be fully rolled out, so stay tuned for more news on that front.


The AI backlash begins: artists could protect against plagiarism with this powerful tool

A team of researchers at the University of Chicago has created a tool aimed at helping online artists “fight back against AI companies” by inserting, in essence, poison pills into their original work.

Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels into digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion work by scouring the internet, picking up as many images as they can to use as training data. What Nightshade does is exploit this “security vulnerability”. As explained by the MIT Technology Review, these “poisoned data samples can manipulate models into learning” the wrong thing. For example, a model could see a picture of a dog as a cat, or a car as a cow.
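
To illustrate the general idea of a poisoned image/caption pair, here's a deliberately simplified, hypothetical Python sketch. It is not Nightshade's actual method, which optimizes imperceptible perturbations against a model's own feature extractor; the random noise and caption swap below only stand in for that process.

```python
import numpy as np

def poison_sample(image: np.ndarray, caption: str,
                  target_concept: str = "cat",
                  strength: float = 4.0,
                  seed: int = 0) -> tuple[np.ndarray, str]:
    """Return a lightly perturbed copy of `image` paired with a misleading caption.

    NOT Nightshade's algorithm: the random noise merely stands in for the
    carefully optimized perturbation a real attack would compute.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=image.shape)
    poisoned = np.clip(image.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    # The caption no longer matches the picture, so a model trained on many such
    # pairs starts to associate dog-like features with the word "cat".
    poisoned_caption = caption.replace("dog", target_concept)
    return poisoned, poisoned_caption

# Example: a stand-in 256x256 "dog photo" and its caption.
dog_photo = np.zeros((256, 256, 3), dtype=np.uint8)
img, cap = poison_sample(dog_photo, "a photo of a dog in a park")
print(cap)  # "a photo of a cat in a park"
```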

Poison tactics

As part of the testing phase, the team fed Stable Diffusion infected content and “then prompted it to create images of dogs”. After being given 50 samples, the AI generated pictures of misshapen dogs with six legs. After 100, you begin to see something resembling a cat. Once it was given 300, dogs became full-fledged cats. Below, you'll see the other trials.

Nightshade tests

(Image credit: University of Chicago/MIT Technology Review)

The report goes on to say Nightshade also affects “tangentially related” ideas because generative AIs are good “at making connections between words”. Messing with the word “dog” jumbles similar concepts like puppy, husky, or wolf. This extends to art styles as well. 

Nightshade's tangentially related samples

(Image credit: University of Chicago/MIT Technology Review)

It is possible for AI companies to remove the toxic pixels. However, as the MIT post points out, it is “very difficult to remove them”. Developers would have to “find and delete each corrupted sample.” To give you an idea of how tough this would be, a 1080p image has over two million pixels. If that wasn’t difficult enough, these models “are trained on billions of data samples.” So imagine looking through a sea of pixels to find the handful messing with the AI engine.

At least, that’s the idea. Nightshade is still in the early stages. Currently, the tech “has been submitted for peer review at [the] computer security conference Usenix.” MIT Technology Review managed to get a sneak peek.

Future endeavors

We reached out to team lead Professor Ben Y. Zhao at the University of Chicago with several questions.

He told us they do have plans to “implement and release Nightshade for public use.” It’ll be a part of Glaze as an “optional feature”. Glaze, if you’re not familiar, is another tool Zhao’s team created giving artists the ability to “mask their own personal style” and stop it from being adopted by artificial intelligence. He also hopes to make Nightshade open source, allowing others to make their own venom.

Additionally, we asked Professor Zhao if there are plans to create a Nightshade for video and literature. Right now, multiple literary authors are suing OpenAI, claiming the program is “using their copyrighted works without permission.” He says developing poisoning tools for other kinds of work will be a big endeavor “since those domains are quite different from static images.” The team has “no plans to tackle those, yet.” Hopefully someday soon.

So far, initial reactions to Nightshade are positive. Junfeng Yang, a computer science professor at Columbia University, told Technology Review this could make AI developers “respect artists’ rights more”. Maybe even be willing to pay out royalties.

If you're interested in picking up illustration as a hobby, be sure to check out TechRadar's list of the best digital art and drawing software in 2023.
