Popular AI art tool Dall-E gets a big upgrade from ChatGPT creator OpenAI

If you’ve ever messed around with AI tools online, chances are you’ve used Dall-E. OpenAI’s AI art generator is user-friendly and offers a free version, which is why we named it the best tool for beginners in our list of the best AI art generators.

You might’ve heard the name from Dall-E mini, a basic AI image generator built by Boris Dayma that enjoyed a wave of viral popularity in 2022 thanks to its super-simple functionality and free access. But OpenAI’s version is more sophisticated – now more than ever, thanks to the Dall-E 3 update.

As reported by Reuters, OpenAI confirmed on September 20th that the new-and-improved Dall-E would be available to paying ChatGPT Plus and Enterprise subscribers in October (though an official release date has not been announced yet). An OpenAI spokesperson noted that “DALL-E 3 can translate nuanced requests into extremely detailed and accurate images”, hopefully signalling a boost in the tool’s graphical capabilities – something competitors Midjourney and Stable Diffusion arguably do better right now.
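
If you’re curious what that prompt-to-image workflow looks like in practice, here’s a minimal sketch using OpenAI’s Python SDK and its images endpoint. One caveat: Dall-E 3 hadn’t been made available through the API at the time of writing, so the “dall-e-3” model name below is an assumption for illustration.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Assumption: a "dall-e-3" model name on the existing images endpoint --
# API availability had not been announced at the time of this report.
response = client.images.generate(
    model="dall-e-3",
    prompt="a watercolour painting of a lighthouse at dusk, soft rain",
    size="1024x1024",
    n=1,
)

print(response.data[0].url)  # URL of the generated image

The point of the upgrade, per OpenAI, is that a nuanced prompt like the one above should come back as a noticeably more detailed and accurate image than Dall-E 2 would produce.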

Another small step for AI

Although ChatGPT creator OpenAI has become embroiled in lawsuits over the use of human-created material for training its AI models, the Dall-E 3 upgrade actually does feel like a step in the right direction.

In addition to technical improvements to the art generation tool, the new version will also deliver a host of security and safeguarding features – some of them arguably sorely needed in AI image production services.

Most prominent is a set of mitigations within the software that prevent Dall-E 3 from being used to generate pictures of living public figures, or art in the style of a living artist. Combined with new safeguards that will (hopefully) prevent the generation of violent, inappropriate, or otherwise harmful images, I can see Dall-E 3 setting a new benchmark for legality and morality in the generative AI space.

It’s an unpleasant topic, but there’s no denying the potential dangers of art theft, deepfake videos, and ‘revenge porn’ when it comes to AI art tools. OpenAI has also stated that creators will be able to opt out of having their work used to train future text-to-image tools, which will hopefully preserve some originality – so I’m going to be cautiously optimistic about this update, despite my previous warnings about the dangers of AI.


Even OpenAI can’t tell the difference between original content and AI-generated content – and that’s worrying

OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it had developed for detecting content created by AI rather than humans. ‘AI Classifier’ has been scrapped just six months after its launch – apparently due to a ‘low rate of accuracy’, according to an OpenAI blog post.

ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives and spawning a slew of rival services and copycats. Naturally, the flood of AI-generated content has raised concerns from multiple groups about inaccurate, machine-made content pervading our social media and news feeds.

Educators in particular are troubled by the many ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears, not just within education but in wider spheres like corporate workplaces, medicine, and coding-intensive careers. The idea behind the tool was that it could determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.
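
OpenAI never published how AI Classifier worked under the hood, but to make the idea concrete, here’s a hypothetical sketch of one common detection heuristic: perplexity scoring with an open language model. Everything here – the model choice, the threshold – is an illustrative assumption, not OpenAI’s method.

# Hypothetical illustration of perplexity-based AI-text detection.
# This is NOT OpenAI's AI Classifier, whose internals were never published.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return cross-entropy loss
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

# Heuristic: model-generated text tends to look more "predictable" (lower
# perplexity) to a language model than human prose does. The threshold is
# an arbitrary assumption for illustration only.
AI_THRESHOLD = 40.0

def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < AI_THRESHOLD

Heuristics like this are also exactly why such detectors misfire: plenty of human writing is statistically predictable, and plenty of AI output isn’t – which brings us to Turnitin.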

Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a prominent fault: it gets things wrong in both directions. Students and faculty have taken to Reddit to protest the inaccurate results, with students saying their own original work is being flagged as AI-generated, and faculty complaining that AI-written work is passing through the detector unflagged.

(See, for example, the r/ChatGPT thread “Turnitin’s ‘AI Detection Tool’ strikes (wrong) again”.)

It’s an incredibly troubling thought: the makers of ChatGPT can no longer tell what is a product of their own tool and what isn’t. If OpenAI can’t tell the difference, what chance do the rest of us have? Is this the beginning of a misinformation flood, where no one can ever be certain that what they read online is true? I don’t like to doomsay, but it’s certainly worrying.


There’s trouble in AI paradise as Microsoft and OpenAI butt heads

OpenAI warned Microsoft early this year against rushing to integrate GPT-4 (a more advanced but less ‘stable’ language model) into Bing without further training. Microsoft pushed ahead anyway.

This led to a rush of unhinged and strange behaviour from the Bing AI tool. Now, a new report from the Wall Street Journal details the “conflict and confusion” in the two companies’ fragile alliance.

It’s worth keeping in mind that Microsoft doesn’t own OpenAI outright (as it often does with up-and-coming companies), but instead holds a 49% stake in the startup. That arrangement gave Microsoft early access to OpenAI’s ChatGPT and Dall-E to boost its Bing search engine.

It’s a setup that benefits both parties: OpenAI gains a stable financial backer and servers to host its tools, while Microsoft gets early access to those tools, forcing Google and others to scramble to catch up. The WSJ article describes this as an “open relationship” in which Microsoft maintains significant influence without outright control.

[Image: retro illustration of a man and a robot butting heads. Credit: studiostoks / Shutterstock]

Butting of heads

Microsoft's rush to incorporate GPT-4 into Bing search without the necessary further training has bred resentment on both sides.

According to the article, some Microsoft employees feel slighted that the company’s in-house AI projects are being overlooked in favour of OpenAI – which, despite the partnership, remains free to work with Microsoft’s rivals.

Apparently, there’s also a feeling that OpenAI isn’t giving some people at Microsoft full access to its tech. Others consider OpenAI’s warning against rushing its technology into products hypocritical, given how quickly OpenAI itself pushed ChatGPT out the door.

This has led to a situation where the pair are working together, and yet against each other at the same time. Time will tell whether that leads to healthy competition or a very messy breakup. 
