OpenAI is working on a new tool to help you spot AI-generated images and protect you from deep fakes

You’ve probably noticed a few AI-generated images sprinkled throughout your social media feeds – and there are likely a few more you’ve scrolled right past that slipped by your keen eyes. 

For those of us who have been immersed in the world of generative AI, spotting AI images is a little easier, as you develop a mental checklist of what to look out for.

However, as the technology gets better and better, it is going to get a lot harder to tell. To solve this, OpenAI is developing new methods to track AI-generated images and prove what has and has not been artificially generated.

According to a blog post, OpenAI’s new proposed methods will add a tamper-resistant ‘watermark’ that will tag content with invisible ‘stickers.’ So, if an image is generated with OpenAI’s DALL-E generator, the classifier will flag it even if the image is warped or saturated.

The blog post claims the tool will be around 98% accurate at spotting images made with DALL-E. However, it will only flag 5-10% of pictures from other generators like Midjourney or Adobe Firefly.

So, it’s great for in-house images, but not so great for anything produced outside of OpenAI. While it may not be as impressive as one would hope in some respects, it’s a positive sign that OpenAI is starting to address the flood of AI images that are getting harder and harder to distinguish.

Okay, so this may not seem like a big deal to some, as a lot of AI-generated images are either memes or high-concept art that are pretty harmless. That said, there’s also a surge of scenarios where people are creating hyper-realistic fake photos of politicians, celebrities, people in their lives, and more, which could lead to misinformation being spread at an incredibly fast pace.

Hopefully, as these kinds of countermeasures get better and better, the accuracy will only improve, and we can have a much more accessible way to double-check the authenticity of the images we come across in our day-to-day life.

You might also like

TechRadar – All the latest technology news

Read More

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given “visual artists, designers, creative directors, and filmmakers” access to Sora, and shared their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist in Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.


OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


ChatGPT takes the mic as OpenAI unveils the Read Aloud feature for your listening pleasure

OpenAI looks like it’s been hard at work, making moves like continuing to improve the GPT store and recently sharing demonstrations of one of the other highly sophisticated models in its pipeline, the video-generation tool Sora. That said, it’s not completely resting on ChatGPT’s previous success either, and is now giving the impressive AI chatbot the capability to read its responses out loud. The feature is being rolled out on both the web and mobile versions of the chatbot. 

The new feature is called 'Read Aloud', as per an official X (formerly Twitter) post from the generative artificial intelligence (AI) company. It will come in useful for many users, including those with different accessibility needs and people using the chatbot while on the go.

Users can try it for themselves now, according to The Verge, either on the web version of ChatGPT or the mobile apps (iOS and Android), and they can select from five different voices for ChatGPT to use. The feature is available whether you use the free version open to all users, GPT-3.5, or the premium paid version, GPT-4. When it comes to languages, users can expect the Read Aloud feature to work in 37 languages (for now), and ChatGPT will be able to autodetect the language the conversation is happening in. 

If you want to try it on the desktop version of ChatGPT, there should be a speaker icon below the generated text that activates the feature. On the mobile apps, you can tap and hold the text to open the Read Aloud player, where you can play, pause, and rewind the reading of ChatGPT’s response. Bear in mind that the feature is still being rolled out, so not every user in every region will have access just yet.

A step in the right direction for ChatGPT

This isn’t the first voice-related feature that ChatGPT has received, with OpenAI introducing a voice chat feature in September 2023, which allowed users to make inquiries using voice input instead of typing. Users can keep this setting on, prompting ChatGPT to always respond out loud to their inputs.

The debut of this feature comes at an interesting time, as Anthropic recently introduced similar features to its own generative AI models, including Claude. Anthropic is an OpenAI competitor that’s recently seen major amounts of investment from Amazon. 

Overall, this new feature is great news in my eyes (or ears), primarily for expanding accessibility to ChatGPT, but also because I've had a Read-Aloud plugin for ChatGPT in my browser for a while now. I find it interesting to listen to and analyze ChatGPT’s responses out loud, especially as I’m researching and writing. After all, its responses are designed to be as human-like as possible, and a big part of how we process actual real-life human communication is by speaking and listening to each other. 

Giving ChatGPT a capability like this can help users think about how well it is responding, as it makes use of another of our primary ways of receiving verbal information. Beyond the obvious accessibility benefits for blind or partially-sighted users, I think this is a solid move by OpenAI in cementing ChatGPT as the go-to generative AI tool, opening up another avenue for humans to connect with it. 


ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane sometime in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to how much randomness the model allows when choosing each word of its response. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
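To make that concrete, here's a toy sketch of how temperature typically works under the hood in language models generally – this is an illustration of the standard technique, not OpenAI's actual implementation. The model scores each candidate next word, and the scores are divided by the temperature before being turned into probabilities: a low temperature exaggerates the gap between the top pick and the rest, while a high one flattens it, letting unlikely words through.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide raw scores by temperature, then normalize into probabilities.
    Lower temperature sharpens the distribution; higher flattens it."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max before exponentiating, for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-word scores: the model slightly prefers the first candidate.
logits = [2.0, 1.0, 0.5]  # e.g. scores for "blue", "green", "octarine"

low = softmax_with_temperature(logits, 0.2)   # near-deterministic: top word dominates
high = softmax_with_temperature(logits, 2.0)  # much flatter: weird words get a real chance
```

With temperature 0.2 the top candidate ends up with over 99% of the probability, while at 2.0 it drops below half – which is roughly why a "hot" chatbot starts saying surprising things.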

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?


OpenAI quietly slips in update for ChatGPT that allows users to tag their own custom-crafted chatbots

OpenAI is continuing to cement its status as the leading force in generative AI, adding a nifty feature with little fanfare: the ability to tag a custom-created GPT bot with an ‘@’ in the prompt. 

In November 2023, OpenAI introduced custom ChatGPT-powered chatbots designed to help users have specific types of conversations. These were named GPTs, and customers subscribed to OpenAI’s premium ChatGPT Plus service could build their own GPT-powered chatbot for their own purposes using OpenAI’s easy-to-use GPT-building interface. Users could then help train and improve their own GPTs over time, making them “smarter” and better at accomplishing the tasks asked of them. 

Then, earlier this year, OpenAI debuted the GPT store, which allows users to create their own GPT bots for specific categories like education, productivity, and “just for fun,” and then make them available to other users. Once they’re on the GPT store, the AI chatbots become searchable and can compete in leaderboard rankings against GPTs created by other users – and eventually, their creators will even be able to earn money from them. 

Surprising new feature

It seems OpenAI has now made it easier to switch to a custom GPT chatbot, with an eagle-eyed ChatGPT fan, @danshipper, spotting that you can summon a GPT with an ‘@’ while chatting with ChatGPT.


Cybernews suggests that this will make switching between different custom GPT personas more fluid. OpenAI hasn’t publicized this new development yet, and it seems the change applies specifically to ChatGPT Plus subscribers. 

This would somewhat mimic existing functionalities of apps like Discord and Slack, and could prove popular with ChatGPT users who wanted to make their own personal chatbot ecosystems populated by custom GPT chatbots that can be interacted with in a similar manner to those apps.

However, it’s interesting that OpenAI hasn’t announced or even mentioned this update, leaving users to discover it by themselves. It’s a distinctive approach to introducing new features for sure. 


Has ChatGPT been getting a little lazy for you? OpenAI has just released a fix

It would seem reports of 'laziness' on the part of the ChatGPT AI bot were pretty accurate, as its developer OpenAI just announced a fix for the problem – which should mean the bot takes fewer shortcuts and is less likely to fail halfway through a task.

The latest update to the ChatGPT code is “intended to reduce cases of 'laziness' where the model doesn’t complete a task” according to OpenAI. However, it's worth noting that this only applies to the GPT-4 Turbo model that's still in a limited preview.

If you're a free user on GPT-3.5 or a paying user on GPT-4, you might still notice a few problems in terms of ChatGPT's abilities – although we're assuming that eventually the upgrade will trickle its way down to the other models as well.

Back in December, OpenAI mentioned a lack of updates and “unpredictable” behavior as reasons why users might be noticing subpar performance from ChatGPT, and it would seem that the work to try and get these issues resolved is still ongoing.

More thorough


ChatGPT is pushing forward on mobile too (Image credit: Future)

One of the tasks that GPT-4 Turbo can now complete “more thoroughly” is generating code, according to OpenAI. More complex tasks can also be completed from a single prompt, while the model will also be cheaper for users to work with.

Many of the other model upgrades mentioned in the OpenAI blog post are rather technical – but the takeaways are that these AI bots are getting smarter, more accurate, and more efficient. A lot of improvements are related to “embeddings”, the numerical representations that AI bots use to understand words and the context around them.
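To give a rough sense of what embeddings are: each word or passage gets turned into a list of numbers (a vector), and texts with similar meanings end up pointing in similar directions. A common way to compare them is cosine similarity. The sketch below uses made-up four-dimensional vectors purely for illustration – real embeddings from models like OpenAI's have hundreds or thousands of dimensions, and these particular values are invented, not output from any actual model.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction,
    0.0 means unrelated (perpendicular)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy embeddings (real ones are far larger and learned from data).
cat = [0.9, 0.1, 0.4, 0.0]
kitten = [0.85, 0.15, 0.5, 0.05]
invoice = [0.0, 0.8, 0.1, 0.9]

# Related concepts point in similar directions; unrelated ones don't.
cat_vs_kitten = cosine_similarity(cat, kitten)   # close to 1.0
cat_vs_invoice = cosine_similarity(cat, invoice)  # much smaller
```

Improving how these vectors capture words "and the context around them" is exactly the kind of under-the-hood upgrade that makes a bot seem smarter without any visible interface change.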

ChatGPT recently got its very own app store, where third-party developers can showcase their own custom-made bots (or GPTs). However, there are rules in place that ban certain types of chatbots – like virtual girlfriends.

It also appears that OpenAI is busy pushing ChatGPT forward on mobile, with the latest ChatGPT beta for Android offering the ability to load up the bot from any screen (much as you might do with Google Assistant or Siri).


OpenAI confirms ChatGPT has been getting ‘lazier’ – but a fix is coming

Have you recently felt that ChatGPT isn’t performing as well as it used to? If so, you're not alone, as numerous users have claimed the artificial intelligence (AI) chatbot has been on the decline – and ChatGPT developer OpenAI has just confirmed a possible reason why that might be the case.

In fact, OpenAI seemed to endorse the idea that ChatGPT was getting “lazier” on X (formerly Twitter). In the post, OpenAI explained it had heard users’ feedback and that the reason for ChatGPT getting “lazier” was that it hadn’t been updated since November 11 – an entire month.

While OpenAI said this lack of updates wasn’t “intentional,” it added that it was “looking into fixing it.” It also noted that “model behavior can be unpredictable,” perhaps hinting that the developer itself hadn’t noticed ChatGPT’s declining performance until users brought it to light.

Despite all that, OpenAI hasn’t given an indication of when the issue might be fixed. If you regularly use ChatGPT prompts and have noticed a downward trend in the tool’s abilities, you’ll just have to hang tight until an update gets released.

Temporary solutions

A laptop screen on a green background showing the ChatGPT logo

(Image credit: ChatGPT)

Underneath the post on X, OpenAI further clarified the issue. One user asked the developer how it's possible that ChatGPT could get lazier.

In response, OpenAI explained that “to be clear, the idea is not that the model has somehow changed itself since Nov 11th. It’s just that differences in model behavior can be subtle – only a subset of prompts may be degraded, and it may take a long time for customers and employees to notice and fix these patterns.”

Other comments suggested ways to restore ChatGPT to its former prowess, including using the phrase “take a deep breath” or telling the chatbot to “reason step-by-step.” These might serve as temporary solutions until OpenAI is able to fix the underlying issue.

The degradation of ChatGPT performance comes shortly after Google announced its own ChatGPT rival called Gemini. Yet despite flashy promises from the search giant, numerous reports have emerged claiming its abilities are less than stellar. Perhaps it's time for both OpenAI and Google to give their chatbots a Christmas break and work on some upgrades for 2024.


Popular AI art tool Dall-E gets a big upgrade from ChatGPT creator OpenAI

If you’ve ever messed around with AI tools online, chances are you’ve used Dall-E. OpenAI’s AI art generator is user-friendly and offers a free version, which is why we named it the best tool for beginners in our list of the best AI art generators.

You might’ve heard the name from Dall-E mini, a basic AI image generator made by Boris Dayma that enjoyed a decent amount of viral popularity back in 2021 thanks to its super-simple functionality and free access. But OpenAI’s version is more sophisticated – now more than ever, thanks to the Dall-E 3 update.

As reported by Reuters, OpenAI confirmed on September 20th that the new-and-improved Dall-E would be available to paying ChatGPT Plus and Enterprise subscribers in October (though an official release date has not been announced yet). An OpenAI spokesperson noted that “DALL-E 3 can translate nuanced requests into extremely detailed and accurate images”, hopefully signaling a boost in the tool’s graphical capabilities – something competitors Midjourney and Stable Diffusion arguably do better right now.

Another small step for AI

Although ChatGPT creator OpenAI has become embroiled in lawsuits over the use of human-created material for training its AI models, the Dall-E 3 upgrade actually does feel like a step in the right direction.

In addition to technical improvements to the art generation tool, the new version will also deliver a host of security and safeguarding features, some of which are arguably sorely needed for AI image production services.

Most prominent is a set of mitigations within the software that prevents Dall-E 3 from being used to generate pictures of real-world living public figures or art in the style of a living artist. Combined with new safeguards that will (hopefully) prevent the generation of violent, inappropriate, or otherwise harmful images, I can see Dall-E 3 setting the new benchmark for legality and morality in the generative AI space.

It’s an unpleasant topic, but there’s no denying the potential dangers of art theft, deepfake videos, and ‘revenge porn’ when it comes to AI art tools. OpenAI has also stated that creators will be able to opt out of having their work used to train future text-to-image tools, which will hopefully preserve some originality – so I’m going to be cautiously optimistic about this update, despite my previous warning about the dangers of AI.


Even OpenAI can’t tell the difference between original content and AI-generated content – and that’s worrying

OpenAI, the creator of the incredibly popular AI chatbot ChatGPT, has officially shut down the tool it had developed for detecting content created by AI rather than humans. ‘AI Classifier’ has been scrapped just six months after its launch – apparently due to a ‘low rate of accuracy’, says OpenAI in a blog post.

ChatGPT has exploded in popularity this year, worming its way into every aspect of our digital lives, with a slew of rival services and copycats. Of course, the flood of AI-generated content does bring up concerns from multiple groups surrounding inaccurate, inhuman content pervading our social media and newsfeeds.

Educators in particular are troubled by the different ways ChatGPT has been used to write essays and assignments that are passed off as original work. OpenAI’s classifier tool was designed to address these fears not just within education but in wider spheres like corporate workplaces, medical fields, and coding-intensive careers. The idea behind the tool was that it should be able to determine whether a piece of text was written by a human or an AI chatbot, in order to combat misinformation.

Plagiarism detection service Turnitin, often used by universities, recently integrated an ‘AI Detection Tool’ that has demonstrated a prominent fault: being wrong in both directions. Students and faculty have taken to Reddit to protest the inaccurate results, with students stating their own original work is being flagged as AI-generated content, and faculty complaining about AI work passing through these detectors unflagged.


It is an incredibly troubling thought: the idea that the makers of ChatGPT can no longer differentiate between what is a product of their own tool and what is not. If OpenAI can’t tell the difference, then what chance do we have? Is this the beginning of a misinformation flood, in which no one will ever be certain if what they read online is true? I don’t like to doomsay, but it’s certainly worrying.


There’s trouble in AI paradise as Microsoft and OpenAI butt heads

OpenAI warned Microsoft early this year about rushing into integrating GPT-4 (a more advanced but less ‘stable’ language model) into Bing without further training. Microsoft pushed ahead anyway. 

This led to a rush of unhinged and strange behaviour from the Bing AI tool. Now, a new report by the Wall Street Journal details how there is “conflict and confusion” between the companies and their fragile alliance.

It's worth keeping in mind that Microsoft doesn't own OpenAI outright (as it often does with up-and-coming companies), but instead holds a 49% stake in the startup – an arrangement that gave Microsoft early access to OpenAI’s ChatGPT and Dall-E to boost Bing’s search engine.

It’s a setup that benefits both parties: OpenAI gains a stable financial investment and servers for hosting, while Microsoft gets early access to the previously mentioned tools, forcing Google and others to scramble to catch up. The WSJ article describes this as an “open relationship” where Microsoft maintains significant influence without outright control. 

Retro illustration of a man and robot butting heads

(Image credit: studiostoks / Shutterstock)

Butting of heads

Microsoft's rush to incorporate GPT-4 into Bing search without the necessary further training has built resentment on both sides. 

According to the article, some Microsoft employees feel slighted by the fact that Microsoft's in-house AI projects are being overlooked in favour of OpenAI, which despite their partnership is free to work with Microsoft’s rivals. 

Apparently, there's also a feeling that OpenAI is not allowing some people at Microsoft full access to its tech. Some also feel that OpenAI's warning about not rushing into using its tech is hypocritical, as many feel OpenAI rushed out ChatGPT.

This has led to a situation where the pair are working together, and yet against each other at the same time. Time will tell whether that leads to healthy competition or a very messy breakup. 
