ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane sometime in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even reported that, when asked about a coding problem, ChatGPT replied with an enigmatic statement ending with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and we’re likely to see more of these odd glitches as AI is rolled out across more facets of society. While the current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – the Air Canada case feels worryingly like an omen of things to come, and may set a genuine precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

Temperature isn’t a measurement of heat here, to be clear: when discussing chatbots, it’s a setting that controls how random or predictable the model’s word choices are. A low temperature gives you direct, predictable answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
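To make that concrete, here’s a minimal, illustrative sketch (not OpenAI’s actual code, and with made-up numbers and words) of how a temperature value reshapes the probabilities a language model samples its next word from.

```python
import numpy as np

# Hypothetical raw scores (logits) a model might assign to candidate next words.
logits = np.array([3.2, 2.9, 0.5, -1.0])
words = ["paris", "france", "banana", "wall"]

def sample_probabilities(logits, temperature):
    """Convert logits into sampling probabilities at a given temperature."""
    scaled = logits / temperature          # low temperature sharpens, high temperature flattens
    exp = np.exp(scaled - scaled.max())    # subtract the max for numerical stability
    return exp / exp.sum()                 # softmax: probabilities that sum to 1

for t in (0.2, 1.0, 2.0):
    probs = sample_probabilities(logits, t)
    print(f"temperature={t}: " + ", ".join(f"{w}={p:.2f}" for w, p in zip(words, probs)))
```

At a temperature of 0.2, nearly all of the probability piles onto the single most likely word; at 2.0, the long-shot options get a realistic chance of being picked, which is where the weirder responses tend to come from.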

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?

Microsoft is making video calls creepier in Windows 11

Windows 11 is getting a new AI-powered eye contact feature for video calls, but rather than making these calls feel more natural, it actually looks pretty creepy.

Announced at its recent event on the future of hybrid work, the new feature uses artificial intelligence to make it appear that your eyes are looking directly at the person you’re video calling.

Most webcams, including ones built into laptops, sit above the screen, but when we’re on video calls, we’re usually looking at the video of the person we’re talking to, instead of looking directly at the camera. This leads to callers appearing to look down when talking, rather than making eye contact, as most of us would when talking to people in person.

Microsoft’s attempt to fix this is to adjust the video caller’s pupils so they appear to face the screen, in a bid to make video calls “more human,” as Windows chief Panos Panay puts it. It’s certainly an interesting idea, but from the results we’ve seen so far, the effect appears more unnerving than the company intends.

GIF video showing the eye tracking feature

(Image credit: Microsoft)

Analysis: AI has its limits – and this is one of them

In the video clip Microsoft showed, a woman speaks on a video call, and her pupils do indeed make it appear that she’s looking at the screen. However, there are glitches which, subtle as they are, make it clear that something isn’t quite right.

It’s a classic example of the ‘uncanny valley’, where an attempt to synthesise an artificial human causes a sense of uneasiness in real humans, often because of imperfections which tell us that what we’re looking at is fake.

In fact, the uncanny valley can be more pronounced in more realistic attempts, because we subconsciously pick up on more of the small details that are off, amplifying the effect. That appears to be what has happened here.

By trying to make video calls in Windows 11 “more human,” Microsoft has actually done the opposite, and once you notice the little issues and glitches, you’re unable to see past the artificiality of it all. Ironically, the new feature seems to be more distracting than a caller simply not looking at the camera.

Thankfully, the feature will likely be optional, and future updates may make it look more realistic, but at the moment we can’t imagine many people using it. It shows that while AI has many fantastic uses, it also has its limits.

  • Check out our pick of the best laptops that you can use for remote video calls

Via TechCrunch
