ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane at some point in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to how much randomness the model is allowed when choosing the next word of its response. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
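For developers who use OpenAI's models through the API rather than the ChatGPT website, temperature is an explicit parameter on every request. Here's a rough sketch using the official openai Python library – the model name and prompt are purely illustrative, and nothing here is tied to this particular incident:

```python
# Rough sketch: the same question asked at a low and a high temperature.
# Model name and prompt are illustrative; requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

question = [{"role": "user", "content": "Describe ChatGPT in one sentence."}]

# Low temperature: focused, predictable output
low = client.chat.completions.create(model="gpt-4", messages=question, temperature=0.2)

# High temperature: looser word choices, more creative (and occasionally weirder) output
high = client.chat.completions.create(model="gpt-4", messages=question, temperature=1.5)

print("temperature=0.2:", low.choices[0].message.content)
print("temperature=1.5:", high.choices[0].message.content)
```

Values near zero keep the model glued to its most likely next word; values towards the top of the range let it sample more freely, which is where the creativity (and the weirdness) comes from.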

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?


DeepMind and Meta staff plan to launch a new AI chatbot that could have the edge over ChatGPT and Bard

Since the explosion in popularity of large language model (LLM) chatbots like ChatGPT, Google Gemini, and Microsoft Copilot, many smaller companies have tried to wiggle their way into the scene. Reka, a new AI startup, is gearing up to take on artificial intelligence chatbot giants like Gemini (formerly known as Google Bard) and OpenAI’s ChatGPT – and it may have a fighting chance to actually do so. 

The company is spearheaded by Singaporean scientist Yi Tay, whose team has been working on Reka Flash, a multilingual language model trained in over 32 languages. Reka Flash boasts 21 billion parameters, with the company claiming it is competitive with Google Gemini Pro and OpenAI’s GPT-3.5 across multiple AI benchmarks. 

According to Tech in Asia, the company has also released a more compact version of the model called Reka Edge, which offers 7 billion parameters and is aimed at specific use cases like on-device deployment. It’s worth noting that ChatGPT and Google Gemini have significantly more parameters (approximately 175 billion and 137 billion respectively), but those bots have been around for longer, and there are benefits to more ‘compact’ AI models; Google, for example, has Gemini Nano, a model designed to run on edge devices like smartphones that uses just 1.8 billion parameters – so Reka Edge, at 7 billion, has it beat on that count.

So, who’s Yasa?

The model is available to the public in beta on the official Reka site. I’ve had a go at using it and can confirm that it's got a familiar ChatGPT-esque feel to the user interface and the way the bot responds. 

The bot introduced itself as Yasa, developed by Reka, and gave me an instant rundown of all the things it could do for me. It had the usual AI tasks down, like general knowledge, sharing jokes or stories, and solving problems.

Interestingly, Yasa noted that it can also assist in translation, and listed 28 languages it can swap between. While my understanding of written Hindi is rudimentary, I did ask Yasa to translate some words and phrases from English to Hindi and from Hindi to English. 

I was incredibly impressed not just by the accuracy of the translation, but by the way Yasa explained how it got there: it broke down each word in the phrase or sentence and translated it individually before giving me the complete sentence. The response time for each prompt, no matter how long, was also very quick. Considering that non-English-language prompts have proven limited in the past with other popular AI chatbots, it’s a solid showing – although it’s not the only multilingual bot out there.

Reka translating (Image credit: Future)

Reka AI Barbie (Image credit: Future)

I also tried to figure out how up to date the bot is with current events and general knowledge, and it appears to have been trained on information that predates the release of the Barbie movie. I know, a weird litmus test, but when I asked it to give me some facts about the pink-tinted Margot Robbie feature, it spoke about it as an ‘upcoming movie’ and gave me a release date of July 28, 2023. So we appear to have the same situation as with ChatGPT, whose knowledge was previously limited to world events before 2022.

Of all the ChatGPT alternatives I’ve tried since the AI boom, Reka (or should I say, Yasa) is probably the most immediately impressive. While other AI betas feel clunky and sometimes like poor-man’s knockoffs, Reka holds its own not just with its visually pleasing user interface and easy-to-use setup, but with its multilingual capabilities and helpful, less robotic personality.


ChatGPT is getting human-like memory and this might be the first big step toward General AI

ChatGPT is becoming more like your most trusted assistant, remembering not just what you've told it about yourself, your interests, and preferences, but applying those memories in future chats. It's a seemingly small change that may make the generative AI appear more human and, perhaps, pave the way for General AI, which is where an AI brain can operate more like the gray matter in your head.

OpenAI announced the limited test in a blog post on Tuesday, explaining that it's testing the ability of ChatGPT (in both the free version and ChatGPT Plus) to remember what you tell it across all chats. 

With this update, ChatGPT can remember things casually, just picking up interesting bits along the way – like a preference for peanut butter on cinnamon raisin bagels – or remember what you explicitly tell it to. 

The benefit of ChatGPT having a memory is that new conversations no longer start from scratch: a fresh prompt can carry implied context the AI already knows about. A ChatGPT with memory becomes more like a useful assistant who knows how you like your coffee in the morning or that you never want to schedule meetings before 10 AM.

In practice, OpenAI says that the memory will be applied to future prompts. If you tell ChatGPT that you have a three-year-old who loves giraffes, subsequent birthday card ideation chats might result in card ideas featuring a giraffe.

ChatGPT won't simply parrot back its recollections of your likes and interests, but will instead use that information to work more efficiently for you.

It can remember

Some might find an AI that can remember multiple conversations and use that information to help you a bit off-putting. That's probably why OpenAI is letting people easily opt out of the memories by using the “Temporary Chat” mode, which will seem like you're introducing a bit of amnesia to ChatGPT.

Similar to how you can remove Internet history from your browser, ChatGPT will let you go into settings to remove memories (I like to think of this as targeted brain surgery) or you can conversationally tell ChatGPT to forget something.

For now, this is a test among some free and ChatGPT Plus users, but OpenAI has offered no timeline for when it will roll out ChatGPT memories to all users. I didn't find the feature live in either my free ChatGPT account or my Plus subscription.

OpenAI is also adding Memory capabilities to its new app-like GPTs, which means developers can build the capability into bespoke chatty AIs. Those developers will not be able to access memories stored within the GPT.

Too human?

An AI with long-term memory is a dicier proposition than one that has a transient, at best, recall of previous conversations. There are, naturally, privacy implications. If ChatGPT is randomly memorizing what it considers interesting or relevant bits about you, do you have to worry about your details appearing in someone else's ChatGPT conversations? Probably not. OpenAI promises that memories will be excluded from ChatGPT's training data.

OpenAI adds in its blog, “We're taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details – unless you explicitly ask it to.” That might help but ChatGPT must understand the difference between useful and sensitive info, a line that might not always be clear.

This update could ultimately have significant implications. ChatGPT can already seem somewhat human in prompt-driven conversations, but its hallucinations and fuzzy recall – sometimes even of how the conversation started – make it clear that more than a few billion neurons still separate us.

Memories, especially information delivered casually back to you throughout ChatGPT conversations, could change that perception. Our relationships with other people are driven in large part by our shared experiences and memories of them. We use them to craft our interactions and discussions. It's how we connect. Surely, we'll end up feeling more connected to a ChatGPT that can remember our distaste of spicy food and our love of all things Rocky Balboa.


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website on desktop.   

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.
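Gemini Pro is also the model Google currently offers to developers through its generative AI SDK, which gives a sense of where the free tier sits. Below is a minimal sketch of calling it from Python – it assumes the google-generativeai package, and the API key and prompt are placeholders rather than anything from Google's consumer apps:

```python
# Minimal sketch of calling the Gemini Pro model via Google's generative AI SDK.
# Assumes the google-generativeai package; the API key and prompt are placeholders.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("Summarize this email in two sentences: ...")

print(response.text)
```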

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, stickier too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


ChatGPT could become a smart personal assistant helping with everything from work to vacation planning

Now that ChatGPT has had a go at composing poetry, writing emails, and coding apps, it's turning its attention to more complex tasks and real-world applications, according to a new report – essentially, being able to do a lot of your computing for you.

This comes from The Information (via Android Authority), which says that ChatGPT developer OpenAI is working on “agent software” that will act almost like a personal assistant. It would be able to carry out clicks and key presses as it works inside applications from web browsers to spreadsheets.

We've seen something similar with the Rabbit R1, although that device hasn't yet shipped. You teach an AI how to calculate a figure in a spreadsheet, or format a document, or edit an image, and then it can do the job for you in the future.

Another type of agent in development will take on online tasks, according to the sources speaking to The Information: These agents are going to be able to research topics for you on the web, or take care of hotel and flight bookings, for example. The idea is to create a “supersmart personal assistant” that anyone can use.

Our AI agent future?

The Google Gemini logo on a laptop screen that's on an orange background

Google is continuing work on its own AI (Image credit: Google)

As the report acknowledges, this will certainly raise one or two concerns about letting automated bots loose on people's personal computers: OpenAI is going to have to do a lot of work to reassure users that its AI agents are safe and secure.

While many of us will be used to deploying macros to automate tasks, or asking Google Assistant or Siri to do something for us, this is another level up. Your boss isn't likely to be too impressed if you blame a miscalculation in the next quarter's financial forecast on the AI agent you hired to do the job.

It also remains to be seen just how much automation people want when it comes to these tasks: Booking vacations involves a lot of decisions, from the position of your seats on an airplane to having breakfast included, which AI would have to make on your behalf.

There's no timescale on any of this, but it sounds like OpenAI is working hard to get its agents ready as soon as possible. Google just announced a major upgrade to its own AI tools, while Apple is planning to reveal its own take on generative AI at some point later this year, quite possibly with iOS 18.


OpenAI quietly slips in update for ChatGPT that allows users to tag their own custom-crafted chatbots

OpenAI is continuing to cement its status as the leading force in generative AI, adding a nifty little feature with little fanfare: the ability to tag a custom-created GPT bot with an ‘@’ in the prompt. 

OpenAI introduced custom ChatGPT-powered chatbots in November 2023 to help users have specific types of conversations. These were named GPTs, and customers who subscribed to OpenAI’s premium ChatGPT Plus service were able to build their own GPT-powered chatbot for their own purposes using OpenAI’s easy-to-use GPT-building interface. Users could then help train and improve their own GPTs over time, making them “smarter” and better at the tasks asked of them. 

Also, earlier this year, OpenAI debuted the GPT Store, which allows users to create their own GPT bots for specific categories like education, productivity, and “just for fun,” and then make them available to other users. Once they’re on the GPT Store, these AI chatbots become searchable, can compete and rank in leaderboards against GPTs created by other users, and will eventually even be able to earn money for their creators. 

Surprising new feature

It seems OpenAI has now made it easier to switch to a custom GPT chatbot, with an eagle-eyed ChatGPT fan, @danshipper, spotting that you can summon a GPT with an ‘@’ while chatting with ChatGPT.


Cybernews suggests that it’ll make switching between these different custom GPT personas more fluid and easier to use. OpenAI hasn’t publicized this new development yet, and it seems like this change specifically applies to ChatGPT Plus subscribers. 

This somewhat mimics existing functionality in apps like Discord and Slack, and could prove popular with ChatGPT users who want to build their own personal chatbot ecosystems populated by custom GPT chatbots that can be interacted with in a similar manner to those apps.

However, it’s interesting that OpenAI hasn’t announced or even mentioned this update, leaving users to discover it by themselves. It’s a distinctive approach to introducing new features for sure. 


Has ChatGPT been getting a little lazy for you? OpenAI has just released a fix

It would seem reports of 'laziness' on the part of the ChatGPT AI bot were pretty accurate, as its developer OpenAI just announced a fix for the problem – which should mean the bot takes fewer shortcuts and is less likely to fail halfway through trying to do something.

The latest update to the ChatGPT code is “intended to reduce cases of 'laziness' where the model doesn’t complete a task” according to OpenAI. However, it's worth noting that this only applies to the GPT-4 Turbo model that's still in a limited preview.

If you're a free user on GPT-3.5 or a paying user on GPT-4, you might still notice a few problems in terms of ChatGPT's abilities – although we're assuming that eventually the upgrade will trickle its way down to the other models as well.

Back in December, OpenAI mentioned a lack of updates and “unpredictable” behavior as reasons why users might be noticing subpar performance from ChatGPT, and it would seem that the work to try and get these issues resolved is still ongoing.

More thorough

ChatGPT voice chat

ChatGPT is pushing forward on mobile too (Image credit: Future)

One of the tasks that GPT-4 Turbo can now complete “more thoroughly” is generating code, according to OpenAI. More complex tasks can also be completed from a single prompt, while the model will also be cheaper for users to work with.

Many of the other model upgrades mentioned in the OpenAI blog post are rather technical – but the takeaways are that these AI bots are getting smarter, more accurate, and more efficient. A lot of improvements are related to “embeddings”, the numerical representations that AI bots use to understand words and the context around them.
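If “embeddings” sounds abstract, this is roughly what requesting one looks like through OpenAI's API – a minimal sketch, with the model name and sample sentence chosen purely for illustration:

```python
# Minimal sketch: turning a sentence into an embedding via OpenAI's API.
# Model name and input text are illustrative; requires an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="ChatGPT has had a busy few months.",
)

vector = response.data[0].embedding   # a long list of floats representing the sentence
print(len(vector))                    # the embedding's dimensionality
print(vector[:5])                     # a peek at the first few values
```

Sentences with similar meanings end up with similar vectors, which is how the bots judge context rather than just matching keywords.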

ChatGPT recently got its very own app store, where third-party developers can showcase their own custom-made bots (or GPTs). However, there are rules in place that ban certain types of chatbots – like virtual girlfriends.

It also appears that OpenAI is busy pushing ChatGPT forward on mobile, with the latest ChatGPT beta for Android offering the ability to load up the bot from any screen (much as you might do with Google Assistant or Siri).


ChatGPT steps up its plan to become your default voice assistant on Android

A recent ChatGPT beta is giving a select group of users the ability to turn the AI into their device’s new default voice assistant on Android.

This information comes from industry insider Mishaal Rahman on X (the platform formerly known as Twitter), who posted a video of himself trying out the feature live. According to the post, users can add a shortcut to ChatGPT Assistant, as it’s referred to, directly to an Android phone’s Quick Settings panel. Tapping the ChatGPT entry there causes a new UI overlay to appear on-screen, consisting of a plain white circle near the bottom of the display. From there, you verbally give it a prompt, and after several seconds, the assistant responds with an answer. 

See more

The clip shows it does take the AI some time to come up with a response – about 15 seconds. Throughout this time, the white circle will display a bubbling animation to indicate it’s generating a reply. When talking back, the animation turns more cloud-like. You can also interrupt ChatGPT at any time just by tapping the screen. Doing so causes the circle to turn black.

Setting up

The full onboarding process for the feature is unknown, although 9to5Google claims in its report that you will need to pick a voice when you launch it for the first time. If you like what you hear, you can stick with a particular voice, or go back a step and exchange it for another. Previews of each voice can be found on OpenAI’s website too; they consist of three male and two female voices. Once all that is settled, the assistant will subsequently launch as normal with the white circle near the bottom.

To try out this update, you will need a subscription to ChatGPT Plus, which costs $20 a month. Next, install either ChatGPT for Android version 1.2024.017 or .018, whichever is available to you. Go to the Beta Features section in ChatGPT’s Settings menu and it should be there, ready to be activated. As stated earlier, only a select group of people will gain access – it's not a guarantee.

Future default

Apparently, the assistant is present on earlier builds. 9to5Google states the patch is available on ChatGPT beta version 1.2024.010 with limited functionality: it introduces the Quick Settings tile, but not the revamped UI.

Rahman in his post says no one can set ChatGPT as their default assistant at the moment. However, lines of code found in a ChatGPT patch from early January suggest this will be possible in the future. We reached out to OpenAI asking if there are plans to expand the beta’s availability. This story will be updated at a later time.

Be sure to check out TechRadar's list of the best ChatGPT extensions for Chrome that everyone should use. There are four in total.


ChatGPT will get video-creation powers in a future version – and the internet isn’t ready for it

The web's video misinformation problem is set to get a lot worse before it gets better, with OpenAI CEO Sam Altman going on the record to say that video-creation capabilities are coming to ChatGPT within the next year or two.

Speaking to Bill Gates on the Unconfuse Me podcast (via Tom's Guide), Altman pointed to multimodality – the ability to work across text, images, audio and “eventually video” – as a key upgrade for ChatGPT and its models over the next two years.

While the OpenAI boss didn't go into too much detail about how this is going to work or what it would look like, it will no doubt work along similar lines to the image-creation capabilities that ChatGPT (via DALL-E) already offers: just type a few lines as a prompt, and you get back an AI-generated picture based on that description.

Once we get to the stage where you can ask for any kind of video you like, featuring any subject or topic you like, we can expect to see a flood of deepfake videos hit the web – some made for fun and for creative purposes, but many intended to spread misinformation and to scam those who view them.

The rise of the deepfakes

Deepfake videos are already a problem of course – with AI-generated videos of UK Prime Minister Rishi Sunak popping up on Facebook just this week – but it looks as though the problem is about to get significantly worse.

Adding video-creation capabilities to a widely accessible and simple-to-use tool like ChatGPT will mean it gets easier than ever to churn out fake video content, and that's a major worry when it comes to separating fact from fiction.

The US will be going to the polls later this year, and a general election in the UK is also likely to happen at some point in 2024. With deepfake videos purporting to show politicians saying something they never actually said already circulating, there's a real danger of false information spreading online very quickly.

With AI-generated content becoming more and more difficult to spot, the best way of knowing who, and what, to trust is to stick to well-known and reputable publications online for your news sources – so not something that's been reposted by a family member on Facebook, or pasted from an unknown source on the platform formerly known as Twitter.


Copilot Pro leak suggests Microsoft will soon make you pay for its ChatGPT Plus features

Microsoft has spent billions on integrating ChatGPT into its Copilot AI assistant for Edge, Bing and Windows 11 – and a new code leak suggests it could be planning to claw back some of that investment very soon.

As spotted by Android Authority, some new Edge browser updates for Android contain several code references to a 'Copilot Pro' tier that isn't yet available. Right now, Copilot (previously called Bing Chat) is completely free and, as Tom's Guide recently noticed, even gives some access to the latest ChatGPT model, GPT-4 Turbo.

But those days could be numbered if Copilot Pro does become a reality. The code contains references to a “pay wall upsell” option, which suggests that Microsoft is planning its equivalent of ChatGPT Plus. The latter currently costs $20 / £16 / AU$28 per month.

Those strings of code discovered in Edge also give us hints of what kind of features a Copilot Pro subscription might give us. These include access to the newest AI models (in other words, ChatGPT's GPT-4 Turbo), priority server access, and “high-quality” image generation.

While it seems likely that a free Copilot tier will continue to be available, the days of Microsoft giving us quite so many free AI perks could be drawing to a close.

Plus points

Copilot in Windows

(Image credit: Microsoft)

The arrival of a Copilot Pro subscription has always been a matter of when rather than if, when you consider how much it costs to run an AI assistant on the scale of Microsoft Copilot. In the case of ChatGPT, some estimates suggest the computer hardware costs could be as high as $700,000 a day.

This is why ChatGPT launched its Plus subscription in February 2023 – and, a year on, it looks like Microsoft Copilot Pro could soon be following in that paid model's footsteps. 

Unfortunately, that could mean the free version of Copilot becoming a bit dumber, as that version currently gives you access to ChatGPT's latest models and also Dall-E 3 image generation. 

Hopefully, some of Copilot's current restrictions, like being limited to 300 conversations per day, will also be eased in the Pro version. While we don't yet know when this Copilot Pro tier might launch, it looks like we could find out very soon.
