ChatGPT’s newest GPT-4 upgrade makes it smarter and more conversational

AI just keeps getting smarter: OpenAI has announced another significant upgrade for ChatGPT, specifically to the GPT-4 Turbo model available to those paying for ChatGPT Plus, Team, or Enterprise.

OpenAI says ChatGPT will now be better at writing, math, logical reasoning, and coding – and it has the charts to prove it. The release is labeled with the date April 9, and it replaces the GPT-4 Turbo model that was pushed out on January 25.

Judging by the graphs provided, the biggest jumps in capabilities are in mathematics and GPQA, or Graduate-Level Google-Proof Q&A – a benchmark based on multiple-choice questions in various scientific fields.

According to OpenAI, the new and improved ChatGPT is “more direct” and “less verbose” too, and will use “more conversational language”. All in all, a bit more human-like, then. Eventually, the improvements should trickle down to non-paying users too.

More up to date


In an example given by OpenAI, AI-generated text for an SMS intended to RSVP to a dinner invite is half the length and much more to the point – with some of the less essential words and sentences chopped out for simplicity.

Another important upgrade is that the training data ChatGPT is based on now goes all the way up to December 2023, rather than April 2023 as with the previous model, which should help with topical questions and answers.

It's difficult to test AI chatbots from version to version, but in our own experiments with ChatGPT and GPT-4 Turbo we found it does now know about more recent events – like the iPhone 15 launch. As ChatGPT has never held or used an iPhone, though, it's nowhere near being able to offer the information you'd get from our iPhone 15 review.

The momentum behind AI shows no signs of slowing down just yet: in the last week alone Meta has promised human-like cognition from upcoming models, while Google has made its impressive AI photo-editing tools available to more users.


TechRadar – All the latest technology news


Microsoft is upgrading its Copilot with GPT-4 Turbo, even for free users

Microsoft revealed that its Copilot AI assistant will be getting a huge upgrade: it will integrate OpenAI’s GPT-4 Turbo language model. The best part is that all users will have full access to GPT-4 Turbo, including those in the free tier.

According to a status update on Twitter / X, Pro tier users will have the option to choose the older standard GPT-4 model via a built-in toggle, which is useful for specialized cases. It also gives the Pro tier added value without taking anything away from free tier users.

GPT-4 Turbo is the updated version of the base GPT-4 and is well-known for speed, accuracy, and complex long-form task management. The update brings faster code generation, more insightful suggestions, and improved overall responsiveness, translating to better productivity and smoother coding.

Copilot is really increasing its value

It’s good to see free-tier users getting meaningful updates to their Copilot AI assistant already – a good sign that Microsoft will ensure that those without deep enough pockets for a paid premium subscription can still benefit from the service. This is especially important since the tech giant needs to win over more people to Windows 11, which is where the full version of Copilot will live.

However, the Pro subscribers aren’t left in the dark either, as they get more flexibility in the AI assistant when it comes to language model upgrades. Not to mention other features and tools that have been added so far.

Microsoft just announced a Copilot Chatbot builder, which allows Pro users to create custom task-specific chatbots based on their job role. What makes this so interesting is that it was built without any input from OpenAI – possibly a sign that Microsoft wants to distance itself from the popular AI tool amid increased scrutiny and lawsuits. This is odd considering that the latest GPT update was added across the Copilot board.

There’s also a feature that lets the Copilot bot directly read files on your PC, then provide a summary, locate specific data, or search the internet for additional information. However, it’s not a privacy nightmare as you have to manually drag and drop the file into the Copilot chat box (or select the ‘Add a file’ option), and then make a ‘summarize’ request of the AI.


Microsoft just launched a free Copilot app for Android, powered by GPT-4

If you're keen to play around with some generative AI tech on your phone, you now have another option: Microsoft has launched an Android app for its Copilot chatbot, and like Copilot in Windows 11, it's free to use and powered by GPT-4 and DALL-E 3.

As spotted by @techosarusrex (via Neowin), the Copilot for Android app is available now, and appears to have arrived on December 19. It's free to use and you don't even need to sign into your Microsoft account – but if you don't sign in, you are limited in terms of the number of prompts you can input and the length of the answers.

In a sense, this app isn't particularly new, because it just replicates the AI functionality that's already available in Bing for Android. However, it cuts out all the extra Bing features for web search, news, weather, and so on.

There's no word yet on a dedicated Copilot for iOS app, so if you're using an iPhone you'll have to stick with Bing for iOS if you need some AI assistance. Microsoft hasn't yet said anything official about its new Android app.

Text and images

The functionality inside the new app is going to be familiar to anyone who has used Copilot or Bing AI anywhere else. Microsoft has been busy adding the AI everywhere, and has recently integrated it into Windows 11 too.

You can ask direct questions like you would with a web search, get complex topics explained in simple terms, have Copilot generate new text on any kind of subject, and much more. The app can work with text, image and voice prompts too.

Based on our testing of the app, it seems you get five questions or searches per day for free if you don't want to sign in. If you do tell Microsoft who you are, that limit is lifted, and signing in also gives you access to image generation capabilities.

With both Apple's Siri and Google Assistant set to get major AI boosts in the near future, Microsoft won't want to be left behind – and the introduction of a separate Copilot app could help position it as a standalone digital assistant that works anywhere.


AI might take a ‘winter break’ as GPT-4 Turbo apparently learns from us to wind down for the Holidays

It seems that GPT-4 Turbo – the most recent incarnation of the large language model (LLM) from OpenAI – winds down for the winter, just as many people are doing as December rolls onwards.

We all get those end-of-year Holiday season chill vibes (probably) and indeed that appears to be why GPT-4 Turbo – which Microsoft’s Copilot AI will soon be upgraded to – is acting in this manner.

As Wccftech highlighted, the interesting observation on the AI’s behavior was made by an LLM enthusiast, Rob Lynch, on X (formerly Twitter).


The claim is that GPT-4 Turbo produces shorter responses – to a statistically significant extent – when the AI believes that it’s December, as opposed to May (with the testing done by changing the date in the system prompt).
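The experiment described above is simple to sketch: keep the user prompt fixed, inject a different date into the system prompt, sample many completions under each condition, and compare the response lengths. Here is a minimal Python illustration of that comparison logic – the prompt wording, placeholder data, and function names are our own assumptions, not Lynch's actual test harness:

```python
import statistics

def build_system_prompt(date_string: str) -> str:
    """System prompt with an injected date, as in the experiment described above."""
    return f"You are a helpful assistant. The current date is {date_string}."

def mean_length(responses: list[str]) -> float:
    """Average character length of a batch of sampled completions."""
    return statistics.mean(len(r) for r in responses)

# In the real test, each list would hold many completions sampled from the
# model (e.g. via the OpenAI chat API) under the corresponding system prompt.
# The strings below are placeholders standing in for those samples.
may_responses = ["a long, detailed walkthrough of the whole task ..."] * 10
december_responses = ["a shorter reply"] * 10

print(build_system_prompt("May 2023"))
print(mean_length(may_responses), mean_length(december_responses))
```

A real run would also need a statistical significance test (Lynch reported the difference as statistically significant) and enough samples per condition to rule out ordinary run-to-run variance.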

So the tentative conclusion is that GPT-4 Turbo appears to learn this behavior from us, an idea advanced by Ethan Mollick (an Associate Professor at the Wharton School of the University of Pennsylvania who specializes in AI).


Apparently GPT-4 Turbo is about 5% less productive if the AI thinks it’s the Holiday season. 


Analysis: Winter break hypothesis

This is known as the ‘AI winter break hypothesis’ and it’s an area that is worth exploring further.

It goes to show how an AI can pick up unintended influences that most of us wouldn’t dream of considering – although some researchers clearly did notice it, consider it, and then test it. Still, there’s a whole lot of worry around these kinds of unexpected developments.

As AI progresses, its influences, and the direction that the tech takes itself in, need careful watching over, hence all the talk of safeguards for AI being vital.

We’re rushing ahead with developing AI – or rather, the likes of OpenAI (GPT), Microsoft (Copilot), and Google (Bard) certainly are – caught up in a tech arms race, with most of the focus on driving progress as hard as possible, with safeguards being more of an afterthought. And there’s an obvious danger therein which one word sums up nicely: Skynet.

At any rate, regarding this specific experiment, it’s just one piece of evidence that the winter break theory is true for GPT-4 Turbo, and Lynch has urged others to get in touch if they can reproduce the results – and we do have one report of a successful reproduction so far. Still, that’s not enough for a concrete conclusion yet – watch this space, we guess.

As mentioned above, Microsoft is currently upgrading its Copilot AI from GPT-4 to GPT-4 Turbo, which has been advanced in terms of being more accurate and offering higher quality responses in general. Google, meanwhile, is far from standing still with its rival Bard AI, which is powered by its new LLM, Gemini.


Microsoft’s AI tinkering continues with powerful new GPT-4 Turbo upgrade for Copilot in Windows 11

Bing AI, which Microsoft recently renamed from Bing Chat to Copilot – yes, even the web-based version is now officially called Copilot, just to confuse everyone a bit more – should get GPT-4 Turbo soon enough, but there are still issues to resolve around the implementation.

Currently, Bing AI runs GPT-4, but GPT-4 Turbo will allow for various benefits including more accurate responses to queries and other important advancements.

We found out more about how progress was coming with the move to GPT-4 Turbo thanks to an exchange on X (formerly Twitter) between a Bing AI user and Mikhail Parakhin, Microsoft’s head of Advertising and Web Services.

As MS Power User spotted, Ricardo, a denizen of X, noted that they just got access to Bing’s redesigned layout and plug-ins, and asked: “Does Bing now use GPT-4 Turbo?”

As you can see in the below tweet, Parakhin responded to say that GPT-4 Turbo is not yet working in Copilot, as a few kinks still need to be ironed out.


Of course, as well as Copilot on the web (formerly Bing Chat), this enhancement will also come to Copilot in Windows 11, too (which is essentially Bing AI – just with bells and whistles added in terms of controls for Windows and manipulating settings).


Analysis: Turbo mode

We’re taking the comment that a ‘few’ kinks are still to be resolved as a suggestion that much of the work around implementing GPT-4 Turbo has been carried out. Meaning that GPT-4 Turbo could soon arrive in Copilot, or we can certainly keep our fingers crossed that this is the case.

Expect it to bring in more accurate and relevant responses to queries as noted, and it’ll be faster too (as the name suggests). As Microsoft observes, it “has the latest training data with knowledge up to April 2023” – though it’s still in preview. OpenAI only announced GPT-4 Turbo earlier this month, and said that it’s also going to be cheaper to run (for developers paying for GPT-4, that is).

In theory, it should represent a sizeable step forward for Bing AI, and that’s something to look forward to hopefully in the near future.


Meta takes aim at GPT-4 for its next AI model

Meta is planning to match, if not surpass, the powerful GPT-4 model developed by OpenAI with its own sophisticated artificial intelligence bot. The company plans to start training the large language model (LLM) early next year, and likely hopes it will take the number one spot in the AI game.

According to the Wall Street Journal, Meta has been buying up Nvidia H100 AI training chips and strengthening internal infrastructure to ensure that this time around, Meta won’t have to rely on Microsoft’s Azure cloud platform to train its new chatbot. 

The Verge notes that there’s already a group within the company that was put together earlier in the year to begin work building the model, with the apparent goal being to quickly create a tool that can closely emulate human expressions. 

Is this what we want? And do companies care?

Back in June, a leak suggested that a new Instagram feature would have chatbots integrated into the platform that could answer questions, give advice, and help users write messages. Interestingly, users would also be able to choose from “30 AI personalities and find which one [they] like best”. 

It seems this leak might actually come to fruition if Meta is putting in this much time and effort to replicate human expressiveness. Of course, the company will probably look to Snapchat AI for a comprehensive look at what not to do when squeezing AI chatbots into its apps, hopefully skipping the part where Snapchat’s AI bot got bullied and gave users some pretty disturbing advice.

Overall, the AI scramble carries on as big companies continue to climb to the summit of a mysterious, unexplored mountain. Meta has made a point of saying the potential new LLM will remain free for other companies to base their own AI tools on, a net positive in my book. We’ll just have to wait until next year to see exactly what’s in store.


Duolingo’s new GPT-4 AI will happily explain why your Spanish is wrong

Duolingo is launching a new virtual tutor that aims to replicate real-world scenarios to help students learn better. And it’s all powered by the recently released GPT-4 AI model.

Making its home in the new Duolingo Max subscription tier, the tutor consists of two features: Explain My Answer and Roleplay. The former, as its name suggests, lets users who are confused by something in Duolingo's initial response ask the chatbot Duo for a detailed explanation of why their answer was right or wrong. In an example video, the AI explains why certain Spanish verbs must be conjugated a particular way given the context of the sentence.

Duo, however, isn't available on every language exercise, just certain ones. On those, an Explain My Answer button will appear at the bottom of the screen after you attempt an exercise.

Roleplay, on the other hand, allows users to engage in a realistic conversation with the AI so they can practice their language skills. According to the post, no two chats will be exactly the same. In one instance, you could be talking to a “waiter” as you order coffee at a French café or discussing vacation plans in Spanish with a “friend.” And at the end of every Roleplay, Duo will give you some feedback based “on the accuracy and complexity of [your] responses, as well as tips for future conversations.”

Limited release

Do be aware that the GPT-4 AI behind Duo is not perfect. For the new release, research laboratory OpenAI took the time to improve on GPT-3’s chat abilities so the model can produce more natural-sounding text, closer to how people normally speak – at least in English. GPT-4 can create language guides, such as English mnemonics for Spanish words. However, as seen on Twitter, those mnemonic guides can be pretty hilarious, and not always in a good way. Duolingo admits its virtual tutor will make some mistakes, so the company is asking users to give the AI feedback, which you can do by selecting either the “thumbs-up” or “thumbs-down” emoji at the end of every Explain My Answer session.

Currently, Duolingo Max is seeing a limited roll-out. The AI will only be available in either Spanish or French for English speakers on iOS, but there are plans to “expand to more courses, language interfaces, and platforms in the coming months”, according to a company representative.

To subscribe to the tier, you have two options: you can pay $29.99 a month for Duolingo Max or $167.99 for the whole year. Broken down, the yearly option comes to $13.99 a month. Additionally, you also get every feature under Super Duolingo, which includes “unlimited hearts [for lessons], no ads, and [a] personalized review through the Practice Hub.”

While we have you, be sure to check out TechRadar’s list of the best AI writers for 2023 if you need content done fast. 


GPT-4 is bringing a massive upgrade to ChatGPT

OpenAI has officially announced GPT-4 – the latest version of its incredibly popular large language model powering artificial intelligence (AI) chatbots (among other cool things).

If you’ve heard the hype about ChatGPT (perhaps at an incredibly trendy party or a work meeting), then you may have a passing familiarity with GPT-3 (and GPT-3.5, a more recent improved version). GPT is the acronym for Generative Pre-trained Transformer, a machine learning technology that uses neural networks to bounce around raw input information tidbits like ping pong balls and turn them into something comprehensible and convincing to human beings. OpenAI claims that GPT-4 is its “most advanced AI system” that has been “trained using human feedback, to produce even safer, more useful output in natural language and code.”

GPT-3 and GPT-3.5 are large language models (LLM), a type of machine learning model, from the AI research lab OpenAI and they are the technology that ChatGPT is built on. If you've been following recent developments in the AI chatbot arena, you probably haven’t missed the excitement about this technology and the explosive popularity of ChatGPT. Now, the successor to this technology, and possibly to ChatGPT itself, has been released.

Cut to the chase

  • What is it? GPT-4 is the latest version of the large language model that’s used in popular AI chatbots
  • When is it out? It was officially announced March 14, 2023
  • How much is it? It’s free to try out, and there are subscription tiers as well

When will ChatGPT-4 be released?

GPT-4 was officially revealed on March 14, although it didn’t come as too much of a surprise: Microsoft Germany CTO Andreas Braun, speaking at the AI in Focus – Digital Kickoff event, had let slip that the release of GPT-4 was imminent.

It had been previously speculated that GPT-4 would be multimodal, which Braun also confirmed. GPT-3 is already one of the most impressive natural language processing models (NLP models), models built with the aim of producing human-like speech, in history. 

GPT-4 will be the most ambitious NLP model we have seen yet, as it is expected to be the largest language model built to date.

ChatGPT is about to get stronger. (Image credit: Shutterstock)

What is the difference between GPT-3 and GPT-4?

The type of input ChatGPT (GPT-3 and GPT-3.5) processes is plain text, and the output it can produce is natural language text and code. GPT-4’s multimodality means that you may be able to enter different kinds of input – like video, sound (e.g. speech), images, and text. Likewise on the output end, these multimodal faculties may allow for the generation of video, audio, and other types of content. Inputting and outputting both text and visual content could provide a huge boost in the power and capability of AI chatbots relying on GPT-4.

Furthermore, just as GPT-3.5 improved on GPT-3 by being more fine-tuned for natural chat, for processing and outputting code, and for traditional completion tasks, GPT-4 should improve on GPT-3.5’s understanding. One of GPT-3/GPT-3.5’s main strengths is that they are trained on an immense amount of text data sourced from across the internet.

Bing search and ChatGPT. (Image credit: Rokas Tenys via Shutterstock)

What can GPT-4 do?

GPT-4 is trained on a diverse spectrum of multimodal information. This means that, in theory, it will be able to understand and produce language that is more likely to be accurate and relevant to what is being asked of it. This marks another step forward in the GPT series’ ability to understand and interpret not just input data, but also the context in which it appears. Additionally, GPT-4 will have an increased capacity to perform multiple tasks at once.

OpenAI also claims that GPT-4 is 40% more likely to provide factual responses, which is encouraging to learn, since companies like Microsoft plan to use GPT-4 in search engines and other tools we rely on for factual information. OpenAI has also said that it is 82% less likely to respond to requests for ‘disallowed’ content.

Safety is a big feature with GPT-4, with OpenAI working for over six months to ensure it is safe. They did this through an improved monitoring framework, and by working with experts in a variety of sensitive fields, such as medicine and geopolitics, to ensure the replies it gives are accurate and safe.

These new features promise greater ability and range across a wider variety of tasks, greater efficiency with processing resources, the ability to complete multiple tasks simultaneously, and the potential for greater accuracy – a key concern among current AI-bot and search engine engineers.

How GPT-4 will be presented is yet to be confirmed as there is still a great deal that stands to be revealed by OpenAI. We do know, however, that Microsoft has exclusive rights to OpenAI’s GPT-3 language model technology and has already begun the full roll-out of its incorporation of ChatGPT into Bing. This leads many in the industry to predict that GPT-4 will also end up being embedded in Microsoft products (including Bing). 

We have already seen the extended and persistent waves caused by GPT-3/GPT-3.5 and ChatGPT in many areas of our lives, including content creation, education, and commercial productivity. When you add more dimensions to the type of input that can be both submitted and generated, it's hard to predict the scale of the next upheaval.

The ethical discussions around AI-generated content have multiplied as quickly as the technology’s ability to generate content, and this development is no exception.

GPT-4 is far from perfect, as OpenAI admits. It still has limitations surrounding social biases – the company warns it could reflect harmful stereotypes, and it still has what the company calls 'hallucinations', where the model creates made-up information that is “incorrect but sounds plausible.”

Even so, it's an exciting milestone for GPT in particular and AI in general, and the pace at which GPT has evolved since its launch last year is incredibly impressive.
