Six major ChatGPT updates OpenAI unveiled at its Spring Update – and why we can’t stop talking about them

OpenAI just held its eagerly anticipated Spring Update event, making a series of exciting announcements and demonstrating the eye- and ear-popping capabilities of its newest GPT AI models. There were changes to model availability for all users, and at the center of the hype and attention: GPT-4o.

Coming just 24 hours before Google I/O, the launch puts Google's Gemini in a new light. If GPT-4o is as impressive as it looked, Google's anticipated Gemini update had better be mind-blowing.

What's all the fuss about? Let's dig into all the details of what OpenAI announced. 

1. GPT-4o announced and demonstrated – and it'll be available to all users for free

OpenAI demoing GPT-4o on an iPhone during the Spring Update event.

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

The biggest announcement of the stream was the unveiling of GPT-4o (the 'o' standing for 'omni'), which combines audio, visual, and text processing in real time. Eventually, this version of OpenAI's GPT technology will be made available to all users for free, with usage limits.

For now, though, it's being rolled out to ChatGPT Plus users, who will get up to five times the messaging limits of free users. Team and Enterprise users will also get higher limits and access to it sooner. 

GPT-4o will have GPT-4's intelligence, but it'll be faster and more responsive in daily use. Plus, you'll be able to provide it with or ask it to generate any combination of text, image, and audio.

The stream saw Mira Murati, Chief Technology Officer at OpenAI, and two researchers, Mark Chen and Barret Zoph, demonstrate GPT-4o's real-time responsiveness in conversation while using its voice functionality. 

The demo began with a conversation about Chen's mental state, with GPT-4o listening and responding to his breathing. It then told Barret a bedtime story, adding more and more drama to its voice on request – it was even asked to talk like a robot.

It continued with a demonstration of Barret “showing” GPT-4o a mathematical problem, with the model guiding him through solving it by providing hints and encouragement. Chen asked why this specific mathematical concept was useful, and the model answered at length.

A look at the updated mobile app interface for ChatGPT.

A look at the updated mobile app interface for ChatGPT. (Image credit: OpenAI)

They followed this up by showing GPT-4o some code, which it explained in plain English, before providing feedback on the plot that the code generated. The model discussed notable events, the axis labels, and the range of inputs. This was meant to show OpenAI's continued commitment to improving how its GPT models interact with codebases, as well as their mathematical abilities.

The penultimate demonstration was an impressive display of GPT-4o's linguistic abilities, as it provided live spoken translation between English and Italian.

Lastly, OpenAI provided a brief demo of GPT-4o's ability to identify emotions from a selfie sent by Barret, noting that he looked happy and cheerful.

If the AI model works as demonstrated, you'll be able to speak to it more naturally than many existing generative AI voice models and other digital assistants. You'll be able to interrupt it instead of having a turn-based conversation, and it'll continue to process and respond – similar to how we speak to each other naturally. Also, the lag between query and response, previously about two to three seconds, has been dramatically reduced. 

ChatGPT equipped with GPT-4o will roll out over the coming weeks, free to try. This comes a few weeks after OpenAI made ChatGPT available to try without signing up for an account.

2. Free users will have access to the GPT store, the memory function, the browse function, and advanced data analysis

OpenAI unveils the GPT Store

OpenAI unveils the GPT Store at its Spring Update event. (Image credit: OpenAI)

GPTs are custom chatbots created by OpenAI and ChatGPT Plus users to help enable more specific conversations and tasks. Now, many more users can access them in the GPT Store.

Additionally, free users will be able to use ChatGPT's memory functionality, which gives the bot a sense of continuity across conversations, making it a more useful and helpful tool. Also being added to the no-cost plan are ChatGPT's vision capabilities, which let you converse with the bot about uploaded items like images and documents, and the browse function, which lets ChatGPT pull up-to-date information from the web.

ChatGPT's abilities have improved in quality and speed in 50 languages, supporting OpenAI’s aim to bring its powers to as many people as possible. 

3. GPT-4o will be available in the API for developers

OpenAI GPT-4o API

GPT-4o is coming to the API for developers. (Image credit: OpenAI)

OpenAI's latest model will be available for developers to incorporate into their AI apps as a text and vision model. Support for GPT-4o's video and audio abilities will launch soon, initially for a small group of trusted partners in the API.
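For developers, calling GPT-4o through the chat completions API looks much like calling earlier GPT-4 models, just with the new model name and mixed text-and-image content. Here's a minimal sketch, assuming the official `openai` Python package; the helper function and example URLs are illustrative only:

```python
# Sketch of a combined text-and-vision request to GPT-4o. The "gpt-4o" model
# name and the chat-completions message shape follow OpenAI's published API;
# the helper below and the URLs are our own illustration.

def build_vision_messages(prompt: str, image_url: str) -> list:
    """Build a chat-completions payload mixing a text prompt with an image URL."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# The actual request (requires the `openai` package and an OPENAI_API_KEY):
#
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.chat.completions.create(
#       model="gpt-4o",
#       messages=build_vision_messages(
#           "What does this chart show?",
#           "https://example.com/chart.png",
#       ),
#   )
#   print(response.choices[0].message.content)

msgs = build_vision_messages("Describe this image", "https://example.com/cat.png")
print(msgs[0]["role"])  # user
```

Because GPT-4o is a single multimodal model, the text and image parts travel in one message rather than being routed to separate models.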

4. The new ChatGPT desktop app 

A look at the new ChatGPT desktop app running on a Mac.

A look at the new ChatGPT desktop app running on a Mac. (Image credit: OpenAI)

OpenAI is releasing a desktop app for macOS as part of its mission to make its products as easy and frictionless to use as possible, wherever you are and whichever model you're using, including the new GPT-4o. You'll also be able to assign keyboard shortcuts to carry out tasks even more quickly.

According to OpenAI, the desktop app is available to ChatGPT Plus users now and will be available to more users in the coming weeks. It sports a similar design to the updated interface in the mobile app as well.

5. A refreshed ChatGPT user interface

ChatGPT is getting a more natural and intuitive user interface, refreshed to make interacting with the model easier and less jarring. OpenAI wants to get to the point where people barely focus on the AI, and for ChatGPT to feel friendlier. This means a new home screen, message layout, and other changes.

6. OpenAI's not done yet

OpenAI

(Image credit: OpenAI)

The mission is bold: OpenAI wants to demystify technology even as it builds some of the most complex technology most people can access. Murati wrapped up by teasing that we'll soon hear what OpenAI is preparing to show us next, and by thanking Nvidia for providing the advanced GPUs that made the demonstration possible.

OpenAI is determined to shape our interaction with devices, closely studying how humans interact with each other and trying to apply its learnings to its products. The latency of processing all of the different nuances of interaction is part of what dictates how we behave with products like ChatGPT, and OpenAI has been working hard to reduce this. As Murati puts it, its capabilities will continue to evolve, and it’ll get even better at helping you with exactly what you’re doing or asking about at exactly the right moment. 

TechRadar – All the latest technology news

New Rabbit R1 demo promises a world without apps – and a lot more talking to your tech

We’ve already talked about the Rabbit R1 before here on TechRadar: an ambitious little pocket-friendly device that contains an AI-powered personal assistant, capable of doing everything from curating a music playlist to booking you a last-minute flight to Rome. Now, the pint-sized companion tool has been shown demonstrating its note-taking capabilities.

The latest demo comes from Jesse Lyu on X, founder and CEO of Rabbit Inc., and shows how the R1 can be used for note-taking and transcription via some simple voice controls. The video (see the tweet below) shows that note-taking can be started with a short voice command, and ended with a single button press.


It’s a relatively early tech demo – Lyu notes that it “still need bit of touch” [sic] – but it’s a solid demonstration of Rabbit Inc.’s objectives when it comes to user simplicity. The R1 has very little in terms of a physical interface, and doubles down by having as basic a software interface as possible: there’s no Android-style app grid in sight here, just an AI capable of connecting to web apps to carry out tasks.

Once you’ve recorded your notes, you can either view a full transcription, see an AI-generated summary, or replay the audio recording (the latter of which requires you to access a web portal). The Rabbit R1 is primarily driven by cloud computing, meaning that you’ll need a constant internet connection to get the full experience.

Opinion: A nifty gadget that might not hold up to criticism

As someone who personally spent a lot of time interviewing people and frantically scribbling down notes in my early journo days, I can definitely see the value of a tool like the Rabbit R1. I’m also a sucker for purpose-built hardware, so despite my frequent reservations about AI, I truly like the concept of the R1 as a ‘one-stop shop’ for your AI chatbot needs.

My main issue is that this latest tech demo doesn’t actually do anything I can’t do with my phone. I’ve got a Google Pixel 8, and nowadays I use the Otter.ai app for interview transcriptions and voice notes. It’s not a perfect tool, but it does the job as well as the R1 can right now.

Rabbit r1

The Rabbit R1’s simplicity is part of its appeal – though it does still have a touchscreen. (Image credit: Rabbit)

As much as I love the Rabbit R1’s charming analog design, it’s still going to cost $199 (£159 / around AU$300) – and I just don’t see the point in spending that money when the phone I’ve already paid for can do all the same tasks. An AI-powered pocket companion sounds like an excellent idea on paper, but given the current proliferation of AI tools like Windows Copilot and Google Gemini across our existing tech products, it feels a tad redundant.

The big players such as Google and Microsoft aren’t about to stop cramming AI features into our everyday hardware anytime soon, so dedicated AI gadgets like Rabbit Inc.’s dinky pocket helper will need to work hard to prove themselves. The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future. And yet, as our Editor-in-Chief Lance Ulanoff puts it, I might still end up loving the R1…


Apple tells developers NOT to use “virtual reality” when talking about Vision Pro

The Vision Pro will go on sale next month, and we’ve just learned that Apple has requested that developers building apps for visionOS (the operating system that runs on the headset) don’t refer to their apps as “AR” or “VR” apps.

We first heard about Apple’s newest innovation in June 2023, when it was marketed as a spatial computer that combines digital content with the user’s physical surroundings. It’s also equipped with some serious Apple graphics hardware and visionOS, which Apple calls the “world’s first spatial computing system”.

At first glance, the Vision Pro certainly appears similar to existing virtual reality (VR) and augmented reality (AR) headsets, so it’s interesting that Apple is at pains to ensure it isn’t mistaken for one. The de facto ban on AR and VR references (as well as extended reality (XR) and mixed reality (MR)) was spotted in the guidelines of the new Xcode (Apple’s suite of developer tools) update, which came after the announcement that Vision Pro devices will be in stores in early February.

Vision Pro

(Image credit: Apple)

Apple lays down the law

This recommendation is pretty explicitly laid out on a new Apple Developer page which goes through what a developer needs to do to prepare their app for submission to the App Store. 

Apple also insists that developers use the “visionOS” branding with a lowercase “v” (similar to how it brands its flagship desktop operating system, macOS), and the device’s full name, “Apple Vision Pro,” when referring to it. These requests aren’t as unexpected as Apple’s more notable instruction to avoid VR and AR, however. According to Apple, visionOS apps will not be considered VR, XR, or MR apps but “spatial computing apps”.

It’s an interesting move for a number of reasons; coining a new term can be confusing to people, meaning that users will have to build familiarity and actually use the term for it to stick, but it also means that Apple can differentiate itself from the pack of AR/VR devices out there. 

It’s also a pivot from messaging that until now has relied on existing terms like augmented reality and virtual reality. Most of Apple’s current marketing refers to the Vision Pro as a “spatial computing” platform, but at the Worldwide Developers Conference (WWDC) in 2023, Apple’s annual event for platform developers, CEO Tim Cook introduced the Vision Pro as an “entirely new AR platform.” Materially, this is mainly a marketing and branding move as Apple becomes more confident in its customers’ understanding of what the Vision Pro actually is. 9to5Mac reports that Apple engineers referred to visionOS as xrOS in the run-up to the device’s official announcement.

Apple Vision Pro VR headset

(Image credit: Future / Lance Ulanoff)

Apple charts its own course

The pointed effort to distinguish itself from its competitors is an understandable move from Apple considering that some other tech giants have already attempted to dominate this space. 

Meta, the parent company of Facebook and Instagram, is one of the most notable examples – you might have a not-so-distant memory of a certain “metaverse”. The metaverse received a reception most would call lukewarm, even at its peak, and Apple is making a bold attempt to carve out its own association in people’s minds, with Apple’s VP of global marketing Greg Joswiak dismissing “metaverse” as a word he’ll “never use”, according to 9to5Mac.

I enjoy watching Apple make bolder moves into existing markets because it’s often when we’ve seen new industry standards emerge, which is always exciting – no matter whether you want to call it AR, VR, or spatial computing. 


WhatsApp will let you know when people are talking about you

WhatsApp is great at letting you know when you’ve received a new message, but it also gives you the option of quietening notifications for those times when you don’t want to be disturbed. Now there’s a new preview version of the app that introduces an important upgrade to notifications, making them even more useful.

While a recent update added profile photos so you can easily see who has messaged you, this new update takes things a step further. The change concerns notifications, and it means you can more easily tell who has replied to you, or mentioned you, in a group message.

If another member of a group chat replies to a message you’ve posted, or @-mentions you, the notification you see now includes the profile picture of the person in question. Again, this is a relatively minor change, but it’s a tweak that can make a world of difference to usability.

While you need to update to the latest preview build of WhatsApp in TestFlight to access all of the latest features, making the upgrade isn’t necessarily a guarantee that you’ll see these new notification images.

Keep informed

This is just the latest in a series of notification updates, coming hot on the heels of the addition of profile photos to notifications in iOS. But this most recent update is about more than just notifications.

As WABetaInfo notes, with this beta version WhatsApp has also changed the way app version numbers are formatted. While the previous version of WhatsApp for iOS was 2.22.1.1, the latest is now listed as 22.1.71 – quite a jump. However, if you check within the app’s settings, it’s shown as 2.22.1.71, and it’s not clear when – or, indeed, if – the two will be brought in line with each other.

Via WABetaInfo
