ChatGPT’s big, free update with GPT-4o is rolling out now – here’s how to get it

ChatGPT has just got one of its biggest updates so far, thanks to a series of new features – powered by a new GPT-4o model – that were announced at its 'Spring Update' event. And with comparisons to the virtual assistant in Spike Jonze's movie Her flying around, you're probably wondering when you can try it out – well, the answer is a little complicated.

The good news is that GPT-4o, a new multi-modal version of ChatGPT that can “reason across audio, vision, and text in real time” (as the company describes it), is rolling out right now to everyone, including free users. We've already got it in our ChatGPT Plus account, albeit only in limited form – for now, OpenAI has only released GPT-4o's text and image powers, with the cool voice and video-based features coming sometime later.

To find it, just log into your account in a web browser and check the drop-down menu in the top left-hand corner – if you have the update, it should default to GPT-4o with a label calling it OpenAI's “newest and most advanced model” (see below).


The GPT-4o model is rolling out now to the browser-based version of ChatGPT – if you’ve got it, it’ll appear in the model drop-down in the top-left corner (above). (Image credit: Future / OpenAI)

That's web access to the GPT-4o model sorted, but what about the ChatGPT apps for iOS, Android and now Mac? It seems that ChatGPT's newest model is rolling out a little more slowly on those. We don't have access to GPT-4o on iOS or Android yet, and ChatGPT's new Mac app is still rolling out (and wasn't available at the time of writing).

OpenAI said on May 13 that it was “rolling out the macOS app to Plus users starting today” and that it would be made “more broadly available in the coming weeks”. Strangely, Windows fans have been snubbed and left out of the ChatGPT desktop app party, but OpenAI says “we also plan to launch a Windows version later this year”.

When do we get the new voice assistant?

The most impressive parts of OpenAI's GPT-4o demo were undoubtedly the real-time conversational speech and the vision-based tricks that allow the model to 'see' and chat simultaneously.

Unfortunately, it looks like we'll have to wait a little longer for those to get a wider rollout. OpenAI says that developers can “now access GPT-4o in the API as a text and vision model”, which differs from the image-based capabilities of the version that was released to free and paid users starting yesterday.

As for the voice tricks, OpenAI says it'll “roll out a new version of Voice Mode with GPT-4o in alpha within ChatGPT Plus in the coming weeks”, and that “we plan to launch support for GPT-4o's new audio and video capabilities to a small group of trusted partners in the API in the coming weeks”.

That's a little vague and means some of GPT-4o's coolest tricks are only coming to testers and developers among ChatGPT's paid users for now. But that's also understandable – the tech powering OpenAI's GPT-4o demos likely required some serious compute power, so a wider rollout could take time.

That's a little frustrating for those of us who have been itching to chat to the impossibly cheerful and smart assistant powered by GPT-4o in OpenAI's various demos. If you haven't watched them yet, we'd suggest checking out the various GPT-4o demo videos on OpenAI's site – which include two AI assistants singing to each other and ChatGPT helping someone prep for an interview.

But on the plus side, GPT-4o is surprisingly going to be available for both free and paid users – and while the full rollout of all the tricks that OpenAI previewed could take some time, the promise is certainly there. Now it's time to see how Google responds at Google I/O 2024 – here's how you can tune into the live event.


Google teases new AI-powered Google Lens trick in feisty ChatGPT counter-punch

It's another big week in artificial intelligence in a year that's been full of them, and Google has teased a new AI feature coming to mobile devices just hours ahead of its Google I/O 2024 event – where we're expecting some major announcements.

A social media post from Google shows someone asking their phone about what's being shown through the camera. In this case, it's people setting up the Google I/O stage, which the phone correctly identifies.

The user and phone then go on to have a real-time chat about Google I/O 2024, complete with a transcription of the conversation on screen. We don't get any more information than that, but it's clearly teasing some of the upcoming reveals.

As far as we can tell, it looks like a mix of existing Google Lens and Google Gemini technologies, but with everything running instantly. Lens and Gemini can already analyze images, but studying real-time video feeds would be something new.

The AI people


It's all very reminiscent of the multimodal features – mixing audio, text, and images – that OpenAI showed off with its own ChatGPT bot yesterday. ChatGPT now has a new AI model called GPT-4 Omni (GPT-4o), which makes all of this natural interaction even easier.

We've also seen the same kind of technology demoed on the Rabbit R1 AI device. The idea is that these AIs become less like boxes that you type text into, and more like synthetic people who can see, recognize, and talk.

Based on this teaser, it looks likely that this is the direction the Google Gemini AI model and bot are heading. While we can't identify the smartphone in the video, it may be that these new features come to Pixel phones (like the new Google Pixel 8a) first.

All will be revealed later today, May 14: everything gets underway at 10am PT / 1pm ET / 6pm BST, which is May 15 at 3am AEST. We've put together a guide to how to watch Google I/O 2024 online, and we'll be reporting live from the event too.


Apple and Google have teamed up to help make it easier to spot suspicious item trackers

As part of iOS 17.5, Apple is finally rolling out its Detecting Unwanted Location Trackers specification, allowing mobile users to locate suspiciously placed AirTags and other similar devices. Simply put, this update has been a long time coming.

To give a quick breakdown of the situation at large, Bluetooth trackers were being used as a way to stalk people. Google announced that it and Apple were teaming up to tackle this problem. The former sought to upgrade its Find My Device network quickly but decided to postpone the launch, partly to wait until Apple finished developing its new standard.

With iOS 17.5, though, Apple states that your iPhone will notify you if an unknown Bluetooth tracker has been placed on you. If it sniffs something out, an “[Item] Found Moving With You” alert will appear on the smartphone screen.

Upon detection, your iPhone can trigger a noise on the tracker to help you locate it. An accompanying notification will include a guide showing you how to disable the gadget. You can find these instructions on Apple’s support website.

Supporting devices

The detection tool can locate other Find My accessories so long as the third-party trackers are built to the specification that Apple and Google have developed; devices that don't support it won't be detected. Third-party tag manufacturers like Chipolo and Motorola are reportedly committing future releases to the new standard, which means the iOS feature will also detect forthcoming models.

Android devices have been capable of detecting Bluetooth trackers for some time, and Google is currently rolling out its long-awaited Find My Device upgrade to smartphones. Thanks to the Detecting Unwanted Location Trackers specification, it’ll work in conjunction with Apple’s network.

Part of iOS 17.5

There is more to iOS 17.5 than just the security patch. 

First, it’s introducing a dynamic wallpaper celebrating the LGBTQ+ community just in time for Pride Month. Second, the company is adding a new game called Quartile to Apple News Plus. It’s sort of like Scrabble, where you have to make up words using small groups of letters. 

Moreover, Apple News Plus subscribers can download audio briefings, entire magazine issues, and more for offline enjoyment. When you're back online, the downloaded content list “will automatically refresh.”

Besides these, 9to5Mac has confirmed even more changes, like the Podcasts widget receiving support for dynamic colors. This alters the color of the box to match a podcast’s artwork. The publication also confirms the existence of Repair State, a “special hibernation mode” that lets people send in their iPhone for service without disabling the Find My connection.

To install iOS 17.5, head over to your iPhone’s Settings menu. Go to General then Software Update to receive the patch. And be sure to check out TechRadar's list of the best iPhone for 2024.


Meta Quest 3’s new Travel Mode lets you use the headset on a plane – and stop staring at the Vision Pro wearer in the next aisle

Augmented reality is taking to the skies as Meta is rolling out an experimental Travel Mode to its Quest 2 and Quest 3 headsets. Once enabled, users can enjoy content while on a plane – something that previously wasn't possible because of how certain components handle motion.

Sitting in a moving vehicle, such as a car or airplane, can confuse the inertial measurement units (or IMUs, for short) and, as a result, cause the headset to have a hard time tracking your position.

But thanks to Travel Mode, you won’t have this problem. Meta says it fine-tuned the Quest headset's “algorithms to account for the motion of an airplane,” which delivers a much more stable experience while flying. It'll also level the playing field against the Apple Vision Pro, which has offered a travel mode since launch.

You connect the Quest 2 or 3 to a plane's Wi-Fi and access content either from an external tablet or laptop or from the Quest library itself. Meta recommends double-checking whether an app needs an internet connection to work, as inflight Wi-Fi can be rather spotty. This means that certain video games, among other content, may play worse.

As far as in-flight infotainment systems go, most will not be accessible – Lufthansa's is the exception, thanks to a partnership between Meta and the German airline.

Quest 3's new Travel Mode (Image credit: Meta)

New content

Meta's partnership with Lufthansa will provide unique content that is “designed to run on Quest 3 in Travel Mode.” This includes interactive games like chess, meditation exercises, travel podcasts, and “virtual sightseeing previews”. That last one lets you see what your destination is like right before you get there. However, this content will only be offered to people seated in Lufthansa’s Allegris Business Class Suite on select flights.

Lufthansa Chess in Travel Mode (Image credit: Meta/Lufthansa)

If you want to try out Travel Mode, you can activate it by going to the Experimental section on your Quest headset’s Settings menu. Enable the feature, and you're ready to use it. Once activated, you can toggle Travel Mode on or off anytime in Quick Settings. Meta plans to offer Travel Mode for additional modes of transportation like trains at some point, but a specific release date has not been announced.

A company representative told us Travel Mode is available to all users globally, although it's unknown when it'll leave its experimental state and become a standard feature. We asked whether there are plans to expand the Lufthansa content to other airlines and travel classes like Economy, but the company has nothing to share at the moment – Meta wants to keep the pilot program with Lufthansa for the time being, though it is interested in expanding.

If you're looking for recommendations on what to play on your next flight, check out TechRadar's list of the best Quest 2 games for 2024.


Six major ChatGPT updates OpenAI unveiled at its Spring Update – and why we can’t stop talking about them

OpenAI just held its eagerly-anticipated spring update event, making a series of exciting announcements and demonstrating the eye- and ear-popping capabilities of its newest GPT AI models. There were changes to model availability for all users, and at the center of the hype and attention: GPT-4o. 

Coming just 24 hours before Google I/O, the launch puts Google's Gemini in a new perspective. If GPT-4o is as impressive as it looked, Google and its anticipated Gemini update better be mind-blowing. 

What's all the fuss about? Let's dig into all the details of what OpenAI announced. 

1. The announcement and demonstration of GPT-4o, and that it will be available to all users for free


OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

The biggest announcement of the stream was the unveiling of GPT-4o (the 'o' standing for 'omni'), which combines audio, visual, and text processing in real time. Eventually, this version of OpenAI's GPT technology will be made available to all users for free, with usage limits.

For now, though, it's being rolled out to ChatGPT Plus users, who will get up to five times the messaging limits of free users. Team and Enterprise users will also get higher limits and access to it sooner. 

GPT-4o will have GPT-4's intelligence, but it'll be faster and more responsive in daily use. Plus, you'll be able to provide it with or ask it to generate any combination of text, image, and audio.

The stream saw Mira Murati, Chief Technology Officer at OpenAI, and two researchers, Mark Chen and Barret Zoph, demonstrate GPT-4o's real-time responsiveness in conversation while using its voice functionality. 

The demo began with a conversation about Chen's mental state, with GPT-4o listening and responding to his breathing. It then told a bedtime story to Barret with increasing levels of dramatics in its voice upon request – it was even asked to talk like a robot.

It continued with a demonstration of Barret “showing” GPT-4o a mathematical problem and the model guiding Barret through solving it by providing hints and encouragement. Chen asked why this specific mathematical concept was useful, which it answered at length.


A look at the updated mobile app interface for ChatGPT. (Image credit: OpenAI)

They followed this up by showing GPT-4o some code, which it explained in plain English, and provided feedback on the plot that the code generated. The model talked about notable events, the axis labels, and the range of inputs. This was to show OpenAI's continued commitment to improving how GPT models interact with codebases, as well as their mathematical abilities.

The penultimate demonstration was an impressive display of GPT-4o's linguistic abilities, as it translated between two languages – English and Italian – out loud in real time.

Lastly, OpenAI provided a brief demo of GPT-4o's ability to identify emotions from a selfie sent by Barret, noting that he looked happy and cheerful.

If the AI model works as demonstrated, you'll be able to speak to it more naturally than many existing generative AI voice models and other digital assistants. You'll be able to interrupt it instead of having a turn-based conversation, and it'll continue to process and respond – similar to how we speak to each other naturally. Also, the lag between query and response, previously about two to three seconds, has been dramatically reduced. 

ChatGPT equipped with GPT-4o will roll out over the coming weeks, free to try. This comes a few weeks after OpenAI made ChatGPT available to try without signing up for an account.

2. Free users will have access to the GPT store, the memory function, the browse function, and advanced data analysis


OpenAI unveils the GPT Store at its Spring Update event. (Image credit: OpenAI)

GPTs are custom chatbots created by OpenAI and ChatGPT Plus users to help enable more specific conversations and tasks. Now, many more users can access them in the GPT Store.

Additionally, free users will be able to use ChatGPT's memory functionality, which makes it a more useful and helpful tool by giving it a sense of continuity. Also being added to the no-cost plan are ChatGPT's vision capabilities, which let you converse with the bot about uploaded items like images and documents, and the browse function, which lets ChatGPT pull in up-to-date answers from the web.

ChatGPT's abilities have improved in quality and speed in 50 languages, supporting OpenAI’s aim to bring its powers to as many people as possible. 

3. GPT-4o will be available in the API for developers


GPT-4o will be available in the API for developers (Image credit: OpenAI)

OpenAI's latest model will be available for developers to incorporate into their AI apps as a text and vision model. The support for GPT-4o's video and audio abilities will be launched soon and offered to a small group of trusted partners in the API.
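To give a sense of what that looks like in practice, here's a minimal sketch using OpenAI's official Python SDK, on the assumption that the model is exposed under the "gpt-4o" identifier and that an API key is set in your environment; the image URL is a placeholder, not a real asset.

```python
# Minimal sketch: calling GPT-4o via OpenAI's Python SDK as a text-and-vision model.
# Assumes the OPENAI_API_KEY environment variable is set and that your account
# has access to the "gpt-4o" model; the image URL below is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what's happening in this photo."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)
```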

4. The new ChatGPT desktop app 


A look at the new ChatGPT desktop app running on a Mac. (Image credit: OpenAI)

OpenAI is releasing a desktop app for macOS to advance its mission to make its products as easy and frictionless as possible, wherever you are and whichever model you're using, including the new GPT-4o. You’ll also be able to use keyboard shortcuts to get things done even more quickly.

According to OpenAI, the desktop app is available to ChatGPT Plus users now and will be available to more users in the coming weeks. It sports a similar design to the updated interface in the mobile app as well.

5. A refreshed ChatGPT user interface

ChatGPT is getting a more natural and intuitive user interface, refreshed to make interacting with the model easier and less jarring. OpenAI wants to get to the point where you barely focus on the AI itself, and where ChatGPT feels friendlier to use. This means a new home screen, message layout, and other changes.

6. OpenAI's not done yet


(Image credit: OpenAI)

The mission is bold, with OpenAI looking to demystify AI while creating some of the most complex technology that most people can access. Murati wrapped up by stating that we will soon be updated on what OpenAI is preparing to show us next, and by thanking Nvidia for providing the advanced GPUs that made the demonstration possible.

OpenAI is determined to shape our interaction with devices, closely studying how humans interact with each other and trying to apply its learnings to its products. The latency of processing all of the different nuances of interaction is part of what dictates how we behave with products like ChatGPT, and OpenAI has been working hard to reduce this. As Murati puts it, its capabilities will continue to evolve, and it’ll get even better at helping you with exactly what you’re doing or asking about at exactly the right moment. 


OpenAI’s GPT-4o ChatGPT assistant is more life-like than ever, complete with witty quips

So no, OpenAI didn’t roll out a search engine competitor to take on Google at its May 13, 2024 Spring Update event. Instead, OpenAI unveiled GPT-4 Omni (or GPT-4o for short) with human-like conversational capabilities, and it's seriously impressive. 

Beyond making this version of ChatGPT faster and free to more folks, GPT-4o expands how you can interact with it, including having natural conversations via the mobile or desktop app. Considering it's arriving on iPhone, Android, and desktop apps, it might pave the way for the assistant we've always wanted (or feared).

OpenAI's ChatGPT-4o is more emotional and human-like


OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

GPT-4o has taken a significant step towards understanding human communication in that you can converse in something approaching a natural manner. It comes complete with all the messiness of real-world tendencies like interrupting, understanding tone, and even realizing it's made a mistake.

During the first live demo, the presenter asked for feedback on his breathing technique. He breathed heavily into his phone, and ChatGPT responded with the witty quip, “You’re not a vacuum cleaner.” It advised on a slower technique, demonstrating its ability to understand and respond to human nuances.

So yes, ChatGPT has a sense of humor but also changes the tone of responses, complete with different inflections while conveying a “thought”. Like human conversations, you can cut the assistant off and correct it, making it react or stop speaking. You can even ask it to speak in a certain tone, style, or robotic voice. Furthermore, it can even provide translations.

In a live demonstration suggested by a user on X (formerly Twitter), two presenters on stage, one speaking English and one speaking Italian, had a conversation with ChatGPT-4o handling translation. It could quickly deliver the translation from Italian to English and then seamlessly translate the English response back to Italian.

It’s not just voice understanding with GPT-4o, though; it can also understand visuals like a written-out linear equation and then guide you through how to solve it, as well as look at a live selfie and provide a description. That could be what you're wearing or your emotions. 

In this demo, GPT said the presenter looked happy and cheerful. It’s not without quirks, though. At one point ChatGPT said it saw the image of the equation before it was even written out, referring back to a previous visual of just a wooden tabletop.

Throughout the demo, ChatGPT worked quickly and didn't really struggle to understand the problem or ask about it. GPT-4o is also more natural than typing in a query, as you can speak naturally to your phone and get a desired response – not one that tells you to Google it.  

A little like “Samantha” in “Her”

If you’re thinking about Her or another futuristic-dystopian film with an AI, you’re not the only one. Speaking with ChatGPT in such a natural way is essentially the Her moment for OpenAI. Considering it will be rolling out to the mobile app and as a desktop app for free, many people may soon have their own Her moments.

The impressive demos across speech and visuals may only be scratching the surface of what's possible. Overall performance and how well GPT-4o performs day-to-day in various environments remains to be seen, and once it's available, TechRadar will be putting it to the test. Still, after this peek, it's clear that GPT-4o is preparing to take on the best Google and Apple have to offer in their eagerly-anticipated AI reveals.

The outlook on GPT-4o

By announcing this the day before Google I/O kicks off, and just a few weeks after new AI gadgets like the Rabbit R1 hit the scene, OpenAI is giving us a taste of the truly useful AI experiences we want. If the rumored partnership with Apple comes to fruition, Siri could be supercharged, and Google will almost certainly show off its latest AI tricks at I/O on May 14, 2024. But will they be enough?

We wish OpenAI had shown off a few more live demos of the latest ChatGPT-4o in what turned out to be a jam-packed, less-than-30-minute keynote. Luckily, it will be rolling out to users in the coming weeks, and you won’t have to pay to try it out.


Ads in Windows 11 are becoming the new normal and look like they’re headed for your Settings home page

Microsoft looks like it’s forging ahead with its mission to put more ads in parts of the Windows 11 interface, with the latest move being an advert introduced to the Settings home page.

Windows Latest noticed that the ad, which is for Xbox Game Pass, is part of the latest preview release of the OS in the Dev channel (build 26120). For the uninitiated, Game Pass is Microsoft’s subscription service that grants you access to a host of games for a monthly or yearly fee.

Not every tester will see this advert, though, at least for now, as it’s only rolling out to those who have chosen the option to ‘Get the latest updates as soon as they're available’ (and that’s true of the other features delivered by this preview build). Also, the ad only appears for those signed into a Microsoft account.

Furthermore, Microsoft explains in a blog post introducing the build that the advert for the Xbox Game Pass will only appear to Windows 11 users who “actively play games” on their PC. The other changes provided by this fresh preview release are useful, too, including fixes for multiple known issues, some of which are related to performance hiccups with the Settings app. 

A close-up of a keyboard and a woman gaming at a PC in neon lighting (Image credit: Shutterstock/Standret)

Pushing too far is a definite risk for Microsoft

While I can see that this fresh advertising push won’t play well with Windows 11 users, Windows Latest did try the new update and reports that it’s a significant improvement on the previous version of 24H2. So that’s good news at least, and the tech site further observes that there’s a fix for an installation failure bug in here (stop code error ‘0x8007371B’, apparently).

Windows 11 24H2 is yet to roll out officially for all users, but it’s expected to be the pre-installed operating system on the new Snapdragon X Elite PCs that are scheduled to be shipped in June 2024. A rollout to all users on existing Windows 11 devices will happen several months later, perhaps in September or October. 

I’m not the biggest fan of Microsoft’s strategy regarding promoting its own services – and indeed outright ads as is the case here – or the firm’s efforts to push people to upgrade from Windows 10 to Windows 11. Unfortunately, come next year, Windows 10 users will be facing a choice of migrating to Windows 11, or losing out on security updates when support expires for the older OS (in October 2025). That is, if they can upgrade at all – Windows 11’s hardware requirements make this a difficult task for some older PCs.

For my own sake, and for the sake of all Windows 11 users, I hope Microsoft shows that it values us by not letting more and more adverts creep into different parts of the operating system.


OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won’t be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. What a new report, the latest in this saga, suggests is that OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is toward the front of the AI race, striving to be the first to deliver a software tool that comes as close as possible to communicating the way humans do – able to talk to us using sound as well as text, and to recognize images and objects.

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of these new capabilities. They claim that the incoming model has better logical reasoning than those currently available to the public, being able to convert text to speech. None of this is new for OpenAI as such, but what is new is all this functionality being unified in the rumored multimodal model. 

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum in New York, US, on January 13, 2023 (Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it’s still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google’s AI endeavor, Gemini.


OpenAI has big news to share on May 13 – but it’s not announcing a search engine

OpenAI has announced it's got news to share via a public livestream on Monday, May 13 – but, contrary to previous rumors, the developer of ChatGPT and Dall-E apparently isn't going to use the online event to launch a search engine.

In a social media post, OpenAI says that “some ChatGPT and GPT-4 updates” will be demoed at 10am PT / 1pm ET / 6pm BST on Monday May 13 (which is Tuesday, May 14 at 3am AEST for those of you in Australia). A livestream is going to be available.

OpenAI CEO Sam Altman followed up by saying the big reveal isn't going to be GPT-5 and isn't going to be a search engine, so make of that what you will. “We've been hard at work on some new stuff we think people will love,” Altman says. “Feels like magic to me.”

Rumors that OpenAI would be taking on Google directly with its own search engine, possibly developed in partnership with Microsoft and Bing, have been swirling for months. It sounds like it's not ready yet though – so we'll have to wait.

OpenAI, Google, and Apple


AI chatbots such as Microsoft Copilot already do a decent job of pulling up information from the web – indeed, at their core, these large language models (LLMs) are trained on website content in much the same way that Google indexes it.

It's possible that the future of web search is not a list of links but rather an answer from an AI, based on those links – which raises the question of how websites could carry on getting the revenue they need to supply LLMs with information in the first place. Google itself has also been experimenting with AI in its search results.

In other OpenAI news, according to Mark Gurman at Bloomberg, Apple has “closed in” on a deal to inject some ChatGPT smarts into iOS 18, due later this year. The companies are apparently now “finalizing terms” on the deal.

However, Gurman says that a deal between Apple and Google to use Google's Gemini AI engine is still on the table too. We know that Apple is planning to go big on AI this year, though it sounds as though it may need some help along the way.


Android users may soon have an easier, faster way to magnify on-screen elements

As we inch closer to the launch of Android 15, more of its potential features keep getting unearthed. Industry insider Mishaal Rahman found evidence of a new camera extension called Eyes Free to help stabilize videos shot by third-party apps. 

Before that, Rahman discovered another feature within the Android 15 Beta 1.2 update relating to a fourth screen magnification shortcut referred to as the “Two-finger double-tap screen” within the menu.

What it does is perfectly summed up by its name: quickly double-tapping the screen with two fingers lets you zoom in on a specific part of the display. That’s it. This may not seem like a big deal initially, but it is. 

As Rahman explains, the current three magnification shortcuts are pretty wonky. The first method requires you to hold down an on-screen button, which is convenient but causes your finger to obscure the view, and it only zooms into the center. The second method has you hold down both volume buttons, which frees up the screen but takes a while to activate.

The third method is arguably the best one—tapping the phone display three times lets you zoom into a specific area. However, doing so causes the Android device to slow down, so it's not instantaneous. Interestingly enough, the triple-tap method warns people of the performance drop. 

This warning is missing from the double-tap option, indicating the zoom is near instantaneous. Putting everything together, you can think of double-tap as the Goldilocks option: users can control where the software zooms without experiencing any slowdown.

Improved accessibility

At least, it should be that fast and a marked improvement over the triple tap. Rahman states that in his group’s time testing the feature, they noticed a delay when zooming in. He chalks this up to the unfinished state of the update, although he admits soon after that the slowdown could simply be part of the tool and may be an unavoidable aspect of the software.

It’ll probably be a while until a more stable version of the double-tap method becomes widely available. If you recall, Rahman and his team could only view the update by manually toggling the option themselves. As far as we know, it doesn’t even work at the moment.

Double-tap seems to be one of the new accessibility features coming to Android 15. There are several in the works, such as the ability to hide “unused notification channels” to help people manage alerts and forcing dark mode on apps that normally don’t support it.

While we have you, be sure to check out TechRadar's roundup of the best Android phones for 2024.
