Watch the AI-produced film Toys”R”Us made using OpenAI’s Sora – and get misty about the AI return of Geoffrey the Giraffe

Toys”R”Us premiered a film made with Sora, OpenAI's text-to-video artificial intelligence tool, at this year's Cannes Lions Festival. “The Origin of Toys”R”Us” was produced by the company's entertainment production division, Toys”R”Us Studios, and creative agency Native Foreign, which scored alpha access to Sora because OpenAI hasn't yet released it to the public. That makes Toys”R”Us one of the first brands to use the video AI tool in a major way.

“The Origin of Toys”R”Us” explores the early years of founder Charles Lazarus in a rather more whimsical way than retail giants are usually portrayed. Company mascot Geoffrey the Giraffe appears to Lazarus in a dream to inspire his business ambitions, in a way that suggests huge profits were just an incidental side effect for Toys”R”Us (at least until relatively recently).

“Charles Lazarus was a visionary ahead of his time and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” four-time Emmy Award-winning producer and President of Toys”R”Us Studios Kim Miller Olko said in a statement. “Partnering with Native Foreign to push the boundaries of OpenAI's Sora is truly exciting. Dreams are full of magic and endless possibilities, and so is Toys”R”Us.”

Sora Stories and the uncanny valley

Sora can generate up to one-minute-long videos based on text prompts with realistic people and settings. OpenAI pitches Sora as a way for production teams to bring their visions to life in a fraction of the usual time. The results can be breathtaking and bizarre.

For “The Origin of Toys”R”Us,” the filmmakers condensed hundreds of iterative shots into a few dozen, completing the film in weeks rather than months. That said, the producers did use some corrective visual effects and added original music composed by Aaron Marsh of indie rock band Copeland.

The film is brief and its AI origins are only really obvious when it is paused. Otherwise, you might think it was simply the victim of an overly enthusiastic editor with access to some powerful visual effects software and actors who don't know how to perform in front of a green screen.

Overall, it mostly manages to avoid the uncanny valley – except when the young founder smiles, which is a little too much like watching “The Polar Express.” Still, considering it was produced with the alpha version of Sora and with relatively limited time and resources, you can see why some people are very excited about the tool.

“Through Sora, we were able to tell this incredible story with remarkable speed and efficiency,” Native Foreign Chief Creative Officer and the film's director Nik Kleverov said in a statement. “Toys”R”Us is the perfect brand to embrace this AI-forward strategy, and we are thrilled to collaborate with their creative team to help lead the next wave of innovative storytelling.”

The debut of “The Origin of Toys”R”Us” at the Cannes Lions Festival underscores the growing importance of AI tools in advertising and branding. The film acts as a new proof of concept for Sora. And it may portend a lot more generative AI-assisted movies in the future. That said, there's a lot of skepticism and resistance in the entertainment world. Writers and actors went on strike for a long time in part because of generative AI, and the new contracts included rules for how companies can use AI models. The world premiere of a movie written by ChatGPT had to be outright canceled over complaints about that aspect, and if Toys”R”Us tried to make its film available in theaters, it would probably face the same backlash.

You might also like

TechRadar – All the latest technology news

Read More

Pro comedians tried using ChatGPT and Google Gemini to write their jokes – these were the hilariously unfunny results

AI chatbots like ChatGPT and Google Gemini can do a lot of things, but one thing they aren't renowned for is their sense of humor – and a new study confirms that they'd likely get torn to shreds on the stand-up comedy circuit.

The recent Google DeepMind study (as spotted by the MIT Technology Review) followed the experiences of 20 professional comedians who all used AI to create original comedy material. They could use their preferred assistant to generate jokes, co-write jokes through prompting, or rewrite some of their previous material. 

The aim of the 45-minute comedy writing exercise was for the comedians to produce material “that they would be comfortable presenting in a comedy context”. Unfortunately, most of them found that the likes of ChatGPT and Google Gemini (then called Google Bard) are a long way from becoming a comedy double act.

On a broader level, the study found that “most participants felt the LLMs did not succeed as a creativity support tool”, with the AI helpers producing bland jokes that were akin to “cruise ship comedy material from the 1950s, but a bit less racist”. Most comedians, who remained anonymous, commented on “the overall poor quality of generated outputs” and “the amount of human effort required to arrive at a satisfying result”, according to the study.

One of the participants said the initial output was “a vomit draft that I know that I’m gonna have to iterate on and improve.” Another comedian said, “Most of the jokes I was writing [are] the level of, I will go on stage and experiment with it, but they’re not at the level of, I’d be worried if anyone took one of these jokes”.

Of course, humor is a personal thing, so what kind of jokes did the AI chatbots come up with? One example, generated in response to the prompt “Can you write me ten jokes about pickpocketing”, was: “I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation!”.

Another comedian used the slightly more specific prompt “Please write jokes about the irony of a projector failing in a live comedy show about AI.” The response from one AI model? “Our projector must've misunderstood the concept of 'AI.' It thought it meant 'Absolutely Invisible' because, well, it's doing a fantastic job of disappearing tonight!”.

As you can see, AI-generated humor is very much still in beta…

Cue AI tumbleweed

A hand holding a phone running ChatGPT in front of a laptop (Image credit: Shutterstock)

Our experiences with AI chatbots like ChatGPT and Microsoft Copilot have largely aligned with the results of this study. While the best AI tools of 2024 are increasingly useful for brainstorming ideas, summarizing text, and generating images, humor is definitely a weak point.

For example, TechRadar's Managing Editor of Core Tech Matt Hanson is currently putting Copilot through its paces and asked the AI chatbot for its best one-liners. Its response to the prompt “Write me a joke about AI in the style of a stand-up comedian” resulted in the decidedly uninspiring “Why did the computer go to the doctor? Because it had a virus!”. 

Copilot even added that the joke “might not be ready for the comedy club circuit” but that “it's got potential!”, showing that the chatbot at least knows that it lacks a funny bone. Another prompt to write a joke in the style of comedian Stewart Lee produced a fittingly long monologue, but one that lacked Lee's trademark anti-jokes and superior sneer.

This study also shows that AI tools can't produce fully formed art on demand – and that asking them to do so kind of misses the point. The Google DeepMind report concluded that “AI’s inability to draw on personal experience is a fundamental limitation”, with many of the comedians in the study describing “the centrality of personal experience in good comedy”.

As one participant added, “I have an intuitive sense of what’s gonna work and what’s gonna not work based on so much lived experience and studying of comedy, but it is very individualized and I don’t know that AI is ever gonna be able to approach that”. Back to spreadsheets and summarizing text it is for now, then, AI chatbots.      


Still using classic Outlook? You can get Copilot features without migrating to the ‘new’ Outlook version

You may remember that Microsoft introduced a new Outlook app for Windows 11 (and Windows 10) at the end of last year, though plenty of users have stuck it out and held onto the ‘classic’ Outlook email app. If you aren’t willing to move over to the new app but don’t want to be left behind, don’t fret – Windows Copilot, Microsoft’s AI assistant, is finally coming to the older app.

Yes, this is one major feature that diehard classic Outlook users won't miss out on. In a blog post, Microsoft stated that the classic Outlook app will get a trio of Copilot features: Summarize, Coaching, and Draft.

The Summarize option will be available in the top-right corner when you’ve got an email thread open. As you might guess, it gets Copilot to summarize the main points of that thread.

Coaching will offer tips on how to write the perfect email and hit the right tone in the message, as well as considerations such as clarity of the writing. That’s about honing an email you’ve already written, whereas Draft will let Copilot take the reins and create the entire email on the basis of a few prompts. You can then edit the results naturally as necessary.

Microsoft Outlook screenshot (Image credit: Microsoft)

With these AI-powered features on tap, you can still cling to the original Outlook app without missing out on some very useful time-saving functionality.

In the blog post, Microsoft also noted that there are plans in place to add more Copilot features to the classic Outlook app for Windows in the near future. We assume these inbound features will debut on the new Outlook app first, then possibly the Mac version and even the mobile app, before reaching the classic Outlook app.

The reason for this is doubtless to persuade people to move over to the newer app by holding off on introducing new features to the old client. So, if you are planning to stay rooted in the classic Outlook, you may be in for a long wait as fresh features are drip-fed into the other app versions. 

Microsoft says that new Copilot features are expected to arrive in the classic Outlook app in the next 3 to 12 months, so at least you’ve got something to look forward to in the next year or so! 


Hardly any of us are using AI tools like ChatGPT, study says – here’s why

If you're feeling a bit overwhelmed or left behind by ChatGPT and other AI tools, fear not – a big new international study has found that most of us aren't using generative AI tools on a regular basis.

The study from the Reuters Institute and Oxford University (via the BBC), which surveyed over 12,000 people across six countries, seemingly reveals how little of that AI hype has percolated down to real-world use, for now.

Even among the people who have used generative AI tools like ChatGPT, Google Gemini or Microsoft Copilot, a large proportion said they'd only used them “once or twice”. Only a tiny minority (7% in the US, 2% in the UK) said they use the most well-known AI tool, ChatGPT, on a daily basis.

A significant proportion of respondents in all countries (including 47% in the US, and 42% in the UK) hadn't even heard of ChatGPT, a figure that was much higher for other AI apps. But after ChatGPT, the most recognized tools were Google Gemini, Microsoft Copilot, Snapchat My AI, Meta AI, Bing AI and YouChat.

Trailing further behind those in terms of recognition were generative AI imagery tools like Midjourney, plus Anthropic's Claude and xAI's Grok for X (formerly Twitter). But while regular use of generative AI tools is low, the survey does provide some interesting insights into what the early dabblers are using them for.

This table from the survey shows answers to the question: “You said you have used a generative AI chatbot or tool. Which, if any, of the following have you tried to use it for (even if it didn’t work)?” (Image credit: Reuters Institute and Oxford University)

Broadly speaking, the use cases were split into two categories: “creating media” and, more worryingly given the issue of AI hallucinations, “getting information”. In the former, the most popular answer was simply “playing around or experimenting” (11%), followed by “writing an email or letter” (9%) and “making an image” (9%).

The top two answers in the 'getting information' category were “answering factual questions” (11%) and “asking advice” (10%), both of which were hopefully followed by some corroboration from other sources. Most AI chatbots still come with prominent warnings about their propensity for making mistakes – for example, Google says Gemini “could provide inaccurate information or could even make offensive statements”.

AI tools are arguably better for brainstorming and summarizing, and these were the next most popular use cases in the survey – with “generating ideas” mentioned by 9% of respondents and “summarizing text” cited by 8% of people.

But while the average person is still seemingly at the dabbling stage with generative AI tools, most people in the survey are convinced that the tools will ultimately have a big impact on our daily lives. When asked if they thought that “generative AI will have a large impact on ordinary people in the next five years”, 60% of 18-24 year olds thought it would, with that figure only dropping to 41% among those who were 55 and older.

Why are AI tools still so niche?

ChatGPT was easily the most well-known AI tool in the survey, but regular users were still in the minority. (Image credit: Reuters Institute and Oxford University)

All surveys have their limitations, and this one focuses mostly on standalone generative AI tools rather than examples of the technology that's baked into existing products – which means that AI is likely more widely used than the study suggests.

Still, its broad sample size and geographic breadth does give us an interesting snapshot of how the average person views and uses the likes of ChatGPT. The answer is that it remains very niche among consumers, with the report's lead author Dr Richard Fletcher suggesting to the BBC that it shows there's a “mismatch” between the “hype” around AI and the “public interest” in it.

Why might that be the case? The reality is that most AI tools, including ChatGPT, haven't yet convinced us that they're frictionless or reliable enough to become a default part of our tech lives. This is why the focus of OpenAI's new GPT-4o model (branding being another issue) was a new lifelike voice assistant, which was designed to help lure us into using it more regularly.

Still, while even tech enthusiasts still have reservations about AI tools, this appears to be largely irrelevant to tech giants. We're now seeing generative AI being baked into consumer products on a daily basis, from Google Search's new AI summaries to Microsoft's Copilot coming to our messaging apps to iOS 18's rumored AI features for iPhones.

So while this survey's respondents were “generally optimistic about the use of generative AI in science and healthcare, but more wary about it being used in news and journalism, and worried about the effect it might have on job security”, according to Dr Fletcher, it seems that AI tech is going to become a daily part of our lives regardless – just not quite yet.


Microsoft could turbocharge Edge browser’s autofill game by using AI to help fill out more complex forms

Microsoft Edge looks like it’s getting a new feature that could help you fill out forms more easily thanks to a boost from GPT-4 (the most up-to-date large language model from the creators of ChatGPT, OpenAI).

Browsers like Edge already have auto-fill assistance features to help fill out fields asking for personal information that’s requested frequently, and this ability could see even more improvement thanks to GPT-4’s technology.

The digital assistant currently on offer from Microsoft, Copilot, is also powered by GPT-4, and has already seen considerable integration into Edge. In theory, the new GPT-4-driven form-filling feature will help Edge users tackle more complex or unusual questions, rather than the typical basic fields (name, address, email, and so on) that the existing auto-fill functionality handles just fine.

However, right now this supercharged auto-fill is a feature hidden within the Edge codebase (it’s called “msEdgeAutofillUseGPTForAISuggestions”), so it’s not yet active even in testing. Windows Latest did attempt to activate the new feature, but with no luck – so it’s yet to be seen how the feature works in action. 

A close-up of a woman sitting at a table and typing on a laptop (Image credit: Shutterstock/Gorodenkoff)

Bolstering the powers of Edge and Copilot

Of course, as noted, Edge’s current auto-fill feature is sufficient for most form-filling needs, but it won’t help with form fields that require longer or more complex answers. As Windows Latest observes, what you can do in the meantime is paste those kinds of questions directly into Edge’s Copilot sidebar, and the AI can help you craft an answer that way. You could also experiment with different conversation modes to get different answers.

This pepped-up auto-fill could be a useful addition for Edge, and Microsoft is clearly trying to develop both its browser, and the Copilot AI itself, to be more helpful and generally smarter.

That said, it’s hard to say how much Microsoft is prioritizing user satisfaction, as it’s equally busy implementing measures that are likely to annoy some users – think of its recent aggressive advertising strategy, or the curbing of access to settings if your copy of Windows is unactivated, to pick a couple of examples. And that’s not forgetting the fast-approaching end-of-support date for Windows 10 (its most popular operating system).

Copilot was presented as an all-purpose assistant, but the AI still leaves a lot to be desired. However, it’s gradually seeing improvements and integration into existing Microsoft products, and we’ll have to see if the big bet on Copilot pans out as envisioned. 


Forget the Apple Car – Porsche has been using the Apple Vision Pro with its record-breaking new Taycan

Porsche has just unveiled its most dynamic Taycan so far – the Porsche Taycan Turbo GT. This takes the Taycan Turbo S, gives it more power, reduces the weight and primes it for the track. There are two versions of the new car, the Taycan Turbo GT and the Taycan Turbo GT with Weissach package, which loses the backseats and gains a rear wing to make it a record-breaking track car. 

The unveiling of any new Porsche model wouldn't be complete without some mention of its performance credentials and a portion of the launch presentation included coverage of a record-breaking lap from the Laguna Seca raceway in California. 

It seems that Porsche CEO Oliver Blume couldn't make it to The Golden State himself, so instead, he watched it using Apple Vision Pro. Cut to Tim Cook congratulating Porsche on their record-breaking new car, one of many examples of Porsche and Apple's strong ongoing partnership.

Blume wasn't just watching a video feed on Apple Vision Pro, however. He was in full-on spatial computing mode: a virtual track map, multiple windows of telematics, a video feed from the car on the track – even the driver's heart rate was displayed. A celebration of cutting-edge tech at a corporate level? You bet.

The Apple Vision Pro being used with a Porsche virtual cockpit (Image credit: Porsche / Chris Hall)

“What an amazing experience it was to join the team virtually along with Apple Vision Pro. Thanks to our custom race engineer cockpit app, it felt like I was right there in Laguna Seca with Lars [Kern, Porsche development driver],” said Blume.

“It has been great to bring the best of German engineering and Apple's inspiring product innovations together.”

Cue Tim Cook's surprise cameo. “Congratulations to you and the Porsche team on the new record you set with this incredible new vehicle. It's these kinds of extraordinary milestones that show the world what can happen when a team of incredibly dedicated people come together to break new ground on a big idea,” said Cook.

“Porsche has always been known for excellence,” continued Cook, “and we're proud to see a number of our products play a role in what you do. And it's so great to see Apple Vision Pro helping reimagine track experiences.”

The mutual backslapping continued for a little longer, before Blume dropped the next nugget: “We appreciate the great partnership we have established over the years, starting with the My Porsche app on Apple CarPlay and now we're taking it one step further with Porsche's Apple Vision Pro race app to bring the best user experience to our employees and customers.”

The appearance of the Apple Vision Pro went largely unremarked, however. There was no mention of any Apple Vision Pro app in the press materials, and when we asked at the launch site in Leipzig, no more information was forthcoming. Porsche, it seems, isn't saying any more about it.

Chalk it up as the ultimate tease, perhaps: there doesn't seem to be a name for the app that was used – Oliver Blume himself referred to it in two different ways – but it does demonstrate that Porsche and Apple are continuing to work on technologies together beyond Apple CarPlay and the customization of Porsche's digital displays.


Apple wants to make sure your posture’s right when using the Vision Pro

The first preorders for the Apple Vision Pro will very soon be making their way into the hands of users, but it seems that Apple is making plans to have headsets like the Vision Pro respond to the posture of the person wearing them.

A newly published patent (via Patently Apple) refers to “tiered posture awareness” – a method through which headsets and smart specs could figure out the posture of users, and then make any necessary tweaks to the way content was presented.

So, for example, a virtual 3D environment might be slightly adjusted based on the way the user is standing or sitting, and the surround sound effects applied to audio feeds could also be changed to be as immersive as possible.

The patent also mentions making calculations based on how much strain the headset might be putting on the person wearing it – this information could be used to warn users if their posture is putting too much strain on their body parts.

Future updates

Preorders for the Apple Vision Pro are open now (Image credit: Apple)

It's quite a complex patent, and the usual caveats about patents apply here too: there's no guarantee that these ideas will ever actually be implemented in a product, but they offer an interesting insight into what Apple's engineers are thinking about.

In the hands-on time we've had with the Apple Vision Pro, we haven't noticed any kind of head or neck strain, though these sessions have been rather brief. We'll be running a full test of the spatial computing device just as soon as we're able to.

Something like what's being described in the patent could potentially be delivered to the Vision Pro via a future software update. Alternatively, it might be held back for future versions of the headset, which we've already started hearing rumors about.

Apple will also be hoping that more app developers put out dedicated versions of their apps for the Vision Pro in the future: the likes of Netflix are currently holding back because it's going to take a while for the Vision Pro to make it to the mainstream.


The Apple Vision Pro arrives in stores next week, but you can ‘see’ the AR headset at home now using… AR

Not many people have been in the same room as an Apple Vision Pro mixed reality headset, let alone touched and worn the thing. But if you're itching to get close before the February 2 launch day, Apple has the next best thing on its Apple Store App.

People often forget that Apple does some of the best AR in the business, including some wicked occlusion capabilities that let virtual objects block the view of real ones that sit or move behind them – and Apple's AR rendering of its Apple Vision Pro is right up there with its best work.

If you're not already familiar with the mixed reality headset that everyone is talking about, Apple's Vision Pro is the tech giant's first attempt at an AR/VR-capable device – Apple calls the entire experience spatial computing. I've worn it four times now, and I've experienced movies, interactive AR experiences, incredible panoramic photography, and almost wept through realistic spatial video – and I've done most of it with little more than my gaze and subtle gestures.

It's a wildly expensive product, starting at $3,499, but that hasn't dampened interest (it reportedly sold out on pre-order and is a hot item on eBay), so it makes sense for Apple to give us this AR taste.

Apple Vision Pro rendered in AR (Image credit: Apple)

To find it, you'll need to open the Apple Store app on your iPhone or iPad. In it, look for the Vision Pro, select it, and then scroll until you see 'View in your space'. Tap this, and then point your phone's camera at a flat surface like your desk or kitchen table. Keep the phone still for a moment, and after Apple finishes analyzing the 3D contours of the space, a translucent Vision Pro headset will appear. Tap it to drop it onto the table. After that, you can use one finger to move the AR Vision Pro around, and two fingers to rotate it. You can also resize it with two fingers, but then it won't be represented at full scale (it's easy to snap it back to 100%).

You can also move your phone around the rendering to see the headset from all sides, and even get close and peer into the dual 4K micro-OLED displays, which appear to be showing some sort of landscape. It's an opportunity to get an up-close look at the details: materials like the recycled-yarn woven band, the aluminum spatial photography button, and the Digital Crown.

There's even a MagSafe-style power adapter attached to one side with a woven USB-C cable running off of the Vision Pro, but instead of running to a nearby battery, the cable disappears at the edge of the woven band. There's also no option to depict the Vision Pro with the Dual Loop Band that will also ship with the headset; I think that's a shame, since I bet that's how many people will end up wearing the Vision Pro.

Ultimately, this is a chance to see what the Vision Pro will look like in your real world; however, one thing this AR experience can't do is replicate the feeling of all that money leaving your wallet.


Google wants you to send AI-generated poems using its strange digital postcards

Google has redesigned its little-known Arts & Culture app, introducing new features plus an improved layout for easier exploration.

We wouldn’t blame you if you weren’t aware that Arts & Culture even existed in the first place. It’s a pretty niche mobile app aimed at people who want to learn more about the art world and its history. It looks like Google is attempting to attract a bigger audience by making the Android app more “intuitive to explore… [while also] creating new ways to discover and engage with culture.”

Leading the charge, so to speak, is the AI-powered Poem Postcards tool. Utilizing the company’s PaLM 2 model, the tool asks you to select a famous art piece and then choose from a variety of poetic styles (sonnets, limericks, and ballads, to name a few) in order to create an AI-generated poem.

Poem Postcards on Google Arts & Culture (Image credit: Google)

After a few seconds, you can share your generated work with friends or have the AI write up something new. We should mention that you can also access Poem Postcards on your desktop via the Arts & Culture website, although it appears to be “experimental”, so it may not work as well as its mobile counterpart.

Endless art feed

The other major feature is the new Inspire section, which utilizes an endless scrolling feed akin to TikTok. It brings up a series of art pieces with the occasional cultural news story and exhibition advertisement stuffed in between. The app doesn’t just focus on paintings or sculptures, either, as the feed will throw in the occasional post about movies, too.

In the bottom right-hand corner of Inspire entries is a “cultural flywheel”. Tapping it opens a menu where you can discover tangentially related content. Google states it is “always investigating new ways to connect cultural content” meaning the flywheel will see its own set of updates over time.

As for the layout, the company has added buttons to the Explore tab for specific topics. If you want to look for art pertaining to sports, science, or even your favorite color, it’s all at your fingertips. There’s also a Play tab on the bottom bar where you can enjoy games like the adorable Return of the Cat Mummy.

The Arts & Culture app's new layout (Image credit: Google)

The redesigned Arts & Culture app is currently available on Android through the Google Play Store, with an iOS version “soon to follow”. The company says Poem Postcards is only available “in select countries”; we've reached out to the tech giant for clarification and will update this story when we hear back.

Be sure to check out TechRadar's list of the best drawing apps for 2023 if you ever decide to scratch that artistic itch.  


Stopped using ChatGPT? These six handy new features might tempt you back

ChatGPT's AI smarts might be improving rapidly, but the chatbot's basic user interface can still baffle beginners. Well, that's about to improve with six ChatGPT tweaks that should give its usability a welcome boost.

OpenAI says the tweaks to ChatGPT's user experience will be rolling out “over the next week”, with four of the improvements available to all users and two of them targeted at ChatGPT Plus subscribers (the subscription costs $20 / £16 / AU$28 per month).

Starting with those improvements for all users, OpenAI says you'll now get “prompt examples” at the beginning of a new chat because a “blank page can be intimidating”. ChatGPT already shows a few example prompts on its homepage (below), but we should soon see these appear in new chats, too.

Secondly, ChatGPT will also give you “suggested replies”. Currently, when the chatbot has answered your question, you're simply left with the 'Send a message' box. If you're a seasoned ChatGPT user, you'll have gradually learned how to improve your ChatGPT prompts and responses, but this should speed up the process for beginners.  

A third small improvement you'll see soon is that you'll stay logged into ChatGPT for much longer. OpenAI says “you'll no longer be logged out every two weeks”, and when you do log in you'll be “greeted with a much more welcoming page”. It isn't clear how long log-ins will now last, but we're interested to see how big an improvement that landing page is.

A bigger fourth change, though, is the introduction of keyboard shortcuts. While there are only six of these (see below), some of them could certainly be handy timesavers – for example, there are shortcuts to 'copy last response' (⌘/Ctrl + Shift + C) and 'toggle sidebar' (⌘/Ctrl + Shift + S). There's also an extra one to bring up the full list (⌘/Ctrl + /).

A laptop screen on a blue background showing the ChatGPT keyboard shortcuts (Image credit: Future)

What about those two improvements for ChatGPT Plus subscribers? The biggest one is the ability to upload multiple files for ChatGPT to analyze. You'll soon be able to ask the chatbot to analyze data and serve up insights across multiple files. This will be available in the Code Interpreter Beta, a new tool that lets you convert files, make charts, perform data analysis, trim videos and more.

Lastly, ChatGPT Plus subscribers will finally find that the chatbot defaults to its GPT-4 model. Currently, there's a toggle at the top of the ChatGPT screen that lets you switch from the older GPT-3.5 model to GPT-4 (which is only available to Plus subscribers), but this will now stay set to the latter if you're a subscriber.

Collectively, these six changes certainly aren't as dramatic as the move to GPT-4 in March, which delivered a massive upgrade – for example, OpenAI stated that GPT-4 is “40% more likely to provide factual content” than GPT-3.5. But they should make ChatGPT more approachable for beginners, who may have left the chatbot behind after the initial hype.


Analysis: ChatGPT hits an inevitable plateau

The move to GPT-4 (above), which is only available to Plus subscribers, was the last major change to ChatGPT. (Image credit: Future)

ChatGPT's explosive early hype saw it become the fastest-growing consumer app of all time – according to a UBS study, it hit 100 million monthly active users in January, just two months after it launched. 

But that hype is now on the wane, with Similarweb reporting that ChatGPT traffic was down 10% in June – so it needs some new tools and features to keep people returning.

These six improvements won't see the chatbot hit the headlines again, but they will bring much-needed improvements to ChatGPT's usability and accessibility. Other recent boosts like the arrival of ChatGPT on Android will also help get casual users tinkering again, as ChatGPT alternatives like Google Bard continue to improve.

While the early AI chatbot hype has certainly fizzled out, thanks to reports that ChatGPT will always be prone to making stuff up, and some frustration that it's increasingly producing 'dumber' answers, these AI helpers can certainly still be useful tools when used in the right way.

If you're looking for some inspiration to get you re-engaged, check out our guides to some great real-world ChatGPT examples, some extra suggestions of what ChatGPT can do, and our pick of the best ChatGPT extensions for Chrome.
