Apple Intelligence tipped to feature Google Gemini integration at launch, as well as ChatGPT

At WWDC 2024, Apple confirmed that its upcoming Apple Intelligence toolset will feature integration with ChatGPT to help field user queries that Siri can’t handle on its own, and now we’re hearing that a second third-party chatbot could be added to the mix.

According to Bloomberg’s resident Apple expert Mark Gurman, Google Gemini could join ChatGPT as one of two Siri-compatible chatbot options in Apple Intelligence. This integration could see iPhone, iPad and MacBook users given the option to use the cloud-based powers of ChatGPT or Google Gemini when Siri is unable to answer a query on-device.

Apple will reportedly announce its collaboration with Google “this fall” (aka September), which aligns with the assumed launch of Apple Intelligence, iOS 18 and the iPhone 16 line.

Rumors surrounding a partnership between Apple and Google have been swirling for some time now, but many tech commentators – including TechRadar’s own Lance Ulanoff – doubted their authenticity, owing to Apple’s historic reluctance to allow feature parity between the best iPhones and the best Android phones.

A hand holding an iPhone showing the new Siri

Siri could soon feature integration with ChatGPT and Google Gemini  (Image credit: Apple)

Ulanoff wrote back in March: “Apple's goal with the iPhone 16, iOS 18, and future iPhones is to differentiate its products from Android phones. It wants people to switch and they'll only do that if they see a tangible benefit. If the generative tools on the iPhone are the same as you can get on the Google Pixel 8 Pro (and 9) or Samsung Galaxy S24 Ultra (and S25 Ultra), why switch?”

It’s a valid question, but perhaps Apple sees its additional (and so far unique) partnership with OpenAI’s ChatGPT as the USP of its new and upcoming devices.

There’s also the question of revenue to consider. Gurman recently reported that Apple’s “long-term plan is to make money off Apple Intelligence”, with the company keen to “get a cut of the subscription revenue from every AI partner that it brings onboard.”

It seems likely, then, that Apple will launch a paid version of Apple Intelligence which incorporates the premium, subscription-only features of ChatGPT and Google Gemini.

Incidentally, Gurman also reports that Apple had brief conversations with Meta about incorporating its Llama AI model into Apple Intelligence, but the iPhone maker allegedly decided against a partnership due to privacy concerns and a preference for Google’s superior AI technology.

TechRadar – All the latest technology news

ChatGPT wrote a movie and yes, it freaked people out and forced a big change to its launch plans

The Prince Charles Cinema in London canceled the world premiere of “The Last Screenwriter” after receiving complaints over the use of ChatGPT to write the film’s script.

Swiss director Peter Luisi employed the generative artificial intelligence chatbot to write the film and gave the AI the screenwriting credit. Aptly enough for a script composed by an AI, “The Last Screenwriter” is about a famous screenwriter dealing with an AI scriptwriter named “ChatGPT 4.0” that outperforms him and somehow understands humanity better than the actual human.

Luisi produced the screenplay through a series of prompts to ChatGPT, starting by asking it to “write a plot to a feature-length film where a screenwriter realizes he is less good than artificial intelligence in writing.” He followed up by asking the AI to compose outlines and scenes, as well as name the movie’s characters. With some editing, the script was complete.

The movie’s press kit even includes a statement from ‘the screenwriter,’ who comes off as very proud of the screenplay.

“As the screenwriter of 'The Last Screenwriter,' I am excited to bring this thought-provoking story to life on the page,” ChatGPT is quoted as stating. “At its core, the film explores the intersection between technology and human creativity, and asks the question: can machines truly replace the human experience when it comes to art and storytelling?”

That almost sounds too human.

Fade to black

However, just before the premiere, the cinema canceled the event, citing a deluge of audience complaints. While sidestepping this specific controversy, the theater did make the point that the question of AI in entertainment is a larger issue than just this one film and one theater’s policy.

“The feedback we received over the last 24hrs once we advertised the film has highlighted the strong concern held by many of our audience on the use of AI in place of a writer which speaks to a wider issue within the industry,” the Prince Charles wrote in its statement. 

Proponents of AI in entertainment say it can offer innovative solutions and new perspectives. However, many worry about what it might mean for creative employment and even the future of storytelling.

Generative AI and its uses were at the core of the recent writer and screen actor union strikes, and both settlements addressed how companies should approach the technology. Even so, it’s not likely to be a settled issue when the technology itself is evolving so rapidly.

Don't cry for ChatGPT. Director Luisi still held a family-and-friends screening. Plus, there are plans to release the movie for free online on June 27, and to post the screenplay along with an account of how ChatGPT created it.


Microsoft dumps Windows 11 Recall feature from Copilot+ PCs at launch – an embarrassing turn of events, but ultimately for the best

In a very surprising move – albeit the right one, in our books – Microsoft has pulled the plug on its big Recall feature, so it now won’t launch as planned with Copilot+ PCs.

Microsoft just issued an update on Recall (hat tip to Tom’s Hardware) as follows: “Recall will now shift from a preview experience broadly available for Copilot+ PCs on June 18, 2024, to a preview available first in the Windows Insider Program (WIP) in the coming weeks.

“Following receiving feedback on Recall from our Windows Insider Community, as we typically do, we plan to make Recall (preview) available for all Copilot+ PCs coming soon.”

To recap briefly, Recall is the feature that constantly takes screenshots of activity on the host PC, allowing the user to search them with AI (Copilot), offering an undoubtedly powerful, ramped-up search experience.
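To make the concept concrete, here's a deliberately simplified, in-memory sketch of the idea: snapshots of on-screen text are captured over time and indexed so they can be searched later. Everything here (the class names, the stand-in OCR text) is hypothetical and illustrative – Microsoft's actual implementation captures real screenshots and runs local AI models over them.

```python
from dataclasses import dataclass
from itertools import count

_ticks = count()  # monotonically increasing capture order

@dataclass
class Snapshot:
    order: int
    app: str
    text: str  # stand-in for the OCR'd contents of a screenshot

class RecallIndex:
    def __init__(self):
        self.snapshots = []

    def capture(self, app, text):
        """Record what was 'on screen' at this moment."""
        self.snapshots.append(Snapshot(next(_ticks), app, text))

    def search(self, query):
        """Return snapshots whose text mentions the query, newest first."""
        hits = [s for s in self.snapshots if query.lower() in s.text.lower()]
        return sorted(hits, key=lambda s: s.order, reverse=True)

index = RecallIndex()
index.capture("Browser", "Flight prices to Lisbon in September")
index.capture("Mail", "Invoice #4512 from the printer supplier")
index.capture("Browser", "Lisbon hotel reviews near the waterfront")

results = index.search("lisbon")
print([s.app for s in results])  # ['Browser', 'Browser']
```

The toy version also makes the privacy worry obvious: everything that crosses the screen ends up in a searchable store, which is exactly why security researchers pushed back so hard.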

But there have been issues aplenty raised around Recall ahead of its (now canceled) launch, and much controversy stirred by those who have fudged their Windows 11 installations to enable and test the feature.

So, as noted in Microsoft’s statement, the expectation was very much that Recall would be live next week, when Copilot+ PCs finally emerge blinking in the sunlight, but that will no longer be the case.

Instead, Microsoft is going to make the Recall preview available to testers in early builds of Windows 11 in the “coming weeks,” and therein lies the second major admission. That makes it sound like testers won’t be getting the feature to play with next week, let alone buyers of Copilot+ PCs, and it may be some weeks before it arrives in whatever preview channel Microsoft deploys Recall to.

In short, Microsoft isn’t sure whether Recall will even be ready for testing any time soon.

Person with a laptop

(Image credit: Shutterstock.com / ImYanis)

Analysis: A major setback, but still the right decision

This has all been a bit of a fiasco, frankly. Microsoft announced Recall with a big fanfare for Copilot+ PCs, then proceeded to batten down the hatches as flak and doubts were fired at the feature left, right, and center. Defensiveness and evasion gave way to big changes being implemented for Recall to shore up security in light of all the negative feedback, and to ensure it’s turned off by default (something we argued strongly for).

Now, even after that, it’s been canned for the time being, at least for Copilot+ PCs. It’s not a good look, is it? It feels like Microsoft has been taken aback by all the salvoes fired at Recall by security researchers, rushed to implement some hefty changes, realized that there isn’t time to do all this properly – Copilot+ PCs are almost upon us – so put the full launch on ice to go back to testing.

There’s no doubting that this will be damaging to Copilot+ PCs to some extent. These are AI PCs, after all, and Windows 11’s key feature for them was Recall – there is other AI functionality for these devices, but nothing on the same scale. Just look at Dell’s Copilot+ PC web page, and how it’s built around Recall – it’s the key piece of the AI puzzle, and now it’s missing.

However, we’re glad Microsoft has taken the PR hit here, as it were, and pulled Recall, rather than putting its head down and trying to forge through with the feature. That would most likely have proved even more damaging, so we understand, and approve of, this move in the end.

Honestly, though, we don’t think Recall – given that it’s a sensitive and tricky piece of AI functionality with all those privacy and security aspects – should be pushed out to finished versions of Windows as a ‘preview’ at all. This should be done, dusted, tight and secure, before leaving testing – shouldn’t it?

Speaking of tight and secure, this is especially bad timing for Microsoft, given that Apple Intelligence was just unveiled, with the rival AI offering looking super-sharp on the privacy front, while Copilot appears to be stumbling about from blunder to blunder for the moment. Again, it’s not a good look, made much worse by Apple’s confident and professional revelation of its AI rival for Macs and iDevices (though we should note, we need to see Apple’s promises in action, not just words, before we get carried away with any comparisons).

Still, awkward days for Microsoft, but we’re hoping the company can now take the time to get things right with Recall. In fact, we’d argue it must take the time to do so, or risk blemishes on the Copilot brand that’ll quite probably cause lasting damage in terms of public perception.


Apple Vision Pro finally gets global launch dates – and cool new visionOS 2 tricks

As part of WWDC 2024, Apple has announced a slew of updates for its hardware – with the Apple Vision Pro kicking things off with not only new features, but also a release date for non-US markets (finally).

The big news is obviously that latter one. On June 28, the Apple Vision Pro is rolling out to China, Japan, and Singapore; then, two weeks later on July 12, it’ll launch in Australia, Canada, France, Germany, and the UK.

We don’t have the precise prices for the Vision Pro in these regions yet, but as soon as WWDC is over the online Apple Store should be updated with all of these details. Just be warned: you should expect this VR headset to be pricey, given that its US launch price was $3,499 – around £2,788 / AU$6,349.

This is a developing story; as we learn more, we'll update this page with the details.


Google’s Gemini Nano could launch on the Pixel 8a as early as next month

Google promised back in March that it would eventually bring Gemini Nano to the Pixel 8 and Pixel 8a, although no one knew exactly when. Until now, that is, as the update may arrive “very soon.”

Android Authority recently did a deep dive into the Pixel 8 series’ AICore app and found a pair of toggle switches inside the settings menu. The second switch is the main focus because flipping it turns on “on-device GenAI features.” Activating the tool presumably allows a Pixel 8 smartphone to harness Gemini Nano in order to power the device’s generative AI capabilities. 

It doesn’t actually say “Nano” in the accompanying text, though; just “Gemini”. However, we believe it is that particular model and not some stripped-down version – after all, this is what Google said it would do. Plus, the Pixel 8 trio runs on the Tensor G3 chipset, which can support the AI with the right adjustments.

No one knows exactly what the AI will actually do here, though. Gemini Nano on the Pixel 8 Pro powers the phone’s “multimodal capabilities,” including, but not limited to, the Summarize tool in the Recorder app and Magic Compose in Google Messages.

Imminent launch

The other toggle switch isn’t as noteworthy – a screenshot in the report reveals it enables “AICore Persistent.” This gives applications “permission… to use as many [device] resources as possible”. 

Android Authority states that the sudden appearance of these switches could mean Google is almost ready to “announce Nano support for the Pixel 8/8a” – maybe within the next couple of days or weeks. The company typically releases major updates for its mobile platforms in June, so we expect to see Gemini roll out to the rest of the Pixel 8 line next month.

According to the publication, the toggles will likely be found in the Developer Options section of the smartphone’s Settings menu. However, it's important to note that this could change at any time.

Technically-minded users can find the switches by digging around in the latest AICore patch. The software is available for download from the Google Play Store; however, your Pixel 8 model may need to be running Android 15 Beta 2.1. 9to5Google claims in its coverage to have found the AICore toggles on a Pixel 8 with the beta, but not on a phone running Android 14.

As for the Pixel 8 Pro, it's unknown if the high-end model is going to receive the same update although there's a chance it could. Android Authority points out it's currently not possible for Pro users to deactivate Gemini Nano, but this update could give them the option.

Be sure to check out TechRadar's list of the best Pixel phones for 2024.


OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won’t be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. What a new report, the latest in this saga, suggests is that OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is toward the front of the AI race, striving to be the first to deliver a software tool that communicates as closely as possible to the way humans do – able to talk to us using sound as well as text, and capable of recognizing images and objects.

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of these new capabilities. They claim that the incoming model has better logical reasoning than those currently available to the public, and can convert text to speech. None of this is new for OpenAI as such; what is new is all this functionality being unified in the rumored multimodal model.

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it’s still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google’s AI endeavor, Gemini.


OpenAI’s big Google Search rival could launch within days – and kickstart a new era for search

When OpenAI launched ChatGPT in 2022, it set off alarm bells at Google HQ about what OpenAI’s artificial intelligence (AI) tool could mean for Google’s lucrative search business. Now, those fears seem to be coming true, as OpenAI is set for a surprise announcement next week that could upend the search world forever.

According to Reuters, OpenAI plans to launch a Google search competitor that would be underpinned by its large language model (LLM) tech. The big scoop here is the date that OpenAI has apparently set for the unveiling: Monday, May 13.

Intriguingly, that’s just one day before the mammoth Google I/O 2024 show, which is usually one of the biggest Google events of the year. Google often uses the event to promote its latest advances in search and AI, so it will have little time to react to whatever OpenAI decides to reveal the day before.

The timing suggests that OpenAI is really gunning for Google’s crown and aims to upstage the search giant on its home turf. The stakes, therefore, could not be higher for both firms.

OpenAI vs Google

OpenAI logo on wall

(Image credit: Shutterstock.com / rafapress)

We’ve heard rumors before that OpenAI has an AI-based search engine up its sleeve. Bloomberg, for example, recently reported that OpenAI’s search engine will be able to pull in data from the web and include citations in its results. News outlet The Information, meanwhile, has made similar claims that OpenAI is “developing a web search product”, and there has been a near-constant stream of whispers to this effect for months.

But even without the direct leaks and rumors, it has been clear for a while that tools like ChatGPT present an alternative way of sourcing information to the more traditional search engines. You can ask ChatGPT to fetch information on almost any topic you can think of and it will bring up the answers in seconds (albeit sometimes with factual inaccuracies). ChatGPT Plus can access information on the web if you’re a paid subscriber, and it looks like this will soon be joined by OpenAI’s dedicated search engine.

Of course, Google isn’t going to go down without a fight. The company has been pumping out updates to its Gemini chatbot, as well as incorporating various AI features into its existing search engine, including AI-generated answers in a box on the results page.

Whether OpenAI’s search engine will be enough to knock Google off its perch is anyone’s guess, but it’s clear that the company’s success with ChatGPT has prompted Google to radically rethink its search offering. Come next week, we might get a clearer picture of how the future of search will look.


The 5 subtle AI announcements Apple made at its big iPad 2024 launch event

Today's Apple iPad Air and iPad Pro event was big on product launches, but quieter about AI. Or was it? 

While there weren't any AI announcements to rival the launch of the iPad Pro (2024) or new M4 chip, Apple did uncharacteristically mention 'AI' on eight different occasions during the event – and those covered five different new announcements about the tech.

Apple has previously been reluctant to join the chorus of AI hype, preferring to stick to the less zeitgeisty (if often more accurate) 'machine learning' during its launch events. But back in February, Tim Cook started making unexpectedly bold statements about AI, calling it a “huge opportunity for Apple” and saying that AI tools would be coming to Apple devices “later this year”.

So what exactly were those subtle AI announcements at the iPad-centric Apple event? Here are the times the Cupertino crew gave us a taster of what's to come next month at WWDC 2024.

1. The M4 chip is more powerful than 'any AI PC today'

An iPad sitting on a desk

(Image credit: Apple)

Apple's next-gen silicon had been rumored for the iPad Pro (2024), but it was still something of a surprise to see the M4 appear for the first time during a tablet announcement. Naturally, Apple was keen to flag its serious AI potential.

Tim Millet, Apple's VP of Platform Architecture, said that “the Neural Engine makes M4 an outrageously powerful chip for AI”, pointing to the simple example of it letting you isolate a subject from its background in 4K video with a tap in Final Cut Pro.

Clearly, Apple thinks its silicon makes for a strong foundation for AI apps, with Millet adding that “the Neural Engine is an IP-block in the chip dedicated to the acceleration of AI workloads”. And he finished with the bolder statement that “the Neural Engine in M4 is more powerful than any neural processing unit in any AI PC today”. We can't verify that yet, but it doesn't sound like an outlandish claim.

2. The Logic Pro 2 app has AI-powered Session Players

A hand holding an iPad that's running Logic Pro

(Image credit: Apple)

The Logic Pro app landed on the iPad about a year ago – and the new version that Apple's just announced has some AI-powered 'Session Players' for you to dabble with.

These are designed to play alongside the existing Drummer feature to give you something like a virtual band. Will Hui, Apple's Product Manager of Creative Apps, said: “Now Drummer is getting some new bandmates in a feature we call Session Players. We're introducing an all-new Bass and Keyboard Player, and like Drummer, they're built using AI.”

Given Apple's digital audio workstation was already a lot of fun, we're looking forward to giving them an audition.

3. The iPad Pro uses AI to help you scan documents

A hand holding an iPad that's scanning a document

(Image credit: Apple)

This might not be the most wildly exciting AI use case, but sometimes the tech is best suited to helping us with more mundane tasks – and Apple reckons it does just that with the iPad Pro (2024)'s AI-powered document scanning.

This comes courtesy of a new 'adaptive' True Tone flash, which works in tandem with AI algorithms to adjust the lighting depending on the document and ambient lighting. John Ternus, Apple's SVP of Hardware Engineering, said: “We've all had the experience of trying to scan a document in certain lighting conditions where it's hard to avoid casting a shadow – the new Pro solves this problem.”

“It uses AI to automatically detect documents like forms and receipts,” he added. “If shadows are in the way, it instantly takes multiple photos with the new, adaptive flash. The frames are stitched together and the result is a dramatically better scan.”

We'll have to see how well that works in practice, but because it's built into iPadOS it'll also be in the Camera app, Files, Notes, and third-party apps, too.

4. The iPad Air 6 isn't left out of the AI party 

An iPad sitting on a desk

(Image credit: Apple)

Apple was keen to stress that the iPad Pro (2024) isn't the only tablet in its range suitable for AI-powered tasks or future apps, despite that tablet being the only one with the new M4 chip.

Melody Kuna, Apple's Director of iPad Product Design, said that “with M2, the new [iPad] Air is also an incredibly powerful device for AI. It's blazing fast for powerful machine learning features in iPadOS, like Visual Look Up, Subject Lift, and Live Text capture.”

So while the iPad Pro's M4 chip is capable of an impressive 38 trillion operations per second (which apparently makes it sixty times faster than Apple's A11 Bionic neural engine from the iPhone 8), the iPad Air 6 won't be left out of future AI apps and features on Apple's tablets.

5. iPadOS is just getting started with AI

A person sitting at a desk working on an iPad and monitor

(Image credit: Apple)

On a similar theme, Apple's final mention of AI during its long-awaited iPad launch was reserved for iPadOS.

Will Hui, Apple's Product Manager of Creative Apps, said that “iPadOS has advanced frameworks like Core ML that make it easy for developers to tap into the Neural Engine to deliver powerful AI features, right on device”.

Clearly, Apple is treading carefully with AI in its own apps, with only Logic Pro's session players and the iPad Pro's document scanning making much use of it so far. But it also put out a call to developers to tap the potential of its software (and chips) for AI-powered features. And we can expect to hear a lot more about those next month at WWDC 2024.


Apple’s new Pride Collection heralds the launch of iOS 17.5 with dynamic wallpaper

Continuing a yearly tradition, Apple has revealed this year’s Pride Collection celebrating the LGBTQ+ community. The 2024 set consists of two new wallpapers for iPhones and iPads plus a new watch face and wristband for the Apple Watch.

Launching first on May 22 is the band which is called the Pride Edition Braided Solo Loop. Apple states the color scheme was inspired by multiple pride flags. The pink, light blue, and white threads are meant to “represent transgender and nonbinary” people, while “black and brown symbolize Black, Hispanic, and Latin communities” plus groups who have been hurt by HIV/AIDS. Laser-etched on the lug are the words “PRIDE 2024”. 

The Pride Braided Loop will be available in both 41mm and 45mm sizes for $99. It’ll fit the Apple Watch SE as well as “Apple Watch Series 4 or later” models. You can purchase it in the US on the 22nd at a physical Apple Store or on the company’s website. Other global regions can buy the band the following day. No word on how much it’ll cost outside the United States, although we did ask.

Dynamic wallpaper

The wallpaper coming to Apple hardware is known as Pride Radiance. What makes it different is that it’s not a static image, but a dynamic one. On the Apple Watch, the streams of light actively trace the numbers of the digital clock, and they even react in real time as the wearable moves around. 9to5Mac claims in its coverage that users can customize the look of the wallpaper by choosing “from several style palettes.”

On iPhones and iPads, Pride Radiance is also dynamic, but it doesn’t trace the clock. Instead, the light spells out the word “pride” on the screen. Those interested can download the wallpaper through the Apple Watch and Apple Store app “soon”. An exact date wasn’t given. However, the company did confirm it’ll roll out with iOS 17.5, iPadOS 17.5, and watchOS 10.5.

This is noteworthy because, up until this recent post, the company had yet to announce when the next big software update would arrive for its devices. iOS 17.5 in particular is slated to introduce several interesting features, such as the ability to download apps from developer websites instead of the App Store. We did see clues last week that the company is working on implementing Repair State. This places iPhones “in a special hibernation mode” whenever people take the device in for repairs.

Given that Repair State appears to still be in the early stages, we most likely won’t see it in iOS 17.5 a few weeks from now, although it may roll out with iOS 18.

Be sure to check out TechRadar's suggestions for the best Apple Watch for 2024.


Signed, sealed, delivered, summarized: new Gemini-powered AI feature for Gmail looks like it’s close to launch

A summarize feature powered by Gemini, Google’s recently debuted generative AI model and digital assistant, is coming to the Gmail app for Android – and it could make reading and understanding emails much faster and easier. The feature is expected to roll out soon to all users, who will be able to provide feedback by rating the quality of the generated email summaries.

The feature has been suspected to be in the works for some time now, as documented by Android Authority, and now it’s apparently close to launch.

One of Android Authority’s code-sleuthing sources managed to get the feature working on Gmail app version 2024.04.21.626860299 after a fair amount of tinkering. The steps they took aren’t disclosed, so if you want to replicate this you’ll have to do some experimenting, but the fact that the summarize feature can be up and running shows that Android Gmail users may not have to wait very long.

There is a screenshot featured in Android Authority’s report showing a chat window where the user asks Gemini to summarize an email they currently have open, and Gemini obliging. Apparently, this feature will be available via a ‘Summarize this email’ button under an email’s subject line – which I assume triggers the above prompt – and this should return a compact summary of the email. This could prove especially helpful when dealing with a large number of emails, or for particularly long emails with many details.

Once the summary is provided, users will be shown thumbs up and thumbs down buttons under Gemini’s output, similar to OpenAI’s ChatGPT after it gives its reply to a user’s query. This will give Google a better understanding of how helpful the feature is to users and how it could be improved. There will also be a button that allows you to copy the email summary to your clipboard, according to the screenshot. 
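The summarize-and-rate flow described above can be sketched in a few lines. To be clear, this is a toy illustration under stated assumptions: the naive "summarizer" (it just keeps the first two sentences) is a stand-in for the actual Gemini model, and the function names and feedback structure are invented for the example, not Gmail's real API.

```python
import re

def summarize_email(body: str, max_sentences: int = 2) -> str:
    """Naive extractive stand-in for the model: keep the first few sentences."""
    sentences = re.split(r'(?<=[.!?])\s+', body.strip())
    return " ".join(sentences[:max_sentences])

feedback_log = []

def rate_summary(summary: str, thumbs_up: bool) -> None:
    """Record the thumbs-up/down signal shown under the generated summary."""
    feedback_log.append({"summary": summary, "helpful": thumbs_up})

email = (
    "Hi team, the quarterly review moves to Thursday at 3pm. "
    "Please update your slides by Wednesday evening. "
    "Lunch will be provided, and remote attendees can join via the usual link."
)

summary = summarize_email(email)
rate_summary(summary, thumbs_up=True)
print(summary)
```

The shape of the interaction is the interesting part: generate a summary from the open email, then attach a helpful/not-helpful rating to that exact output so the provider can judge summary quality in aggregate.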

A man working at an office and looking at his screen while using Gmail

(Image credit: Shutterstock/fizkes)

When to expect the new feature

The speculation is that the feature could be rolled out during Google’s I/O 2024 event, its annual developer conference, which is scheduled for May 14, 2024. Google is also expected to show off the next iteration of its Pixel A series, the Pixel 8a; it could also showcase its progress on augmented reality (AR) technology, along with new software and service developments, especially for its devices and ChromeOS (the operating system that powers the best Chromebooks).

Many Gmail users could find this new summarize feature a time-saver that streamlines their email, but as with any generative AI, there are concerns about the accuracy of the generated text. If Gemini omits or misinterprets important information, it could lead to oversights or misunderstandings. I’m glad that Google has the feedback system in place, as this will show whether the feature is actually serving its purpose well. We’ll have to wait and see; the proof will be in the pudding as to whether it improves productivity and proves reasonably accurate when it’s finally released.
