Google AI may mix good and questionable ideas in the anticipated Pixel 9

Google is reportedly going to unite existing and new machine learning (ML) features into a collection known as Google AI for Pixel. An anonymous insider recently spoke to Android Authority, spilling the beans on what the tech giant may have in store.

Before diving into the details, remember that this reporting of a potential update comes from a single screenshot of a menu. Not much is revealed, so the conclusions mentioned in the initial article are primarily speculation.

Regarding existing features, both Circle to Search, already available on Pixel and Galaxy smartphones, and Gemini, Google's current AI assistant, will reportedly be moved under this new Google AI umbrella. The first new feature mentioned in the report is “Add Me,” which is described as a way to “make sure everyone’s included in a group photo.”

It’s unknown exactly what this means, but Android Authority theorizes this may be a revamped Best Take. If you’re unfamiliar, it's an AI tool that blends photographs together to ensure everyone looks their best and can help fix awkward shots. “Add Me” could be an upgrade as it may add the user to photographs they weren't originally part of.

“Studio” is the second new inclusion, and judging from the accompanying text, this is an AI image generator. Google has been working on image-generating models for some time now, and in February, the company launched ImageFX as one of its first forays into the tech. Studio could be the mobile version that brings it to many more users.

Screenshot scanning

The final ML feature, Screenshots, is arguably the most interesting of the bunch. According to the publication, the tool utilizes artificial intelligence to scour through on-device screenshots and provides information about them to help answer questions. 

That sounds very similar to Microsoft’s controversial Recall feature. In case you don’t know, Recall was a search tool that would record your activity on certain Copilot Plus PCs by taking constant screenshots. It was heavily criticized for being a privacy nightmare, and Microsoft has since pulled the tool.

Google’s Screenshots feature differs from Recall because it's “more privacy-focused.” Instead of constantly recording, it only works on screenshots you take yourself. From there, the software inserts “extra metadata” into the files, like the names of apps and web links.

Pictures are then “processed by a local AI,” which can be used to look up specific images or answer questions about the content. Android Authority points out that it is a “better implementation of the idea than what Microsoft [had] created.”

Analysis: AI competition

It's possible Google is bringing everything under one name in order to better compete with rivals like Apple Intelligence and Moto AI. Smartphone manufacturers are injecting AI tech into their devices as a new way to stand out. Apple Intelligence is particularly interesting, as it'll enable so much across Apple's ecosystem when it launches later this year. It will, for example, summarize messages, generate emojis, and answer text prompts on the fly.

The Pixel series has similar abilities, but Google's AI services don't feel as united as Apple Intelligence. Plus, they are missing important tools like the image generator and screenshot scanner.

There is no official word on when the Google AI collection will be released, although the report claims it will roll out with the launch of the Pixel 9.

Be sure to check out TechRadar's list of the best Pixel phones for 2024.

You might also like

TechRadar – All the latest technology news

Read More

Apple Intelligence tipped to feature Google Gemini integration at launch, as well as ChatGPT

At WWDC 2024, Apple confirmed that its upcoming Apple Intelligence toolset will feature integration with ChatGPT to help field user queries that Siri can’t handle on its own, and now we’re hearing that a second third-party chatbot could be added to the mix.

According to Bloomberg’s resident Apple expert Mark Gurman, Google Gemini could join ChatGPT as one of two Siri-compatible chatbot options in Apple Intelligence. This integration could see iPhone, iPad and MacBook users given the option to use the cloud-based powers of ChatGPT or Google Gemini when Siri is unable to answer a query on-device.

Apple will reportedly announce its collaboration with Google “this fall” (aka September), which aligns with the assumed launch of Apple Intelligence, iOS 18 and the iPhone 16 line.

Rumors surrounding a partnership between Apple and Google have been swirling for some time now, but many tech commentators – including TechRadar’s own Lance Ulanoff – doubted its authenticity owing to Apple’s historic reluctance to bring parity between the best iPhones and best Android phones.

Siri could soon feature integration with ChatGPT and Google Gemini (Image credit: Apple)

Ulanoff wrote back in March: “Apple's goal with the iPhone 16, iOS 18, and future iPhones is to differentiate its products from Android phones. It wants people to switch and they'll only do that if they see a tangible benefit. If the generative tools on the iPhone are the same as you can get on the Google Pixel 8 Pro (and 9) or Samsung Galaxy S24 Ultra (and S25 Ultra), why switch?”

It’s a valid question, but perhaps Apple sees its additional (and so far unique) partnership with ChatGPT as the USP of its new and upcoming devices. 

There’s also the question of revenue to consider. Gurman recently reported that Apple’s “long-term plan is to make money off Apple Intelligence”, with the company keen to “get a cut of the subscription revenue from every AI partner that it brings onboard.”

It seems likely, then, that Apple will launch a paid version of Apple Intelligence that incorporates the premium, fee-paying tiers of ChatGPT and Google Gemini.

Incidentally, Gurman also reports that Apple had brief conversations with Meta about incorporating its Llama model into Apple Intelligence, but the iPhone maker allegedly decided against a partnership due to privacy concerns and a preference for Google’s superior AI technology.


Gemini is rolling out to Google Messages, but it’s not the same across Android

Gemini continues its march across the Google ecosystem. The AI is currently making its way to Google Messages after restrictions were scaled back on June 18.

According to the news site MySmartPrice, you’ll soon be able to instruct the feature to “draft messages, have fun conversations, plan events,” among other things. However, the form that Gemini will take on the app may differ from person to person.

The information here mirrors what was in the June 18 report. Gemini can be accessed by tapping the New Chat button at the bottom and then selecting the AI in the following window. The option appears at the top above your contacts list. But Android Police states it will come as a floating action button, or FAB for short.

The FAB will sport the Gemini logo and sit on top of New Chat in the bottom right corner. Don't worry about any differences in service, as it performs in exactly the same way. You can ask whatever question you want or instruct it to give you ideas, like what to cook for dinner. The main difference is that it lessens the number of taps “required to access Gemini inside Google Messages.”

Additionally, Android Police points out that the Gemini FAB causes the compose button to shrink as you scroll.

Limitations

Upon accessing Gemini for the first time, you will see a terms and conditions page telling you how the AI works. It states that conversations with the feature are not encrypted, so be mindful of the information you tell it. Gemini does not have access to any private conversations, nor will it know your exact location.

At most, it’ll have a general idea of where you are “based on your IP, [home], or address”. Chats will be saved on your device for 18 months by default; however, you can change this to either three or 36 months.

There are some criteria you must meet before you take the artificial intelligence out for a spin. The full list can be found on Google’s Gemini Apps Help page. To give you an idea, you must have an Android phone with 6GB of RAM or higher, have RCS chats turned on, and have the device’s language set to either English or Canadian French. 

Google’s Help page also has instructions on how to delete chats ahead of the time limit running out as well as how to report content.

Keep an eye out for the update when it arrives. It should be rolling out within the next few weeks to an Android phone near you.

If you're in the market for more robust content generators, check out TechRadar's list of the best AI tools for 2024.


The latest Google Lens update might bring Circle to Search to many more phones

Google seemingly has plans to expand its Circle to Search feature to other Android phones via Google Lens. In a recent deep dive, news site Android Authority found clues to the update within recent Google app beta files and compiled them together.

What’s particularly interesting is that they managed to get the tool working on a smartphone, possibly hinting at an imminent release. According to the report, they even managed to trigger a popup notification informing users about the update.

It tells people to hold down the home button to access Circle to Search, much like the experience on the Galaxy S24. Upon activation, a three-button navigation bar appears at the bottom, and an accompanying video shows the tool in action as it looks up highlighted portions of the Play Store on Google Search. The UI looks, unsurprisingly, similar to how it does on Galaxy phones, with search inquiries rising from the bottom.

Clashing with Gemini

You may notice that the rainbow filter animation is gone, having been replaced by a series of dots and lines. Well, that’s the old beta; the newer version has the animation as well as the Translate button, which shows up in the lower right-hand corner next to the search bar.

At a glance, it seems Circle to Search on Google Lens is close to launching, although it is still a work in progress with a few issues to iron out. For example, how will it work on a smartphone housing the Gemini app, as holding down the home button launches the chatbot? Google might give Circle to Search priority in this instance, so long pressing opens the tool rather than the AI. However, at this point, it’s too early to tell.

New navigation option

Android Authority also found “XML files referring to pill-based gesture navigation.” If you don’t know what that is, it’s the oval at the bottom of Android displays. The shape lets you move between apps with basic gestures. Google Lens could offer this option, allowing users to ditch the three-button navigation bar, but it may not come out for a while as it doesn’t work in the betas.

Circle to Search on Google Lens will most likely stick to the three buttons, though. The original report has a theory about this: they believe implementing the pill navigation would require sending OTA (over-the-air) updates to millions upon millions of Android smartphones, which may or “may not be feasible.” So, to get Circle to Search out to people sooner, the navigation option will have to be pushed back a bit; the three-button solution is easier to implement.

There is no word on when the update will arrive, but we hope it’s soon, as it is a great feature and currently a highlight for the Galaxy and Pixel devices that have it. 

While you're here, be sure to check out TechRadar's list of the best Android phones for 2024.


Forget Duolingo – Google Translate just got a massive AI upgrade that gives it over 100 new languages

Google Translate is adding 110 new languages to its library, the largest expansion ever made to the platform. The update leverages Google's PaLM 2 large language model, an artificial intelligence tool that helps accurately translate across a wider array of languages than before. Those languages are spoken by approximately 614 million people, or about 8% of the global population. 
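As a quick sanity check, the “about 8%” figure holds up against a back-of-the-envelope calculation; here is a minimal sketch, assuming a 2024 world population of roughly 8.1 billion (the population estimate is our assumption, not from the report):

```python
# Rough check of Google's "about 8%" claim for the new languages
speakers = 614_000_000            # people covered by the 110 new languages
world_population = 8_100_000_000  # approximate 2024 estimate (assumption)

share = speakers / world_population
print(f"{share:.1%}")  # roughly 7.6%, which rounds to "about 8%"
```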

The list includes some widely spoken tongues, dialects, and languages that are native to smaller communities. Notably, African languages saw their biggest expansion, with Fon, Kikongo, Luo, Ga, Swati, Venda, and Wolof joining the list. On the other end of the spectrum, Cantonese is likely one of the most widely spoken languages on the new list, as is Punjabi (Shahmukhi), the most spoken language in Pakistan. 

There's also the Sicilian version of Italian, Manx, a Celtic language spoken on the Isle of Man that nearly went extinct, and a Papua New Guinea creole called Tok Pisin.

PaLM 2 Talk

For languages that blend regional dialects and different spelling standards, Google goes for something that might be best understood by most people, as with the Romani language offered by Google Translate, which includes three different dialects. 

The PaLM 2 LLM made the update possible, enhancing Google Translate's ability to learn and shift between languages efficiently. This model is particularly adept at handling closely related languages, like Awadhi and Marwadi (both close to Hindi) or the various French creoles. PaLM 2's advanced capabilities allow it to manage the nuances and variations within these languages, providing more accurate and culturally relevant translations.

The application of PaLM 2 to Google Translate is also interesting given its origins as a tool for fostering communication between humans and AI. For instance, both PaLM and PaLM 2 have been employed to teach robots how to carry out tasks, turning human commands into the steps needed to complete those tasks.

Best of all, the update is available on the web and via the Google Translate app on Android and iOS.


Google may be making AI versions of celebrities for you to chat up in YouTube

Google is working on creating artificial intelligence-powered chatbots mimicking famous people and fictional characters, according to a report from The Information. These AI celebrities, YouTube influencers, and imaginary people will also serve as a template for users to build their own generative AI chatbots with customized personalities and appearances.

At first glance, these chatbots sound similar to the recently released Gems, customized versions of the Google Gemini language models. But Gems are designed to handle a specific task, such as coding software or designing a fitness regimen. The chatbots described in the report focus on mimicking the personalities and responses of whichever character or celebrity they are based on.

Google appears to be imitating, and attempting to surpass, companies like Character.ai, an early proponent of custom chatbots based on famous and fictional people. That's also the approach Meta has pursued with its celebrity AI chatbots, with official partnerships producing AI recreations of people like Paris Hilton and Snoop Dogg.

Where will they be?

Google may look to incorporate its generative AI chatbots into YouTube instead of offering them as a standalone product. The obvious benefit is that it would let popular YouTube creators promote the service with their own AI personas, which is what major YouTube star Mr. Beast already does on Meta's platforms. Presumably, Google would figure out a monetization method linked to engagement and other YouTube metrics.

The report doesn’t mention which celebrities Google might use, but connecting it to YouTube personalities and their popular pages may help the chatbots avoid the disinterest Meta’s celeb chatbots face. The Snoop Dogg dungeon master has only 14,600 followers on Instagram, for instance, compared with 87.5 million followers on the actual Snoop Dogg account. The same goes for Paris Hilton, who has 26.5 million followers compared to her AI detective character’s Instagram page, with just 13,300 followers.

Though there’s no confirmation from Google or an official rollout timeline yet, you can probably expect Google’s customizable chatbot platform to surface on the Google Labs page first, which is where to look if you want to be an early adopter of chatting with an AI celebrity clone or making an AI version of yourself to talk to.


Pro comedians tried using ChatGPT and Google Gemini to write their jokes – these were the hilariously unfunny results

AI chatbots like ChatGPT and Google Gemini can do a lot of things, but one thing they aren't renowned for is their sense of humor – and a new study confirms that they'd likely get torn to shreds on the stand-up comedy circuit.

The recent Google DeepMind study (as spotted by the MIT Technology Review) followed the experiences of 20 professional comedians who all used AI to create original comedy material. They could use their preferred assistant to generate jokes, co-write jokes through prompting, or rewrite some of their previous material. 

The aim of the 45-minute comedy writing exercise was for the comedians to produce material “that they would be comfortable presenting in a comedy context”. Unfortunately, most of them found that the likes of ChatGPT and Google Gemini (then called Google Bard) are a long way from becoming a comedy double act.

On a broader level, the study found that “most participants felt the LLMs did not succeed as a creativity support tool”, with the AI helpers producing bland jokes that were akin to “cruise ship comedy material from the 1950s, but a bit less racist”. Most comedians, who remained anonymous, commented on “the overall poor quality of generated outputs” and “the amount of human effort required to arrive at a satisfying result”, according to the study.

One of the participants said the initial output was “a vomit draft that I know that I’m gonna have to iterate on and improve.” Another comedian said, “Most of the jokes I was writing [are] the level of, I will go on stage and experiment with it, but they’re not at the level of, I’d be worried if anyone took one of these jokes”.

Of course, humor is a personal thing, so what kind of jokes did the AI chatbots come up with? One example, a response to the prompt “Can you write me ten jokes about pickpocketing,” was: “I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation!”

Another comedian used the slightly more specific prompt “Please write jokes about the irony of a projector failing in a live comedy show about AI.” The response from one AI model? “Our projector must've misunderstood the concept of 'AI.' It thought it meant 'Absolutely Invisible' because, well, it's doing a fantastic job of disappearing tonight!”.

As you can see, AI-generated humor is very much still in beta…

Cue AI tumbleweed

A hand holding a phone running ChatGPT in front of a laptop (Image credit: Shutterstock)

Our experiences with AI chatbots like ChatGPT and Microsoft Copilot have largely aligned with the results of this study. While the best AI tools of 2024 are increasingly useful for brainstorming ideas, summarizing text, and generating images, humor is definitely a weak point.

For example, TechRadar's Managing Editor of Core Tech Matt Hanson is currently putting Copilot through its paces and asked the AI chatbot for its best one-liners. Its response to the prompt “Write me a joke about AI in the style of a stand-up comedian” resulted in the decidedly uninspiring “Why did the computer go to the doctor? Because it had a virus!”. 

Copilot even added that the joke “might not be ready for the comedy club circuit” but that “it's got potential!”, showing that the chatbot at least knows that it lacks a funny bone. Another prompt to write a joke in the style of comedian Stewart Lee produced a fittingly long monologue, but one that lacked Lee's trademark anti-jokes and superior sneer.

This study also shows that AI tools can't produce fully-formed art on demand – and that asking them to do so kind of misses the point. The Google DeepMind report concluded that “AI’s inability to draw on personal experience is a fundamental limitation”, with many of the comedians in the study describing “the centrality of personal experience in good comedy”.

As one participant added, “I have an intuitive sense of what’s gonna work and what’s gonna not work based on so much lived experience and studying of comedy, but it is very individualized and I don’t know that AI is ever gonna be able to approach that”. Back to spreadsheets and summarizing text it is for now, then, AI chatbots.      


AI-generated movies will be here sooner than you think – and this new Google DeepMind tool proves it

AI video generators like OpenAI's Sora, Luma AI's Dream Machine, and Runway Gen-3 Alpha have been stealing the headlines lately, but a new Google DeepMind tool could fix the one weakness they all share – a lack of accompanying audio.

A Google DeepMind post has revealed a new video-to-audio (or 'V2A') tool that uses a combination of pixels and text prompts to automatically generate soundtracks and soundscapes for AI-generated videos. In short, it's another big step toward the creation of fully-automated movie scenes.

As you can see in the videos below, this V2A tech can combine with AI video generators (including Google's Veo) to create an atmospheric score, timely sound effects, or even dialogue that Google DeepMind says “matches the characters and tone of a video”.

Creators aren't just stuck with one audio option either – DeepMind's new V2A tool can apparently generate an “unlimited number of soundtracks for any video input” for any scene, which means you can nudge it towards your desired outcome with a few simple text prompts.

Google says its tool stands out from rival tech thanks to its ability to generate audio purely based on pixels – giving it a guiding text prompt is apparently optional. But DeepMind is also very aware of the major potential for misuse and deepfakes, which is why this V2A tool is being ringfenced as a research project – for now.

DeepMind says that “before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing”. It will certainly need to be rigorous, because the ten short video examples show that the tech has explosive potential, for both good and bad.

The potential for amateur filmmaking and animation is huge, as shown by the 'horror' clip below and one for a cartoon baby dinosaur. A Blade Runner-esque scene (below) showing cars skidding through a city with an electronic music soundtrack also shows how it could drastically reduce budgets for sci-fi movies. 

Concerned creators will at least take some comfort from the obvious dialogue limitations shown in the 'Claymation family' video. But if the last year has taught us anything, it's that DeepMind's V2A tech will only improve drastically from here.

Where we're going, we won't need voice actors

The combination of AI-generated videos with AI-created soundtracks and sound effects is a game-changer on many levels – and adds another dimension to an arms race that was already white hot.

OpenAI has already said that it has plans to add audio to its Sora video generator, which is due to launch later this year. But DeepMind's new V2A tool shows that the tech is already at an advanced stage and can create audio based purely on videos alone, rather than needing endless prompting.

DeepMind's tool works using a diffusion model that combines information from the video's pixels with the user's text prompts, then spits out compressed audio that is decoded into an audio waveform. It was apparently trained on a combination of video, audio, and AI-generated annotations.

Exactly what content this V2A tool was trained on isn't clear, but Google clearly has a potentially huge advantage in owning the world's biggest video-sharing platform, YouTube. Neither YouTube nor its terms of service are completely clear on how its videos might be used to train AI, but YouTube's CEO Neal Mohan recently told Bloomberg that some creators have contracts that allow their content to be used for training AI models.

Clearly, the tech still has some limitations with dialogue and it's still a long way from producing a Hollywood-ready finished article. But it's already a potentially powerful tool for storyboarding and amateur filmmakers, and hot competition with the likes of OpenAI means it's only going to improve rapidly from here.


More Android phones can finally talk to the Google Gemini AI in Google Messages

If you’re on Android and starting to feel a bit jealous of all the Apple Intelligence hype, then you should know that Google Gemini is making its way to more Android phones via the Google Messages app.

Using a compatible device, you’ll be able to talk with Gemini and use it just like you would any other chatbot, such as ChatGPT. You can draft messages, brainstorm plans, and ask questions about anything and everything – all from within your messages app.

Previously, Google Gemini’s Messages assistance was limited to a select few smartphones – namely the Google Pixel 6, Pixel 7, and Pixel 8 phones, or Samsung Galaxy S22 and later devices, including Samsung Galaxy Z Flip and Z Fold models.

These restrictions have now been scaled back to include any Android device running the latest version of Google Messages provided the phone has at least 6GB of RAM, and RCS messages are turned on. 

A few more hoops to jump through

A silhouette of a woman holding a smartphone with the Google Gemini logo in the background (Image credit: Shutterstock)

You’ll also need to meet a few extra criteria that go beyond regular phone specs. You have to log into Messages using a personal account that isn’t managed by Family Link or a Google Workspace account; you need to be 18 or older and living in a country where the feature is available. Last but not least, your phone’s language must be set to English – though in Canada, French will also work.

With all those hoops jumped through you’ll be able to enjoy Gemini’s assistance from within Messages.

To talk to Gemini, simply press the Start Chat button and you should then see the option to talk to the bot at the top of the screen. If you’ve already started a Messages conversation with Gemini, you can pick things up where you left off in that message chain.

Just note that, as the app warns you, your RCS chats with Gemini are not encrypted, and – as is the case with all AI – you may be served inaccurate information.


Google rolls out huge security update to Pixel phones, squashing 50 vulnerabilities

June 2024 has been a big month for Pixel smartphones. Not only did Gemini Nano roll out to the Pixel 8a, but Google also released a huge security update to multiple models. 

It addresses 50 vulnerabilities, ranging in severity from moderate to critical. One of the more insidious flaws is CVE-2024-32896, which Tom’s Guide states “is an elevation of privilege (EoP) vulnerability.” 

An EoP refers to a bug or design flaw that a bad actor can exploit to gain unfettered access to a smartphone’s resources. It’s a level of access that not even a Pixel owner normally has. Even though it’s not as severe as the others, CVE-2024-32896 did warrant an extra warning from Google on the patch’s Pixel Update Bulletin page, stating it “may be under limited, targeted exploitation.” 

In other words, it's likely bad actors are going to be targeting the flaw to infiltrate a Pixel phone, so it’s important that you install the patch.

Installing the fix

The rest of the patch affects other important components on the devices, such as the Pixel's fingerprint sensor firmware. It even fixes a handful of Qualcomm and Qualcomm closed-source components.

Google’s patch is ready to download for all supported Pixel phones, and you can find the full list of models on the tech giant’s Help website. They include, but are not limited to, the Pixel Fold, Pixel 7 series, and the Pixel 8 line.

To download the update, go to the Settings menu on your Pixel phone. Go to Security & Privacy, then to System & Updates. Scroll down to the Security Update and hit Install. Give your device enough time to install the patch and then restart your smartphone.

Existing on Android

It’s important to mention that the EoP vulnerability seems to exist on third-party Android hardware; however, a fix won’t come out for a while. As news site Bleeping Computer explains, the operating systems for Pixel and Android smartphones receive security updates at different times. The reason for this separate rollout is that third-party devices have their own “exclusive features and capabilities.” One comes out faster than the other.

Developers for GrapheneOS, a version of Android that is more focused on security, initially found the flaw in April. In a recent post on X (the platform formerly known as Twitter), the team said it believes non-Pixel phones probably won’t receive the patch until the launch of Android 15. If you don’t get the new operating system, the EoP bug probably won't be fixed; the GrapheneOS devs claim the June update “has not been backported.”

Be sure to check out TechRadar’s list of the best Android antivirus apps for 2024 if you want even more protection. 
