Google isn’t done trying to demonstrate Gemini’s genius and is working on integrating it directly into Android devices

Google’s newly reworked and rebranded family of generative AI models, Gemini, may still be at the beginning of its development journey, but Google has big plans for it. The company intends to integrate Gemini into Android software for phones, and users are expected to be able to access it offline in 2025, according to Brian Rakowski, a top executive in Google’s Pixel division.

Gemini is a series of large language models designed to understand and generate human-like text and more. The most compact and efficient of these is Gemini Nano, which is intended for on-device tasks and is the model currently built and adapted to run on Pixel phones and other capable Android devices. According to Rakowski, it’s Gemini Nano’s larger sibling models – which currently require an internet connection because they only live in Google’s data centers – that are expected to be integrated into new Android phones starting next year.

Google has been able to do this thanks to recent breakthroughs in engineers’ ability to compress these bigger, more complex models to a size that’s feasible for smaller devices. One of these larger siblings is Gemini Ultra, considered a key competitor to OpenAI’s premium GPT-4 model, and the compressed version of it will be able to run on an Android phone with no extra assistance.

This would mean users could access the processing power Google is offering with Gemini whether or not they’re connected to the internet, potentially improving their day-to-day experience with it. It also means that whatever you enter into Gemini wouldn’t necessarily have to leave your phone to be processed (if Google wills it, that is), making it easier to keep your entries and information private – cloud-based AI tools have been criticized in the past for having inferior digital security compared to locally run models. Rakowski told CNBC that what users will experience on their devices will be “instantaneous without requiring a connection or subscription.”

Three Android phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

A potential play to win users' favor 

MSPowerUser points out that the smartphone market has cooled as of late, and some manufacturers might be trying to capture potential buyers’ attention by offering devices capable of utilizing what modern AI has to offer. While AI is an incredibly rich and intriguing area of research and novelty, it might not be enough to convince people to swap their old phone (which may already be capable of running something like Gemini or ChatGPT) for a new one. Right now, AI makers hoping to raise trillions of dollars in funding are likely to keep offering versions that run on existing devices so people can try them for themselves, and my guess is that this satisfies most people’s AI appetites for now.

Google, Microsoft, Amazon, and others are all trying to develop their own AI models and assistants in a race to be the first to reap the rewards. Today’s AI models are extremely impressive and can be genuinely surprising, and they can help you at work (although caution should be heavily exercised if you do this), but their novelty is currently the biggest draw they have.

These tools will have to demonstrate continuous quality-of-life improvements to make the kind of impression they’re aiming for. I do believe that making these models widely available on users’ devices, and giving users the option and the capability to use them offline, is a step that could pay off for Google in the long run – and I would like to see other tech giants follow in its path.

TechRadar – All the latest technology news

We’ll likely get our first look at Android 15 this week – here’s what to expect

The first preview version of Android 15 may launch on Thursday, February 15 if a recently discovered developer comment is to be believed.

It was originally posted to Google’s Android Open Source Project website on February 13, although the page hosting the message has since been deleted; if you go to it now, you’ll be greeted with an error message. Fortunately, 9to5Google has a screenshot of the comment, and it states, in no uncertain terms, that the “first Developer preview is scheduled for Feb 15”. The comment even refers to the OS as “Android V”, which the publication explains is a reference to its codename, “Vanilla Ice Cream”.

Early Android builds are typically exclusive to Pixel devices, and 9to5Google believes this will be the case with the preview. Because it’s meant primarily for developers, the build probably won’t see a public release due to software instability. That said, we do expect people to crack open the preview and spill all of its contents onto the internet, revealing what Android 15 is capable of.

It’s unknown what this early version of the OS will bring; however, we can look at previous reports to give you an idea of what may be arriving.

Features to expect

Back in December 2023, three features were found hidden in the files of a then-recent Android 14 beta that appear to be destined for Android 15.

The first is called Communal Space, which lets users add widgets to the lock screen. At the time of the initial report, only Google Calendar, Google Clock, and the main Google app could be added, but we believe there’s a good chance more will be supported at launch. The second is the introduction of a battery health percentage readout akin to what the iPhone 15 has. It’ll offer a crystal-clear indication “of how much your phone’s battery has degraded” compared to when it was fresh out of the box.

Communal Space on Pixel tablet

(Image credit: Mishaal Rahman/Android Authority)

The third feature is called Private Space and, according to Android Police, may be Google’s take on Samsung’s Secure Folder. It hides apps on your smartphone away from prying eyes. This can be especially helpful if you happen to share a device with others. 

Then in January, more news came out claiming Android 15 might have a feature that lets users effortlessly share wireless audio streams. On the surface, it sounds similar to Bluetooth Auracast, a form of Bluetooth LE Audio for broadcasting audio to nearby devices. We wouldn’t be surprised if it was Bluetooth Auracast, considering the standard has yet to be widely adopted by smartphone manufacturers.

Bluetooth Auracast being shared by two children, on over-ear wireless headphones

(Image credit: Bluetooth SIG)

The last update came in early February, revealing that Android 15 may require all apps on the Google Play Store to support an edge-to-edge mode. The presumed goal is to better enable full-screen viewing; edge-to-edge is typically only seen in certain types of apps, like video games. Navigation bars and thick black stripes at the top of the screen could become a thing of the past as Google establishes a new, optimized full-screen standard for Android.

That's currently all we know about Android 15. Hopefully, that one developer's slip-up is just the start of the Android 15 reveals. While we have you, check out TechRadar's list of the best Android phones for 2024.


Leak suggests Android and ChromeOS to receive deeper device integration

Android devices and ChromeOS may become best friends in the near future as Google is reportedly working on better integrating the two platforms.

Hints of this move were discovered by industry expert AssembleDebug on X (the platform formerly known as Twitter), who recently dove into the files of Google Play Services version 24.06.12. After activating several internal flags, he discovered that two new features are in development, and that certain sections will be renamed to better fit the changes. As 9to5Google points out, Device Connections will be renamed Devices & Sharing, and there is a new option called Cross-Device Services.

Tapping the section for the first time allows users to choose the Android phones and Chromebooks they want on their cross-device network. There doesn’t appear to be a limit to how many gadgets you can have connected at the same time, and it looks like you can send out invitations en masse to nearby hardware during this step. Once setup is done, you’re given access to the aforementioned features. Keep in mind that it’s unknown exactly how these tools work, although there are short descriptions under each one offering a bit of insight.

Call Cast presumably lets you hop between devices during calls, though it “only works with certain apps”. Internet Sharing, on the other hand, is more nebulous. Judging from the onscreen text, it’ll give users a way to share their hotspot connection, as well as their Wi-Fi password, with member devices in a group, saving you the trouble of re-entering your password every time you want to add another phone.

Imminent rollout

That’s pretty much all we can gather from this latest info dump. Given that AssembleDebug was able to trigger the update, and the near-finished state of the interface, we think it’s safe to say the patch is rolling out fairly soon. It’s unknown exactly when it’ll arrive, but Android Police predicts in its coverage that it’ll be released next month as part of Google’s March Feature Drop, alongside other updates including the eSIM transfer tool and Bluetooth Quick Settings.

As with every leak, take all the details here with a grain of salt – things could change at any time. That said, if it’s released as is, it would be a great upgrade to the current mobile environment. Chromebooks already offer cross-device connectivity to Android phones, but it’s limited primarily to app streaming. Improving usability like this could allow Google to finally establish a hardware ecosystem similar to Apple’s own.

While we have you, check out TechRadar's roundup of the best Chromebooks for 2024.


Bye-bye, Bard – Google Gemini AI takes on Microsoft Copilot with new Android app you can try now

Google Bard has officially been renamed Gemini – and, as was recently rumored, there’s going to be a paid subscription to the AI, in the same vein as what Microsoft introduced with Copilot Pro not so long ago.

Gemini will, of course, sound familiar, as it’s actually the name of Google’s relatively recently introduced AI model which powered Bard – so basically, the latter name is being scrapped, simplifying matters so everything is called Gemini.

There’s another twist here, though, in that Google has a new sprawling AI model called Ultra 1.0, and this freshly built engine – which is the “first to outperform human experts on MMLU (massive multitask language understanding)” according to the company – will drive a new product called Gemini Advanced.

No prizes for guessing that Gemini Advanced is the paid subscription mentioned at the outset. Those who want Gemini Advanced will have to sign up to the Google One AI Premium plan (part of the wider Google One offering). That costs $19.99 / £18.99 per month and includes 2TB of cloud storage.

Google is really hammering home how much more advanced the paid Gemini AI will be, and how much more capable it’ll be in terms of reasoning and taking on difficult tasks like coding.

We’re told Gemini Advanced will offer longer, more in-depth conversations and will understand context to a higher level based on your previous input. Examples provided by Google include Gemini Advanced acting as a personal tutor, capable of creating step-by-step tutorials based on the learning style it has determined is best for you.

Or for creative types, Gemini Advanced will help with content creation, taking into account considerations such as recent trends, and ways in which it might be best for creators to drive audience numbers upwards.

Google is also introducing a dedicated Gemini app for its Android OS (available in the US starting today, and rolling out to more locations “starting next week”). Gemini will be accessible via the Google app on iOS, too.

Owners of the best Android phones will be able to use Gemini via that standalone app, or can opt in via Google Assistant, and it’ll essentially replace the latter as your new generative AI-powered helper.

Long-press the power button and you’ll summon Gemini (or use “Hey Google”), and you can ask for help in a context-sensitive fashion. Just taken a photo? Prod Gemini and the AI will pop up to suggest some captions, for example, or you can get it to compose a text, clarify something about an article currently on-screen, and so on.

Google Assistant voice features will also be catered for by Gemini on Android, such as controlling smart home gadgets.

Naturally, the iOS implementation won’t be anything like this, but within the Google app you’ll have a Gemini button that can be used to create images, write texts, and handle other functions that are more basic than what you’ll see on Android.

The rollout of the Gemini app on Android and iOS handsets starts today in the US, so some folks may be able to get it right now. It’ll be made available to others in the coming weeks.


Analysis: As Bard exits stage left, will Gemini shine in the spotlight?

Google is pretty stoked about the capabilities of Gemini Advanced, and notes that the MMLU benchmark it excels at spans a diverse set of 57 subjects – from math and physics through to law and medicine – testing both its knowledge base and its problem-solving chops.

We’re told by Google that in “blind evaluations with our third-party raters” the Ultra 1.0-powered Gemini Advanced came out as the preferred chatbot to leading rivals (read: Copilot Pro).

Okay, that’s all well and good, but big talk is all part of a big launch – and make no mistake, this is a huge development for Google’s AI ambitions. How the supercharged Ultra 1.0 model pans out in reality, well, that’s the real question. (And we’re playing around with it already, rest assured – stay tuned for a hands-on experience soon).

The other question you’ll likely be mulling is how much this AI subscription will cost. In the US and UK it’ll run to $20 / £18.99 per month (about AU$30 per month), though you do get a two-month free trial to test the waters, which seems to suggest Google is fairly confident Gemini Advanced will impress.

If $20 monthly sounds familiar, well, guess what – that’s exactly what Microsoft charges for Copilot Pro. How’s that for a coincidence? That said, there’s an additional value spin for Google here – the Google One AI Premium plan doesn’t just include the AI, but other benefits too, most notably 2TB of cloud storage. Copilot Pro doesn’t come with any extras as such (unless you count unlocking the AI in certain Microsoft apps, such as Word and Excel, for Microsoft 365 subscribers).

So now, not only do we have the race between Google and Microsoft’s respective AIs, but we have the battle between the paid versions – and perhaps the most interesting part of the latter conflict will be how much in the way of functionality is gated from free users.

Thus far, Copilot Pro is about making things faster and better for paying users, and adding some exclusive features, whereas Gemini Advanced seems to be built more around the idea of adding a lot more depth in terms of features and the overall experience. Furthermore, Google is chucking in bonuses like cloud storage, looking to really compete on the value front.

However, as mentioned, we’ll need to spend some time with Google’s new paid AI offering before we can draw any real conclusions about how much smarter and more context-aware it is.


Microsoft Edge on Android could soon get extensions to tempt you away from Chrome

Will browser extension support be enough to tempt Android users away from Google Chrome to the welcoming arms of Microsoft Edge? We might soon find out, as it looks like Edge is now prepping extension support for its mobile app.

This comes from some digging into the Edge for Android code by tipster @Leopeva64 (via 9to5Google). For now the functionality is hidden behind a flag in the early testing versions of the app, but it could reach the main app as early as March.

Certain extensions – for switching to dark mode, for blocking ads, and for changing the speed of media playback – are already showing up on a rudimentary extensions page, which is another sign that the feature is launching soon.

From the screenshots that have been posted so far, it looks as though a new Extensions button will be added to the menu that pops up when you tap the three horizontal lines, down in the lower-right corner of the Edge for Android interface.

Extended features

Firefox extensions

Firefox recently added extension support to its Android app (Image credit: Future)

You may well know how useful third-party extensions can be on a desktop browser, adding all kinds of additional tools and features to your browser of choice – from changing the way tabs are arranged, to letting you annotate webpages, to managing website volume.

While there are a huge number of extensions available for Chrome on the desktop, Chrome and other browsers have typically shied away from adding extension support on mobile, for a host of different reasons: the screens are smaller, there are fewer system resources available, the interface is simpler, and so on.

Now, though, the situation is changing. Firefox recently reintroduced extension support in its Android app, and now it looks as though Edge will follow suit, in an attempt to chip away at Chrome's market share. Chrome is the default on around two-thirds of mobile devices worldwide, though that figure includes iPhones as well as Android devices.

You won't be able to use all the existing Edge extensions on Android – clearly not all of them will work, and the developers will have to adapt them for the different platform – but watch this space for these add-ons arriving on Microsoft's browser.


ChatGPT steps up its plan to become your default voice assistant on Android

A recent ChatGPT beta is giving a select group of users the ability to turn the AI into their device’s new default voice assistant on Android.

This information comes from industry insider Mishaal Rahman on X (the platform formerly known as Twitter), who posted a video of himself trying out the feature live. According to the post, users can add a shortcut to ChatGPT Assistant, as it’s referred to, directly to an Android device’s Quick Settings panel. Tapping the ChatGPT entry there causes a new UI overlay to appear on-screen, consisting of a plain white circle near the bottom of the display. From there, you verbally give it a prompt, and after several seconds, the assistant responds with an answer.

The clip shows it does take the AI some time to come up with a response – about 15 seconds. Throughout this time, the white circle will display a bubbling animation to indicate it’s generating a reply. When talking back, the animation turns more cloud-like. You can also interrupt ChatGPT at any time just by tapping the screen. Doing so causes the circle to turn black.

Setting up

The full onboarding process for the feature is unknown, although 9to5Google claims in its report that you’ll need to pick a voice when you launch it for the first time. If you like what you hear, you can stick with a particular voice, or go back a step to swap it for another. Previews of each voice can be found on OpenAI’s website too; they consist of three male and two female voices. Once all that is settled, the assistant will subsequently launch as normal with the white circle near the bottom.

To try out this update, you’ll need a subscription to ChatGPT Plus, which costs $20 a month. Next, install either ChatGPT for Android version 1.2024.017 or .018, whichever is available to you. Go to the Beta Features section in ChatGPT’s Settings menu and the option should be there, ready to be activated. As stated earlier, only a select group of people will gain access – it’s not guaranteed.

Future default

Apparently, the assistant is present on earlier builds too: 9to5Google states the feature is available on ChatGPT beta version 1.2024.010 with limited functionality. It claims that build introduces the Quick Settings tile, but not the revamped UI.

In his post, Rahman says no one can set ChatGPT as their default assistant at the moment. However, lines of code found in a ChatGPT patch from early January suggest this will be possible in the future. We’ve reached out to OpenAI asking if there are plans to expand the beta’s availability, and will update this story when we hear back.

Be sure to check out TechRadar's list of the best ChatGPT extensions for Chrome that everyone should use. There are four in total.


WhatsApp on Android could soon let you share files with nearby friends

WhatsApp may receive its own version of Apple’s AirDrop as a recent Android beta shows hints that a file-sharing feature is in the works.

A post on WABetaInfo offers insight into the potential update. Like AirDrop, the feature only works between two people: both users will need to have WhatsApp open to the tool and be “within close proximity” to exchange files. What’s particularly interesting is that the receiving person will need to physically shake their smartphone to create a share request.

WABetaInfo explains this is to maintain a “controlled approach to file exchanges” between contacts. It's similar to how AirDrop lets people configure its settings so they only receive content from trusted sources. However, the website claims it will be possible to share media with non-contacts on WhatsApp. Phone numbers will remain hidden in this situation to preserve anonymity.

And just like sending messages on WhatsApp, file sharing is end-to-end encrypted, according to the website, ensuring the personal information and content being sent is protected from outside interference.

Pending information

That’s pretty much all that is known about WhatsApp’s file-sharing feature. A lot of the finer details have yet to be revealed. 

It’s unknown exactly how sending media to non-contacts will work. Will all receiving users have to shake their device too, or will Meta change its mind and throw out that step, replacing it with a simple menu setting? Going back to AirDrop, Apple’s version lets you change the receiving setting to Everyone, allowing you to accept content from non-contacts.

Additionally, we don’t know if there are any file-size limitations for shared files. The maximum size for sending media to group chats is currently 2GB, and the upcoming feature will probably have a similar limit, although in a world where 4K videos exist, it would be nice to see Meta expand it.

No word on when this update will become available to beta testers. WABetaInfo states the tool is still under development, so a preview build doesn’t exist yet. If you’re interested in trying out the file-sharing feature once it’s ready, you can become a WhatsApp beta tester by joining the Google Play Beta Program. You may be one of the lucky few to gain access down the line.

Analysis: cross-platform sharing

One thing we would like to see is compatibility across different operating systems. Imagine being able to send files from an Android phone to an iOS device and vice versa. It would certainly give WhatsApp an edge over Quick Share.

If you’re not familiar, Google and Samsung recently entered a partnership that resulted in many new products, as well as Nearby Share merging into Quick Share. Now Android users can use the function for quick file sharing, hence the name. Assuming Meta rolls out its update in the current state, it could cause some confusion, as people would arguably have two tools doing the same thing. Giving WhatsApp's tool cross-platform support would make it stand out considerably.

Be sure to check out TechRadar's WhatsApp channel to get all of the latest news and reviews right on your phone.


ChatGPT may be plotting to replace Google Assistant on your Android phone, ahead of its landmark bot store launch

We can't say for sure whether or not AI is secretly plotting world domination – but it does appear that ChatGPT developer OpenAI has designs on replacing Google Assistant as the default helper tool on Android devices.

Some digging by the team at Android Authority has revealed hidden code in the latest version of the ChatGPT app for Android: code that triggers a small pop-up prompt at the bottom of the screen, just like Google Assistant (or Siri on the iPhone).

The thinking is that you wouldn't have to launch ChatGPT for Android to get answers from the AI bot – you could just hold down a shortcut button, or even say “hey ChatGPT”. There also seems to be a new tile in the works for the Quick Settings panel on Android, giving users another way of getting to ChatGPT.

This wouldn't exactly be a hostile coup – Android already allows the default digital assistant app to be switched, to something like Alexa or Bixby – but it's interesting that OpenAI wants to expand the reach of ChatGPT. As always though, plans can change, so it's not certain that we'll see this functionality appear.

Store opening

In other ChatGPT news, the GPT Store that OpenAI promised last year is now scheduled to launch next week, after a delay – as per emails sent out to people signed up to a paid ChatGPT plan. It means users can create their own bespoke versions of ChatGPT and sell them on to other people and businesses.

These GPTs – or generative pre-trained transformers – are built on the same well of training data as ChatGPT, but they can be tweaked to take on specific personalities or accomplish particular tasks. Some rather obvious examples would be a bot that helps with tech support questions, or one that comes up with recipes.

Custom bots can also be loaded up with knowledge from outside OpenAI's vaults – so if you've written a hundred scientific papers on dinosaur fossils, for example, you're able to plug all of this data into a GPT and ask questions about the research. Right now, you need a ChatGPT Plus or Enterprise account to build a bot.

OpenAI is no doubt trying to foster the same kind of innovation and growth that we've seen in smartphone apps, ever since Apple opened the iPhone App Store in 2008. However, at the moment we're still waiting on a lot of details, including how users can get verified, and how sales revenue will be split.


Microsoft launches Copilot for iPhones and iPads right after Android

That didn't take long: just days after launching a dedicated Copilot app for Android, Microsoft has restored balance to the universe by making the same app available to those users who prefer iPhones and iPads.

As initially spotted by The Verge, the Copilot app for iOS and iPadOS seems to be an exact replica of the Android one, and is also free to use. The same rules apply: you can use it in a limited fashion without logging in, but signing into a Microsoft account gives you more prompts and more features (like image generation capabilities).

If you do sign in with a Microsoft account, then you can enable the latest and greatest GPT-4 model from Microsoft's partner OpenAI. Responses will generally be slower but better, and bearing in mind that ChatGPT customers have to pay to get the GPT-4 version, this is a pretty good deal from Microsoft.

While it's a notable move from Microsoft to give Copilot its own app, this hasn't come out of nowhere: pretty much all of the functionality here was previously available in the Bing apps for Android and iOS, so little has changed in terms of what you can do.

Copilot everywhere

If you're completely new to generative AI, these tools can produce text and images based on a few user prompts. You can get Copilot to do anything from write a poem about TechRadar to produce an image of a glowing Apple iPhone.

You can also get Copilot to query the web – if you need party game or travel ideas, for example – and have it explain complex topics in simple terms. It's a bit like a supercharged version of Google Assistant or Siri from Apple.

Microsoft is continuing to push forward quickly with upgrades to Copilot, as it knows that the likes of Apple and Google are busy improving their own generative AI tools. It looks inevitable that AI will be one of the hottest tech trends of 2024.

And if you don't want to install Copilot on your phone, you can find it in plenty of other places too. The same features are still available as part of Bing on the web, and Copilot has also now been added to Windows 11 and Windows 10.


Microsoft just launched a free Copilot app for Android, powered by GPT-4

If you're keen to play around with some generative AI tech on your phone, you now have another option: Microsoft has launched an Android app for its Copilot chatbot, and like Copilot in Windows 11, it's free to use and powered by GPT-4 and DALL-E 3.

As spotted by @techosarusrex (via Neowin), the Copilot for Android app is available now, and appears to have arrived on December 19. It's free to use and you don't even need to sign into your Microsoft account – but if you don't sign in, you are limited in terms of the number of prompts you can input and the length of the answers.

In a sense, this app isn't particularly new, because it just replicates the AI functionality that's already available in Bing for Android. However, it cuts out all the extra Bing features for web search, news, weather, and so on.

There's no word yet on a dedicated Copilot for iOS app, so if you're using an iPhone you're going to have to stick with Bing for iOS for now if you need some AI assistance. For now, Microsoft hasn't said anything officially on its new Android app.

Text and images

The functionality inside the new app is going to be familiar to anyone who has used Copilot or Bing AI anywhere else. Microsoft has been busy adding the AI everywhere, and has recently integrated it into Windows 11 too.

You can ask direct questions like you would with a web search, get complex topics explained in simple terms, have Copilot generate new text on any kind of subject, and much more. The app can work with text, image and voice prompts too.

Based on our testing of the app, it seems you get five questions or searches per day for free if you don't want to sign in. If you do tell Microsoft who you are, that limit is lifted, and signing in also gives you access to image generation capabilities.

With both Apple's Siri and Google Assistant set to get major AI boosts in the near future, Microsoft won't want to be left behind – and the introduction of a separate Copilot app could help position it as a standalone digital assistant that works anywhere.
