Google Gemini AI looks like it’s coming to Android tablets and could coexist with Google Assistant (for now)

Google’s new generative AI model, Gemini, is coming to Android tablets. Gemini AI has been observed running on a Google Pixel Tablet, confirming that Gemini can exist on a device alongside Google Assistant… for the time being, at least. Currently, Google Gemini is available to run on Android phones, and it’s expected that it will eventually replace Google Assistant, Google’s current virtual assistant that’s used for voice commands.

When Gemini is installed on Android phones, users are prompted to choose between using Gemini and Google Assistant. It’s unknown if this restriction will apply to tablets when Gemini finally arrives for them – though at the moment, it appears not.

Man sitting at a table working on a laptop

(Image credit: Shutterstock/GaudiLab)

A discovery in Google Search's code

The news was brought to us via 9to5Google, which published an in-depth report on the latest beta version (15.12) of the Google Search app in the Google Play Store and discovered that it contains code referring to Gemini AI running on a “tablet,” along with several tablet-specific features.

The code also shows that the Google app will host Gemini AI on tablets, rather than the standalone app that currently exists for Android phones. That said, Google might still be planning a separate Gemini app for tablets and possibly other devices, especially if its plans to phase out Google Assistant are still in place.

9to5Google also warns that, as this is still a beta version of the Google Search app, Google can still change its mind and not roll out these features.

A woman using an Android phone.

(Image credit: Shutterstock/brizmaker)

Where does Google Assistant stand?

When 9to5Google activated Gemini on a Pixel Tablet, it found that Google Assistant and Gemini would function simultaneously: when both were installed and activated, using the voice command “Hey Google” brought up Google Assistant instead of Gemini. Gemini for Android tablets is yet to be finalized, however, so Google might still implement a restriction like the one on phones that prevents both Gemini and Google Assistant running at the same time.

This in turn contradicts screenshots of the setup screen, which show that Gemini will take precedence over Google Assistant if users choose to use it.

The two digital assistants don’t have the same features yet, and we know that the Pixel Tablet was designed to act as a smart display that uses Google Assistant when docked. Because Google Assistant will be called on when someone asks Gemini to do something it’s unable to do, we may see the two assistants running in parallel for the time being, until Gemini has all of Google Assistant's capabilities, such as smart home features.

Meanwhile, Android Authority reports that the Gemini experience on the Pixel Tablet is akin to that on the Pixel Fold, and predicts that Google’s tablets will be the first Android tablets to gain Gemini capabilities. This makes sense, as Google may want to use Gemini exclusivity to encourage more people to buy Pixel tablets in the future. The Android tablet market is a highly competitive one, and advanced AI capabilities may help Pixel tablets stand out.

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

Google Gemini’s new Calendar capabilities take it one step closer to being your ultimate personal assistant

Google’s new family of artificial intelligence (AI) generative models, Gemini, will soon be able to access events scheduled in Google Calendar on Android phones.

According to 9to5Google, Calendar support was on the “things to fix ASAP” list shared by Jack Krawczyk, Google's Senior Director of Product Management for Gemini Experiences – features Google is working to add to make Gemini a better-equipped digital assistant.

Users who have the Gemini app on an Android device can now expect Gemini to respond to voice or text prompts like “Show me my calendar” and “Do I have any upcoming calendar events?” When 9to5Google tried this a week earlier, Gemini responded that it couldn’t fulfill those types of requests – a noticeable gap, as such requests are commonplace with rival (non-AI) digital assistants such as Siri and Google Assistant. When the same prompts were attempted this week, however, Gemini opened the Google Calendar app and fulfilled the requests. It seems that if you’d like to enter a new event using Gemini, you need to tell it something like “Add an event to my calendar,” after which it should prompt you to fill out the details using voice commands.

Google Calendar

(Image credit: Shutterstock)

Going all in on Gemini

Google is clearly making progress in setting up Gemini as its all-in-one AI offering (including as a digital assistant, replacing Google Assistant in the future). It still has quite a few steps to take before it manages that, with users asking for features like the ability to play music or edit their shopping lists via Gemini. Another significant hurdle for Gemini to clear if it wants to become popular is that it’s only available in the United States for now.

The race to become the best AI assistant has gotten a little more intense recently between Microsoft with Copilot, Google with Gemini, and Amazon with Alexa. Google did recently make some pretty big strides in compressing its larger Gemini models so they can run on mobile devices, and these more capable models sound like they could give Gemini a major boost. Google Assistant is already widely recognized, and this is another feather in Google’s cap. I feel hesitant about placing a bet on any single one of these digital AI assistants, but if Google continues at this pace with Gemini, I think its chances are pretty good.


Google has fixed an annoying Gemini voice assistant problem – and more upgrades are coming soon

Last week, Google rebranded its Bard AI bot as Gemini (matching the name of the model it runs on), and pushed out an Android app in the US; and while the new app has brought a few frustrations with it, Google is now busy trying to fix the major ones.

You can, if you want, use Google Gemini as a replacement for Google Assistant on your Android phone – and Google has made this possible even though Gemini lacks a lot of the basic digital assistant features that users have come to rely on.

One problem has now been fixed: originally, when chatting to Gemini using your voice, you had to manually tap on the 'send' arrow to submit your command or question – when you're trying to keep up a conversation with your phone, that really slows everything down.

As per 9to5Google, that's no longer the case, and Google Gemini will now realize that you've stopped talking (and respond accordingly) in the same way that Google Assistant always has. It makes the app a lot more intuitive to use.

Updates on the way


What's more, Google Gemini team member Jack Krawczyk has posted a list of features that engineers are currently working on – including some pretty basic functionality, including the ability to interact with your Google Calendar and reminders.

A coding interpreter is apparently also on the roadmap, which means Gemini would not just be able to produce programming code, but also to emulate how it would run – all within the same app. Additionally, the Google Gemini team is working to remove some of the “preachy guardrails” that the AI bot currently has.

The “top priority” is apparently refusals – instances where Gemini declines to complete a task or answer a question. We've seen Reddit posts suggesting the AI bot will sometimes apologetically report that it can't help with a particular prompt – something that's clearly on Google's radar in terms of rolling out fixes.

Krawczyk says the Android app is coming to more countries in the coming days and weeks, and will be available in Europe “ASAP” – and he's also encouraging users to keep the feedback to the Google team coming.


Google Maps is getting an AI-boosted upgrade to be an even better navigation assistant and your personal tour guide

It looks like Google is going all-in on artificial intelligence (AI), and following the rebranding of its generative AI chatbot from Bard to Gemini, it’s now bringing generative AI recommendations to Google Maps.

The AI-aided recommendations will help Google Maps perform even better searches for a variety of destinations, and the feature is also supposedly able to function as an advisor that can offer insights and tips about things like location, budgets, and the weather. Once the feature is enabled, it can be accessed through the search function, much like existing Google Maps features. Currently, it’s only available for US users, but hopefully, it will roll out worldwide very soon. 

This upgrade of Google Maps is the latest move in Google’s ramped-up AI push, which has seen developments like AI functionality integrated into Google Workspace apps. We’ve also had hints before that AI features and functions were coming to Google Maps – such as an improved Local Guides feature. Local Guides is intended to synthesize local knowledge and experiences and share them with users to help them discover new places.

What we know about how this feature works

Android Police got a first look at how users are introduced to the new AI-powered recommendations feature. A reader got in touch with the website and explained how they were given the option to Search with generative AI in their Google Maps search bar. When selected, it opened a page that detailed, in a short onboarding exercise, how the new feature makes use of generative AI to provide you with recommendations. Tapping Continue opens the next page, which provides users with a list of suggested queries, such as nearby attractions where they can kill time or good local restaurants.

Similarly to ChatGPT, Google Maps apparently also includes tips toward the bottom of that page to help you improve your search results. Users can add more details to fine-tune their search, like their budget, a place or area they might have in mind, and what the weather looks like for when they’re planning to go somewhere. If you select one of these suggested queries, Google Maps will then explain how it would go through the process of selecting specific businesses and locations to recommend.

When the user doesn’t specify an area or region, Google Maps resorts to using the user’s current location. However, if you’d like to localize your results to an area (whether you’re there or not), you’ll have to mention that in your search.

After users try the feature for the first time and go through the short onboarding in Maps, they can access it instantly through the search menu. According to Android Police, Search with generative AI will appear below the horizontal menu that lists your saved locations such as Home, Work, and so on.

A promising feature with plenty of potential

Again, this feature is currently restricted to people in the US, but we hope it’ll open up to users in other regions very soon. Along with AI recommendations, Google Maps is also getting a user interface redesign aimed at upgrading the user experience.

While I get that some users might be getting annoyed or overwhelmed with generative AI being injected into every part of our digital lives, this is one app I'd like to try when equipped with AI. Also, Google is very savvy when it comes to improving the user experience of its apps, and I’m keen to see how this feature’s introduction plays out.


ChatGPT could become a smart personal assistant helping with everything from work to vacation planning

Now that ChatGPT has had a go at composing poetry, writing emails, and coding apps, it's turning its attention to more complex tasks and real-world applications, according to a new report – essentially, being able to do a lot of your computing for you.

This comes from The Information (via Android Authority), which says that ChatGPT developer OpenAI is working on “agent software” that will act almost like a personal assistant. It would be able to carry out clicks and key presses as it works inside applications from web browsers to spreadsheets.

We've seen something similar with the Rabbit R1, although that device hasn't yet shipped. You teach an AI how to calculate a figure in a spreadsheet, or format a document, or edit an image, and then it can do the job for you in the future.

Another type of agent in development will take on online tasks, according to the sources speaking to The Information: These agents are going to be able to research topics for you on the web, or take care of hotel and flight bookings, for example. The idea is to create a “supersmart personal assistant” that anyone can use.

Our AI agent future?

The Google Gemini logo on a laptop screen that's on an orange background

Google is continuing work on its own AI (Image credit: Google)

As the report acknowledges, this will certainly raise one or two concerns about letting automated bots loose on people's personal computers: OpenAI is going to have to do a lot of work to reassure users that its AI agents are safe and secure.

While many of us will be used to deploying macros to automate tasks, or asking Google Assistant or Siri to do something for us, this is another level up. Your boss isn't likely to be too impressed if you blame a miscalculation in the next quarter's financial forecast on the AI agent you hired to do the job.

It also remains to be seen just how much automation people want when it comes to these tasks: Booking vacations involves a lot of decisions, from the position of your seats on an airplane to having breakfast included, which AI would have to make on your behalf.

There's no timescale on any of this, but it sounds like OpenAI is working hard to get its agents ready as soon as possible. Google just announced a major upgrade to its own AI tools, while Apple is planning to reveal its own take on generative AI at some point later this year, quite possibly with iOS 18.


Assistant with Bard video may show how it’ll work and when it could land on your Pixel

New footage has leaked for Google’s Assistant with Bard demonstrating how the digital AI helper could work at launch.

Nail Sadykov posted the video on X (the platform formerly known as Twitter) after discovering the feature in the Pixel Tips app. Apparently, Google accidentally spilled the beans on its own tech, so it’s probably safe to say this is legitimate. It looks like something you would see in one of the company’s Keyword posts explaining the feature in detail, except there’s no audio.

There will, based on the clip, be two ways to activate Assistant with Bard: either by tapping the Bard app and saying “Hey Google,” or by pressing and holding the power button. A multimodal input box rises from the bottom, where you can type a text prompt, upload photos, or speak a verbal command. The demo proceeds to show only the second method, with someone taking a picture of a wilting plant and then verbally asking for advice on how to save it.


A few seconds later, Assistant with Bard manages to correctly identify the plant in the image (it’s a spider plant, by the way) and generates a wall of text explaining what can be done to revitalize it. It even links to several YouTube videos at the end.

Assistant with Bard is something of a badly kept secret. It was originally announced back in October 2023, but has since seen multiple leaks. The biggest info dump by far occurred in early January, revealing much of the user experience as well as “various in-development features.” What’s been missing up to this point is news on whether or not Assistant with Bard will have any sort of limitations. As it turns out, there may be a few restrictions.

Assistant limitations

Mishaal Rahman, another industry insider, dove into Pixel Tips searching for more information on the update. He claims Assistant with Bard will only appear on single-screen Pixel smartphones powered by a Tensor chip. This includes the Pixel 6, Pixel 7, and Pixel 8 lines. Older models will not receive the upgrade and neither will the Pixel Tablet, Pixel Fold, or the “rumored Pixel Fold 2”.

Additionally, mobile devices must be running the Android 14 QPR2 beta “or the upcoming stable QPR2 release,” although it’s most likely going to be the latter. Rahman states he found a publication date in the Pixel Tips app hinting at a March 2024 release. It’s important to point out that March is also the expected launch window for Android 14 QPR2 and the next Feature Drop for Pixel phones.

There’s no word on whether other Android devices will receive Assistant with Bard; it seems it’ll be exclusive to Pixel for the moment. We could see the update elsewhere; however, considering that key brands like Samsung prefer having their own AI, an Assistant with Bard expansion seems unlikely. But we could be wrong.

Until we learn more, check out TechRadar's list of the best Pixel phones for 2024.


ChatGPT steps up its plan to become your default voice assistant on Android

A recent ChatGPT beta is giving a select group of users the ability to turn the AI into their device’s new default voice assistant on Android.

This information comes from industry insider Mishaal Rahman on X (the platform formerly known as Twitter), who posted a video of himself trying out the feature live. According to the post, users can add a shortcut to ChatGPT Assistant, as it’s referred to, directly to an Android device’s Quick Settings panel. Tapping the ChatGPT entry there causes a new UI overlay to appear on-screen, consisting of a plain white circle near the bottom of the display. From there, you verbally give it a prompt, and after several seconds, the assistant responds with an answer.


The clip shows it does take the AI some time to come up with a response – about 15 seconds. Throughout this time, the white circle will display a bubbling animation to indicate it’s generating a reply. When talking back, the animation turns more cloud-like. You can also interrupt ChatGPT at any time just by tapping the screen. Doing so causes the circle to turn black.

Setting up

The full onboarding process for the feature is unknown, although 9to5Google claims in its report that you will need to pick a voice when you launch it for the first time. If you like what you hear, you can stick with a particular voice, or go back a step to exchange it for another. Previews of each voice – three male and two female – can be found on OpenAI’s website too. Once all that is settled, the assistant will subsequently launch as normal with the white circle near the bottom.

To try out this update, you will need a subscription to ChatGPT Plus, which costs $20 a month. Next, install either ChatGPT for Android version 1.2024.017 or .018, whichever is available to you. Go to the Beta Features section in ChatGPT’s Settings menu, and the option should be there ready to be activated. As stated earlier, only a select group of people will gain access – it’s not guaranteed.

Future default

Apparently, the assistant is present in earlier builds: 9to5Google states the feature is available in ChatGPT beta version 1.2024.010 with limited functionality, introducing the Quick Settings tile but not the revamped UI.

In his post, Rahman says no one can set ChatGPT as their default assistant at the moment. However, lines of code found in a ChatGPT patch from early January suggest this will be possible in the future. We reached out to OpenAI asking if there are plans to expand the beta’s availability. This story will be updated at a later time.

Be sure to check out TechRadar's list of the best ChatGPT extensions for Chrome that everyone should use. There are four in total.


Amazon tests a new AI assistant to answer your questions while you shop

Amazon is reportedly testing a new AI assistant on its mobile app that can answer customer questions about specific products.

This feature appears to have been initially discovered by e-commerce research firm Marketplace Pulse. According to the firm, the AI can be found under the “Looking for specific info?” section on product pages. The LLM (large language model) powering the feature relies on listing details provided by companies, as well as user reviews, to generate responses to inquiries. For example, you can ask if a particular workout shirt is good for running or if it fits well on a tall person. Marketplace Pulse states its main purpose is to save people the trouble of reading individual reviews by summarizing all the information present into a succinct block of text.

Amazon AI assistant

(Image credit: Marketplace Pulse/Amazon)

Because it’s in the early stages, the AI assistant is limited in what it can do. You can’t command it to compare two items or “find alternatives.” Although it can’t recommend specific products, Amazon’s chatbot can make soft suggestions. In another example, Marketplace Pulse asked the assistant if e-bikes are good for romantic dates. The AI said “not really” and recommended buying a tandem bike instead.

Quirks and unintended features

There are several quirks affecting the chatbot. Unsurprisingly, it’s “prone [to] hallucinating wrong information” about an item. Marketplace Pulse even claims it outright refused to “answer basic questions”. What’s more, the assistant is capable of answering prompts that apparently “Amazon didn’t build it for.”

It can generate Python code, write jokes about a product, or answer in languages besides English. CNBC had access to the test and reportedly succeeded in getting the assistant to describe items “in the style of Yoda from Star Wars.” Despite these abilities, you can’t hold a regular conversation with the AI as you can with ChatGPT.

Amazon's AI assistant quirks

(Image credit: Marketplace Pulse/Amazon)

It’s unknown how widespread the test is – we didn’t have access on our phone. Amazon hasn’t said anything official so far, but we reached out to the platform asking for more information about the AI. We also asked Marketplace Pulse if it knows whether the assistant is available broadly or just to a select group. This story will be updated at a later time.

Alexa upgrade

Amazon’s AI ambitions don’t stop there, as a report from Business Insider reveals the tech giant is currently working on a revamped, paid version of Alexa. The upgrade, called Alexa Plus, is said to offer a “more conversational and personalized” experience akin to ChatGPT.

The team is aiming to launch Alexa Plus on June 30, according to the report. Unfortunately, development is not going smoothly. A source with intimate knowledge of the project told Business Insider the revamp is “falling short of expectations”: the AI is reportedly hallucinating false information, as the team is having a hard time getting the tech to work properly. The project may also be causing a lot of internal friction, with some arguing that people are not going to want to pay for another Amazon service.

At a glance, it seems Alexa Plus might miss the June 30 deadline.

If you want your own digital sidekick, check out TechRadar's list of the best AI-powered virtual assistants.


Microsoft reveals new Copilot Pro subscription service that turbo-charges the AI assistant in Windows 11 for $20 a month

Microsoft is taking Copilot to the next level with Copilot Pro, and bringing Microsoft 365 Copilot to businesses of all sizes.

Windows Copilot and 365 Copilot – the AI digital assistants Microsoft introduced last year to help users with all kinds of tasks and projects – are getting a major boost with higher-tier AI functionality.

Microsoft is officially debuting Copilot Pro, available for individual users to subscribe to for $20 per month (per user) starting today, January 16.

This version of Copilot will allow individuals to upgrade their productivity and user experience with the best Copilot has to offer in terms of AI features, capability, and performance speed, as well as access to Copilot at peak times.

This will also grant users with a Microsoft 365 Personal or Family subscription access to Copilot Pro in Microsoft 365 apps like Word, Excel, OneNote, and PowerPoint on PC, Mac, and iPad. This is similar to the existing Microsoft 365 Copilot made for enterprise customers, which requires an enterprise subscription, but now these Copilot AI capabilities will be available to Microsoft 365 Personal and Family subscribers as well.

The crème de la Copilot on offer

If you choose to sign up for Copilot Pro, it will grant you priority access to the latest OpenAI models, like the state-of-the-art GPT-4 Turbo from OpenAI, and enable you to build and tailor your own Copilot GPT bot to a topic of your choosing. 

Copilot Pro will give users greater agency in how they use Copilot by allowing them to toggle between models and try out different options to optimize their experience.

Users will be able to build and mold these personalized Copilot GPTs in a brand new Copilot GPT Builder (similar to the commercial version launched last year) by answering some straightforward prompts, and Microsoft assures us it’s coming soon. 

You can also look forward to an upgrade to the AI image generation from Microsoft with Image Creator from Designer (formerly known as Bing Image Creator). With Copilot Pro, you’ll get one hundred boosts (accelerated image generation processes), greater image detail and quality, and the landscape image format.

Along with the introduction of Copilot Pro for individual use, Copilot for Microsoft 365 will be available to more types of commercial customers, particularly small- and medium-sized businesses. From now on, there’s no employee minimum, lower prerequisites, and more availability of Copilot subscriptions through Microsoft partners.

New upgrades to Copilot and a new Copilot app

Copilot imagery from Microsoft

(Image credit: Microsoft)

For those users who want to continue experimenting with Copilot for free, there’s something to watch out for as well. The free version of Copilot is getting Copilot GPTs, which allow you to customize and tailor a Copilot to discuss a particular topic of your choosing. You should already be able to see some of the topics available today, such as fitness, travel, cooking, and more.

Along with these developments, Copilot is getting an iOS app and an Android app, and Copilot is coming to the Microsoft 365 mobile app. With these new apps, you’ll be able to have a single AI run across your devices, able to analyze information from your web usage, your PC use, and the apps you use to make its help more context-specific.  

The Copilot app is equipped with the same powerful tools that the PC version benefits from, such as GPT-4, DALL-E 3’s image creation capabilities, and the ability to input your own images into Copilot and have it respond to them.

Copilot will be added to the Microsoft 365 app on both iOS and Android devices over the course of the next month for users who have a Microsoft account, and these users will be able to export the content that they generate as a Word or PDF document. Microsoft’s vision for this is that you’ll be able to summon Copilot almost instantly, as soon as you need it, and no matter what device you're currently using.

Microsoft is just getting started

It also looks like there are plenty more Copilot Pro features in the pipeline – similar to how we’ve seen multiple improvements to the standard version of Copilot in Windows 11. Microsoft’s Divya Kumar relayed this while speaking to The Verge, referring to Microsoft’s recent release schedule as a “rolling thunder.”

With Copilot Pro, Microsoft is aiming to catch the attention of “power users like creators, researchers, programmers and others” who might be interested in the latest innovations that it, along with its collaborator OpenAI, has to offer.

Microsoft has recently overtaken Apple as the most valuable company in the world, and it’s not showing signs of losing steam. Yusuf Mehdi, Microsoft’s Executive Vice President and Consumer Chief Marketing Officer, claims that Copilot empowers “every person and every organization on the planet to achieve more.” If there’s a reason that you might want or even need assistance or advice digitally, it’s clear how eager Microsoft is to be there to meet it.


Google Assistant is slated to ditch 17 features in the coming weeks

Google Assistant is going to be shedding some weight as at least 17 “underutilized” features will be removed in the coming weeks.

In a recent announcement post, the tech giant says it wants to focus on the parts of its digital assistant that people actually use, so it will be getting rid of the ones that see little interaction. A list of upcoming dropped features can be found on the Google Help website. They include playing audiobooks on Google Play Books via voice command and asking for information about your contacts. For every feature being removed, the company recommends workarounds you can use to replicate the same action. For example, even though users won’t be able to control audiobooks with their voice, they can still cast them from a mobile device.

Pulling the plug

Not everything will receive an equivalent workaround. Google Assistant’s integration with Calm is getting axed, and there’s nothing you can do to duplicate the service. Google instead recommends playing a meditation video on YouTube. 

It’s worth pointing out that although the Help page lists 17 features, the wording implies more will be removed. We reached out asking for details regarding the exact number of deprecated features. This story will be updated at a later time.

It’s unknown exactly when the company will shut everything down. The announcement post states that, beginning on January 26, Google Assistant will send a notification telling you a feature “won’t be available after a certain date” if you ask for it. That date officially remains a mystery; however, 9to5Google claims in its report that it’s February 26 for most features. The Nest Hub Commute Tiles and Google Maps App Launcher will go offline a little earlier, on February 7.

Upcoming tweaks

In addition to all of the removals, Google will be making a few tweaks to its mobile app. 

Using the microphone icon will now bring up “Search results in response to your queries”, but you'll no longer be able to use the microphone for certain Google Assistant actions, like turning on the lights or sending texts. This change extends to the search bar on Pixel phones, where tapping the icon will activate Voice Search instead of Assistant.

The company admits these changes may be jarring for some. If you run into issues, it asks that you say “Hey Google, send feedback” to Google Assistant and share your thoughts.

If you're in the market for an AI assistant to help with your daily routine but don't know where to start, check out TechRadar's list of the best smart speakers for 2024.
