Google’s Nearby Share appears to be adopting the name of Samsung’s similar tool, and we wonder what’s going on

Google has suddenly changed the name of its file-sharing tool from Nearby Share to Quick Share, which is what Samsung calls its own utility.

It’s a puzzling move that has people scratching their heads, wondering what it could mean for Android in the future. The update appears to have been discovered by industry insider Kamila Wojciechowska, who shared her findings on X (the platform formerly known as Twitter). Wojciechowska revealed that she received a notification on her phone informing her of the change after installing Google Mobile Services version 23.50.13.

In addition to the new name, Google altered the feature’s logo as well as its user interface. The logo will now consist of two arrows moving toward each other in a half-circle motion on a blue background. As for the UI, it will now display a Quick Settings tile for fast configuration, text explaining what the various options do, and an easier-to-use layout. There’s even a new option that lets people restrict Quick Share visibility to ten minutes.

Wojciechowska states this update is not widely available, nor is the Nearby Share name change common even among the people who do receive the patch. This may be something only a handful of users will see. She admits to being confused as to why Google is doing this, although evidence she found suggests it could be the start of a new collaboration between the two companies.

Start of a new partnership

In its report, Android Authority claims Wojciechowska discovered proof of a “migration education flow” for Quick Share after digging through the Play Services app. This could suggest Google and Samsung are combining their file-sharing tools into one, or at the very least “making them interoperable”.

If this is the case, two of the biggest Android brands coming together to unify their services could be a huge benefit for users. Two currently separate but similarly behaving features might coalesce into one that works on Galaxy and non-Galaxy smartphones alike. It's a quality-of-life upgrade that would reduce software clutter.

Android Authority makes it clear, though, that there isn’t any concrete proof the two tools will merge; given the circumstances, it just seems to be the case. Plus, the whole thing wouldn’t make sense if it weren’t the result of an upcoming collaboration. Think about it: why would Google give one of its mobile tools the same name as a competitor’s software? That would only confuse users.

There has to be something more to it, so we reached out to both companies for more information. This story will be updated at a later time.

Until then, check out TechRadar's list of the best smartphones for 2023.


TechRadar – All the latest technology news

Read More

Google’s AI-powered NotebookLM is now available to help organize your life

Google’s AI-powered writing assistant, NotebookLM, is leaving its experimental phase behind and launching as an official service with multiple performance upgrades.

First revealed during I/O 2023 as Project Tailwind, the tool’s primary purpose is to help organize your messy notes by creating a summary of what you wrote down. It will even highlight important topics and come up with several questions to help you gain a better understanding of the material. The big update coinciding with the release is that NotebookLM will now run on Gemini Pro, which the tech giant states is its “best [AI] model” for handling a wide variety of tasks. Google claims the model will enhance the service’s reasoning skills as well as improve its ability to understand the documents it scans.

What’s more, Google took the feedback from NotebookLM’s five-month testing period and has added 15 new features aiming to improve the user experience. 

Highlight features

The company highlights five specific features in its announcement with the first one being a “new noteboard space”. In this area, you’ll be able to take quotes from the AI chat or excerpts from your notes and pin them at the top for easier viewing. Next, citations in a response will take you directly to the source, “letting you see [a] quote in its original context.”

Highlighting text in said source will now suggest two separate actions. You can have the AI instantly “summarize the text” into a separate note or ask it to define words or phrases, which can be helpful if the topic is full of tough concepts. Down at the bottom, users will see a wider array of follow-up actions from suggestions on how to improve your prose to related ideas that you can add to your writing. NotebookLM will also recommend specific formats for your content that’ll shape it into an email layout or the outline of a script among other things.

NotebookLM sample

(Image credit: Future)

The full list can be found on Google's Help website. Other notable features include an increased word count for sources (they can now be 200,000 words total), the ability to share notebooks with others much like Google Docs, and support for PDFs.

Coming soon

There are more updates on the way. Starting the week of December 11, NotebookLM will gain an additional seven features, including a Critique function where you can ask the AI for constructive feedback, plus a way to combine all your notes into one big page.

NotebookLM is available in the United States only, to users ages 18 and up, on desktop and mobile devices. When you visit, you’ll see some examples to help you get started with the service. It’s worth mentioning that despite this being an official launch, Google still regards NotebookLM as “experimental” technology, so it won’t be perfect. There's no word on whether there are plans for an international release, although we did ask. This story will be updated at a later time.

While we have you, check out TechRadar's roundup of the best AI writers for 2023.


Google’s Instrument Playground offers a taste of an AI-generated musical future

Google has opened the gates to its latest experimental AI called Instrument Playground which allows people to generate 20-second music tracks with a single text prompt.

If that description sounds familiar, that’s because other companies have something similar, like Meta with MusicGen. Google’s version adds two unique twists. First, it’s claimed to be capable of emulating over 100 instruments from around the world. These range from common ones like the piano to more obscure woodwinds like the dizi from China.

Secondly, the company states you can “add an adjective” to the prompt to give it a certain mood. For example, putting in the word “Happy” will have Instrument Playground generate an upbeat track while “Merry” will create something more Christmassy. It’s even possible to implement sound effects by choosing one of three modes: Ambient, Beat, and Pitch. For the musically inclined, you can activate Advanced mode to launch a sequencer where you can pull together up to four different AI instruments into one song.

Live demo

The Instrument Playground is publicly available so we decided to take it for a spin.

Upon going to the website, you’ll be asked what you want to play. If you’re having a hard time deciding, there is a link below the prompt that opens a list of 65 different instruments. We said we wanted an upbeat electric guitar, and to our surprise, the AI added backup vocals to the riff – sort of. Most of the lyrics are incomprehensible gibberish although Chrome’s Live Caption apparently picked up the word “Satan” in there.

The generated song plays once (although you can replay it at any time by clicking the Help icon). Afterward, you can use the on-screen keyboard to work on the track. It’s not very expansive as users will only be given access to 12 keys centered around the C Major and C Minor scales. What you see on the page is directly tied to the numbers on a computer keyboard so you can use those instead of having to slowly click each one with a mouse.

Instrument Playground example

(Image credit: Future)

You can use the three modes mentioned earlier to manipulate the file. Ambient lets you alter the track as a whole, Beat highlights what the AI considers to be the “most interesting peaks”, and Pitch can alter the length of a select portion. Users can even shift the octave higher or lower. Be aware the editing tools are pretty rudimentary. This isn’t GarageBand.

Upon finishing, you can record an audio snippet which you can then download as a .wav file to your computer. 

In the works

If you’re interested in trying out Instrument Playground, keep in mind this is an experimental technology that is far from perfect. We’re not musicians, but even we could tell there were several errors in the generated music. Our drum sample had a piano playing in the back and the xylophone sounded like someone hitting a bunch of PVC pipes. 

We reached out to Google with several questions, like when the AI will support 100 instruments (it’s only at 65 at the time of writing) and what the company intends to do with the tool. Right now, Instrument Playground feels like little more than a digital toy, only capable of creating simple beats. It'd be great to see it do more. This story will be updated at a later time.

While we have you, be sure to check out TechRadar's list of the best free music-making software in 2023.


Google’s AI plans hit a snag as it reportedly delays next-gen ChatGPT rival

Development on Google’s Gemini AI is apparently going through a rough patch as the LLM (large language model) has reportedly been delayed to next year.

This comes from tech news site The Information, whose sources claim the project will not see a November launch as originally planned. Now it may not arrive until sometime in the first quarter of 2024, barring another delay. The report doesn’t explain exactly why the AI is being pushed back. Google CEO Sundar Pichai did lightly confirm the decision by stating the company is “focused on getting Gemini 1.0 out as soon as possible [making] sure it’s competitive [and] state of the art”. That said, The Information does suggest the situation is due to ChatGPT's strength as a rival.

Since its launch, ChatGPT has skyrocketed in popularity, effectively becoming a leading force in 2023’s generative AI wave. Besides being a content generator for the everyday user, corporations are using it for fast summarization of lengthy reports and even building new apps to handle internal processes and projections. It’s been so successful that OpenAI has had to pause sign-ups for ChatGPT Plus as servers have hit full capacity.

Plan of attack

So what is Google’s plan moving forward? According to The Information, the Gemini team wants to ensure “the primary model is as good as or better than” GPT-4, OpenAI’s latest model. That is a tall order. GPT-4 is multimodal, meaning it can accept video, speech, and text to launch a query and generate new content. What’s more, it boasts overall better performance than the older GPT-3.5 model, and is now capable of performing more than one task at a time.

For Gemini, Google has several use cases in mind. The tech giant plans on using the AI to power new YouTube creator tools, upgrade Bard, plus improve Google Assistant. So far, it has managed to create mini versions of Gemini “to handle different tasks”, but right now, the primary focus is getting the main model up and running. 

It also plans to court advertisers with its AI, as advertising is “Google’s main moneymaker.” Company executives have reportedly talked about using Gemini to generate ad campaigns, including text and images. Videos could come later, too.

Bard upgrade

Google is far from out of the game, and while the company is putting a lot of work into Gemini, it's still building out and updating Bard.

First, if you’re stuck on your math homework, Bard will now provide step-by-step instructions on how to solve the problem, similar to Google Search. All you have to do is ask the AI or upload a picture of the question. Additionally, the platform can create charts for you by using the data you enter into the text prompts. Or you can ask it to make a smiley face like we did.

Google Bard's new chart plot feature

(Image credit: Future)

If you want to know more about this technology, we recommend learning about the five ways that ChatGPT is better than Google Bard (and three ways it isn't).

Follow TechRadar on TikTok for news, reviews, unboxings, and hot Black Friday deals!


Adobe’s new photo editor looks even more powerful than Google’s Magic Editor

Adobe MAX 2023 is less than a week away, and to promote the event, the company recently published a video teasing its new “object-aware editing engine” called Project Stardust.

According to the trailer, the feature has the ability to identify individual objects in a photograph and instantly separate them into their own layers. Those same objects can then be moved around on-screen or deleted. Selecting can be done either manually or automatically via the Remove Distractions tool. The software appears to understand the difference between the main subjects in an image and the people in the background that you want to get rid of.

What’s interesting is moving or deleting something doesn’t leave behind a hole. The empty space is filled in most likely by a generative AI model. Plus, you can clean up any left-behind evidence of a deleted item. In its sample image, Adobe erases a suitcase held by a female model and then proceeds to edit her hand so that she’s holding a bouquet of flowers instead.  

Image 1 of 2

Project Stardust editing

(Image credit: Adobe)
Image 2 of 2

Project Stardust generative AI

(Image credit: Adobe)

The same tech can also be used to change articles of clothing in pictures. A yellow down jacket can be turned into a black leather jacket or a pair of khakis into black jeans. To do this, users will have to highlight the piece of clothing and then enter what they want to see into a text prompt. 

Stardust replacement tool

(Image credit: Adobe)

AI editor

Functionally, Project Stardust operates similarly to Google’s Magic Editor, a generative AI tool present on the Pixel 8 series. That tool lets users highlight objects in a photograph and reposition them however they please. It, too, can fill gaps in images by creating new pixels. However, Stardust feels much more capable. The Pixel 8 Pro’s Magic Eraser can fill in gaps, but neither it nor Magic Editor can generate content. Additionally, Google’s version requires manual input, whereas Adobe’s software doesn’t need it.

Seeing these two side by side, we can’t help but wonder if Stardust is actually powered by Google’s AI tech. Very recently, the two companies announced they were entering a partnership and offering a free three-month trial of Photoshop on the web to people who buy a Chromebook Plus device. Perhaps this partnership runs a lot deeper than free Photoshop, considering how similar Stardust is to Magic Editor.

Impending reveal

We should mention that Stardust isn't perfect. If you look at the trailer, you'll notice some errors like random holes in the leather jacket and strange warping around the flower model's hands. But maybe what we see is Stardust in an early stage. 

There is still a lot we don’t know, like whether it's a standalone app or will be housed in, say, Photoshop, and whether Stardust is releasing in beta first or we're getting the final version. All will presumably be answered on October 10 when Adobe MAX 2023 kicks off. What’s more, the company will be showing other “AI features” coming to “Firefly, Creative Cloud, Express, and more.”

Be sure to check out TechRadar’s list of the best Photoshop courses online for 2023 if you’re thinking of learning the software, but don’t know where to start. 


Google’s new AI tool can help organize your messy Google Docs files

Google is launching yet another large language model (LLM) with the purpose of helping people organize their messy Google Docs accounts.

Say you’re a college student who typed a series of notes into a Google Docs file for class but didn’t put a lot of thought into the page’s structure. It’s all one big mess of randomly organized ideas. Now, you can ask the new NotebookLM tool to generate a short summary so you have a better idea of what you wrote. The original file will still be there for reference; it’s not going anywhere. The generative AI will even throw in some “key topics and questions” based on the summarized information to help users gain “a better understanding of the material.” What’s more, you are not limited to a single document: NotebookLM is able to pull from multiple sources for its content.

Directing the AI

Like Bard, Google’s other generative AI, you can ask NotebookLM questions to better direct its response if you want to know something in particular. In an example given, a student can upload an “article about neuroscience” and then tell the AI to construct a list of “key terms related to dopamine” from that particular piece.

NotebookLM isn’t only for summarizing your school notes. It can, according to Google, generate ideas, too. Google states a content creator can give the LLM their idea for a new video and then instruct it to write up a rough draft for a script or help a businessperson come up with questions to ask at an investors’ meeting.

As helpful as it may sound, there is one major problem. Believe it or not, NotebookLM can still hallucinate. Even though the main source is your own personal Google Docs account, there's still the possibility it could create false information. The company recommends double-checking the generated responses “against your original source material” just to be safe. If the AI is grabbing from multiple sources, Google states each response will have citations so you’ll know exactly where everything is coming from. 

Future release

NotebookLM is currently seeing a limited release as it is still experimental technology. If you want to try it out yourself, head on over to the Google Labs website and sign up for the waitlist. Once a spot opens up, Google will shoot over an email letting you know. The company is asking the lucky few who gain access to please provide feedback so it can improve the AI.

NotebookLM actually made its world debut during Google I/O 2023, when it was originally known as Project Tailwind. The event saw the tech giant tease a lot of upcoming devices and software, most of which have been released, with a few stragglers remaining. Universal Translator, for example, is still missing in action. If you don’t recall, it’s an “AI video dubbing service” that can translate speech in real time. There also isn’t a lot of information out there regarding the Sidekick panel, a Google Docs feature that can create text prompts while you write.

We asked Google if it could provide any insight on the missing I/O 2023 tech plus when it will release the final version of NotebookLM. This story will be updated at a later time.


Google’s new Chrome security update to make password management easier

Google is working on a sizable security update that'll introduce a total of seven new features to Chrome for desktop and iOS. 

Four of those features are currently making their way to desktop users, and they all involve the company’s Password Manager software. Be sure to keep an eye out for the patch once it arrives.

Starting from the top, Password Manager will have a new home in Chrome’s Settings menu. There, users will be able to manage their login credentials or adjust their security settings. But if you prefer a more direct approach, “you [can] create a desktop shortcut for Google Password Manager,” according to the announcement post.

The tech giant is also adding the ability to write down notes for specific logins. As an example, let’s say you have multiple accounts for one website, but you have a hard time remembering every single detail. You can click the key icon in Chrome’s address bar to open a context menu, revealing your notes that house those details. Clicking the pencil icon lets you make edits. 

Password notes on Chrome

(Image credit: Google)

Next, the company will allow users to import passwords from third-party managers into Chrome on desktop. People must first convert their credentials into a .csv file before uploading anything to the browser; detailed instructions on how to do this can be found on the Chrome Help website.

However, it appears the tool will only be able to bring in your information from certain apps. Those apps are Microsoft Edge, Safari, 1Password, Bitwarden, Dashlane and LastPass. No word on future plans to support other sources. 
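If your old manager only exports in its own format, you can reshape the data yourself. As a rough sketch (the column set below mirrors what Chrome's own password exporter produces — name, url, username, password — but the exact columns can vary by Chrome version, and the helper name and sample logins here are purely illustrative):

```python
import csv

# Chrome's importer generally expects the same columns its own exporter
# writes: name, url, username, password. (Assumption based on Chrome's
# export format — newer builds may also accept a "note" column.)
FIELDS = ["name", "url", "username", "password"]

def write_chrome_csv(credentials, path):
    """Write a list of credential dicts to a Chrome-importable CSV file."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        # extrasaction="ignore" drops any extra fields a third-party
        # manager's export might include (tags, folders, etc.)
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for cred in credentials:
            writer.writerow(cred)

# Two hypothetical logins pulled from another manager's export
creds = [
    {"name": "example.com", "url": "https://example.com",
     "username": "alice", "password": "hunter2"},
    {"name": "example.org", "url": "https://example.org",
     "username": "bob", "password": "correct-horse"},
]
write_chrome_csv(creds, "passwords.csv")
```

Since the file holds plaintext passwords, delete it as soon as the import finishes.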

Import password on Chrome

(Image credit: Google)

Coming soon

Regarding the final three additions, they will arrive later in the year.

First, Chrome on desktop will be getting biometric authentication, something that's been exclusive to the mobile app up to this point. Google states that enabling this will add a second “layer of security before” auto-filling credentials. The types of biometric authentication Chrome supports ultimately depend on your computer. For example, if you own a laptop sporting a fingerprint reader, the browser will allow you to sign into accounts with only your fingerprint.

On iOS, Password Checkup on Chrome will begin to flag faulty logins. The tool will urge you to change your information if it detects a weak, reused, or compromised password. The rest of the iOS update consists of minor design tweaks to make some things easier to do. Autofill prompts will be made larger, and whenever you review your saved credentials in the Settings, “multiple saved accounts for one website will be [now] grouped together.”

We reached out to Google for more info on when both the biometric authentication expansion and iOS patch will launch. This story will be updated at a later time.


Google’s AI-boosted search engine enters first public trial – here’s how to try it

Google has opened up access to its Search Labs testing program allowing users to try out the upcoming search engine update with the most notable change being the Search Generative Experience or SGE.

To be clear, Search Labs isn’t technically open to the public, as you’ll have to first join a waitlist. If you’ve already signed up, be sure to check your email for an invitation from Google, as invites are currently rolling out. Don’t worry if you haven’t entered, as there’s still room left on the waitlist on both desktop and mobile.

To join on desktop, you need to first install Google Chrome on your computer. From there, head on over to the Search Labs website, select Join Waitlist, and wait for the invitation to arrive. On mobile devices, launch the Google app. You should see a science beaker-esque icon in the top left corner of the screen. Just like before, select Join Waitlist then wait for the invite. Search Labs is available on both iOS and Android so no one’s being left out. Install the latest app update if you don't see the icon.

Limited-time only

Unless you’re a subscriber to Google One Premium, it may take a while until you get an invite. A recent report from 9to5Google states Premium subscribers are getting “priority access” to Search Labs, although “it won’t be immediate.” “Access spots are limited” at the moment, but more will open up over “the coming weeks.”

But once you get the invite, act fast. SGE and the rest of the Search Labs experiments will be available for a limited time only. It’s unknown for how long, so we asked Google for more information. This story will be updated if we hear back.

There’s been a fair amount of hype surrounding SGE ever since it was first revealed during I/O 2023. The technology essentially enhances Google Search to provide long, detailed responses to queries by taking context into consideration. It could very well completely change how people use the search engine.

Word of advice

For the lucky few who get early access to SGE, Google recommends starting off with simple terms so you can get used to how the AI works. Once you get a feel for it, try entering more specific queries. One of the highlighted use cases of SGE is to help people with their shopping. The AI can generate a detailed list of features, reviews, price points, and even link to the product itself.

In addition to Google’s advice, we have some of our own, because we’ve used multiple generative AI models, from Bing to Brave Summarizer. One thing we’ve learned is that generative AIs can hallucinate, meaning they come up with totally false information that bears no resemblance to reality. Don't always believe what you read. And be mindful of what you enter, as generative AIs keep the information you type in. In fact, some major tech corporations, like Samsung, have banned their employees from using ChatGPT after sensitive information was leaked.

Google I/O 2023 revealed a lot more than just the tech giant’s AI tools. Be sure to check out TechRadar’s coverage of the event as it happened.


Spotify users’ lives will get a lot simpler with Google’s new Play Store update

Spotify and Google are teaming up to give users on Android smartphones more choice on how they pay for a Premium subscription to the music platform.

Starting later this year, you'll be able to choose for the payment to go through either Spotify's own system or Google Play Billing, according to a recent Spotify blog post. The change is expected to come to other big-name apps as well, though we’ve yet to hear specifics.

The initiative is being called ‘User Choice Billing’, and it will give you the option to choose between giving more to the creators of the apps you use or continuing to contribute to Google’s Play Store infrastructure. 

But, which payment system is likely to be best for you?


Analysis: Which payment system will be better? 

Giving people more choice is rarely a bad thing, but here it definitely feels like you’re being asked to weigh up two nearly identical options.

When Epic Games tried to circumvent Apple’s App Store and Google’s Play Store payments in Fortnite mobile, it attracted customers by charging less for in-game goodies than if they were bought through the official stores. Unfortunately, we don’t expect that will be the case here, as this is a Google-led initiative.

If Spotify Premium costs less through Spotify than through Google’s Play Store (or vice versa), then you’d have no reason to opt for the pricier option.

So, assuming both systems are equally expensive for the customer, which is better? If you want to maximize the portion of your money going to Spotify, then its own payment system will most likely be best.

But, for convenience, Google’s billing is likely to be your best option. As all of your subscription payment data is stored in one place, the next time that you get a new debit or credit card you won’t have to remember everywhere that it’s used – you just have to update your details once, and your subscriptions will all continue.

Additionally, it’ll be easier to keep an eye on the subscriptions that you have. It’s not hard to forget that you have recurring billing set up for an app you rarely use, so by storing all of your subscriptions in one location within Google Play, you’d be able to quickly scroll through and find out what you’re paying without having to decode your bank statements.

If the service spreads to other apps and services, it might also give you the option to pay for digital goods without having to give your card details to a platform that you aren’t familiar with.

However, as with all upcoming features, we’ll have to wait and see just how much of a time or money saver 'User Choice Billing' ends up being when it launches.


Google IO 2022 dates, registration, and what to expect from Google’s online show

Google has announced that its IO 2022 conference will run from May 11 to May 12, and it will mainly be an online event.

While we don't have confirmation yet of what's appearing, it's likely that we'll see Android 13 appear as a preview, similar to last year's Android 12 launch at Google IO 2021.

Like last year, much of Google IO 2022 will be held online, but you can register to attend, according to its FAQ.

After Google canceled its 2020 event due to the pandemic, Google IO 2021 was online only. But with this year's event seemingly hosting an online and offline audience, we may see more demos this year of what Google's been working on.

We suspect Sundar Pichai will kick off the main keynote on May 11, which we expect will be free to stream as before.


We won't know officially until the May 11 keynote what Google intends to show off, but we can already extrapolate based on the rumors and leaks coming from Google's camp. Below, we'll predict Google's hardware and software lineup for Google IO 2022, as well as explain how the virtual event will work.

LATEST NEWS

Google IO 2022 is announced as a mostly-online event, but you can register to attend.

Cut to the chase

  • What is it? Google’s yearly developer conference
  • When is it? May 11-May 12, 2022
  • How can I register / how much does it cost? On the Google event page for free; all you need is a Google account

Google IO 2022 Registration

(Image credit: Google)

What are the Google IO 2022 dates?

Google revealed that its developer conference will take place from Wednesday, May 11 through Thursday, May 12. Google regularly schedules its annual conference for mid-May, making these dates on-brand for the company.

The traditional keynote hasn't been confirmed as yet, but we expect it to be held on the first day of IO 2022, May 11.

Google IO 2022 Logo

(Image credit: Google)

Is Google IO online-only?

Google canceled the May 2020 event in early March 2020, right at the advent of the pandemic when everyone had begun to shelter in place and live events felt increasingly unsafe.

Google normally holds the Google IO keynote and subsequent developer sessions in physical gatherings at the Shoreline Amphitheater in Mountain View, California, where COVID-19 restrictions on large events are slowly being lifted across the country. 

But with Google IO 2021 repeating the same plan as 2020, many had assumed that IO 2022 would follow. This has turned out to be partly true, as you can register to attend, but the company has said that invites will be hard to attain.

How Google IO 2022 will work

Most casual Google users associate Google IO with the keynote address, which streams online where anyone can watch it. But in previous years you could also buy a pass to attend Google developer sessions, new product demos, labs to learn about new code, and other events for professionals or hobbyists. 

This year, most of those events look to be virtual and free, with Google announcing more details as the event gets closer.

Some Google IO 2022 events will be free to all and rewatchable on-demand, as in previous years. But other events will require you to reserve a slot due to their popularity.

Google IO stage

(Image credit: Future)

What to expect at Google IO 2022

Based on Google's annual product and software calendar, plus all the leaks and rumors we've heard about, we have a general idea of what Sundar Pichai and the Google execs will discuss during the Google IO 2022 keynote. Here are the highlights:

Android Logo

(Image credit: Google)

Android 13

The latest Android OS is already in developer beta on the Pixel 6 and older Pixel phones, but we're certain that Google will spend time outlining Android 13's undisclosed tricks on stage.

With Apple almost certainly introducing iOS 16 at WWDC in June, Google will want to jump ahead of that and show off its newest innovations first. It could even announce the launch of the Android 13 public beta, though that isn't confirmed.

While the preview shows few hints towards Android 13, it does look as though privacy will be another focus for Google in this release, alongside more refined theme options.

With Android 12L focusing on tablets more than ever, there's a chance that we may see an Android 13L that's primarily tailored for tablets and foldable devices.

The back of a Google Pixel 6 Pro in yellow

(Image credit: Google)

Less likely: Pixel 7 and Pixel Tablet

Google is actively developing the Pixel 7 and a Pixel foldable phone, alongside a rumored Pixel Tablet, potentially for a simultaneous October 2022 release. That's far enough out that Google may not want to show off their specs or hardware until it's closer to Fall.

But Google IO has primarily been software-focused, with the only hardware shown in previous years being a Chromecast or Google Home products.

However, with IO 2022 allowing some attendees, there's always a chance that hands-on demos are something the company will want to take advantage of.

wearOS Google

(Image credit: Google)

New Fitbit hardware or Wear OS updates

Ever since Google bought Fitbit despite antitrust concerns, we've been curious how Google will put its personal spin on the best Fitbits of the future. 

Since Google IO 2021, we've seen a bigger focus from the company on how Wear OS 'fits' in its product line, but we've yet to see another Google-branded smartwatch return.

This may be the year that we see a section dedicated to Fitbit, Wear OS, and more. Google is aware that the Apple Watch rules over all others in the category, and 2022 may be the year that we see some more major improvements.
