Windows 11 speech recognition feature gets ditched in September 2024 – but only because there’s something better

Windows 11’s voice functionality is being fully switched over to the new Voice Access feature later this year, and we now have a date for when the old system – Windows Speech Recognition (WSR) – will be officially ditched from the OS.

The date for the replacement of WSR by Voice Access has been announced as September 2024 in a Microsoft support document (as Windows Latest noticed). Note that the change will be ‘starting’ in that month, so it will take further time to roll out to all Windows 11 PCs.

There’s a wrinkle here, though: this only applies to Windows 11 22H2 and 23H2, which means those still on Windows 11 21H2 – the original version of the OS – won’t have WSR removed from their systems.

Windows 10 users will still have WSR, of course, as Voice Access is a Windows 11-only feature.


Analysis: WSR to go MIA, but it’s A-OK (for the most part)

This move is no surprise, as Microsoft removed Windows Speech Recognition from Windows 11 preview builds back at the end of 2023. So, this change was always going to come through for release versions of Windows 11 – it was just a question of when, and now we know.

Will the jettisoning of WSR mean this feature is missed by Windows 11 users? Well, no, not really, because its replacement, Voice Access, is so much better in pretty much every respect. It is leaps and bounds ahead of WSR, in fact, with useful new features being added all the time – such as the ability to concoct your own customized voice shortcuts (a real timesaver).

Given that, there’s no real need to worry about the transition from WSR to Voice Access – the only potentially thorny issue is language support. WSR offers a whole lot more on that front, because it has been around a long time.

However, Voice Access is getting more languages added in the Moment 5 update. And in six months’ time, when WSR is officially canned (or that process begins), we’ll probably have Windows 11 24H2 rolling out, or it’ll be imminent, and we’d expect Voice Access to have its language roster even more filled out at that point.

Those on Windows 11 21H2 will be able to stick with WSR, as noted, but only a very small niche of users is left on that version, as Microsoft has been rolling out an automatic forced upgrade for 21H2 for some time now. (Indeed, the same is happening for 22H2 as of a few weeks ago.) Barely anyone should remain on 21H2 at this point, we’d imagine, and those who are might be stuck there due to a Windows update bug, or an oversight during the automated rollout.

Windows 10 users will continue with WSR as it’s their only option, but as a deprecated feature, it won’t receive any further work or upgrades going forward. That’s another good reason why Windows 11 users should want to upgrade to Voice Access, which is being actively developed at quite some pace.

Windows 11’s next big AI feature could turn your video chats into a cartoon

Windows 11 users could get some smart abilities that allow for adding AI-powered effects to their video chats, including the possibility of transporting themselves into a cartoon world.

Windows Latest spotted the effects being flagged up on X (formerly Twitter) by regular leaker XenoPanther, who discovered clues to their existence by digging around in a Windows 11 preview build.

These are Windows Studio effects, which is a set of features implemented by Microsoft in Windows 11 that use AI – requiring an NPU in the PC – to achieve various tricks. Currently, one of those is making it look like you’re making eye contact with the person on the other end of the video call. (In other words, making it seem like you’re looking at the camera, when you’re actually looking at the screen).

The new capabilities appear to include options to make the video feed look like an animated cartoon, a watercolor painting, or an illustrated drawing (like a pencil or felt tip artwork – we’re assuming something like the video for that eighties classic ‘Take on Me’ by A-ha).

If you’re wondering what Windows Studio is capable of as it stands, as well as the aforementioned eye contact feature – which is very useful in terms of facilitating a more natural interaction in video chats or meetings – it can also apply background effects. That includes blurring the background in case there’s something you don’t want other chat participants to see (like the fact you haven’t tidied up your study in about three years).

The other feature is automatic framing, which keeps you centered – with the image zoomed and cropped appropriately – as (or if) you move around.


Analysis: That’s all, folks!

Another Microsoft leaker, Zac Bowden, replied to the above tweet to confirm these are the ‘enhanced’ Windows Studio effects he’s talked about recently, and that they look ‘super cool’ apparently. They certainly sound nifty, albeit more off-the-wall than existing Windows Studio functionality – they’re fun extras rather than serious presentation-related AI powers.

This is something we might well see in testing soon, then, particularly as two leakers have chimed in here. We might even see these effects arrive in Windows 11 24H2 later this year.

Of course, there’s no guarantee of that, but it also makes sense given that Microsoft is fleshing out pretty much everything under the sun with extra AI capabilities, wherever they can be crammed in – with a particular focus on creativity at the moment (and the likes of the Paint app).

The future is very much the AI PC, complete with NPU acceleration, as far as Microsoft is concerned.

Microsoft makes big promises with new ‘AI PCs’ that will come with AI Explorer feature for Windows 11

Microsoft has told us that it’s working on embedding artificial intelligence (AI) across a range of products, and it looks like it meant it, with the latest reports suggesting a more fleshed-out ‘AI Explorer’ feature for Windows 11.

Windows Central writes that AI Explorer will be the major new feature of an upcoming Windows 11 update, with Microsoft rumored to be working on a new AI assistance experience – described as an ‘advanced Copilot’ – that will offer an embedded history and timeline feature.

Apparently, this will transform the activities you do on your PC into searchable moments. It’s said that AI Explorer will be usable in any app, enabling users to search conversations, documents, web pages, and images using natural language.

That promises a lot, implying you’ll be able to make requests like the following examples that Windows Central gives:

“Find me that list of restaurants Jenna said she liked.”

“Find me that thing about dinosaurs.”

The advanced Copilot should then present everything it deems relevant – including every related word, phrase, image, and topic it can pull. It’s not clear if this means bringing up results from users’ data stored locally on their PC, from the internet, or a combination of the two (as we see in Windows 11’s Search box). I personally would prefer it if AI Explorer kept to just searching local files stored on a device’s hard drive for privacy reasons, or at least gave us the option to exclude internet results.

The feature could also offer up suggestions for things you can do based on what you currently have on your screen. For instance, if you’re viewing a photo, you might see suggestions to remove the background in the Photos app. 

[Image: The new Photos app in Windows 11 (Image credit: Microsoft)]

When we expect more information

Rumors suggest that on March 21 there will be an announcement for the Surface Laptop 6 and Surface Pro 10, which are being hailed as Microsoft’s first real “AI PCs,” and will offer a range of features and upgrades powered by Microsoft’s next-gen AI tools. Sources say that these will go head-to-head with rivals like the iPad Pro and MacBook Pro in terms of efficiency and performance.

According to Neowin, we can look forward to the official launch of these PCs in April and June, but the AI features aren’t expected to be included right away. They’re forecast to be added in the second half of the year, so the first of these PCs to ship will be much like existing Windows 11 PCs, just with some flashy hardware upgrades. It also seems that AI Explorer is specifically intended for these new machines, even if not right away, and users of existing devices won’t be able to use it.

It sounds like we’ll have to continue to watch for more information from Microsoft, especially as it’s not clear exactly what to expect on March 21, but there’s a lot of hype and excitement here that I hope the company can fulfill. Copilot’s present form is generally thought to be underwhelming and somewhat disappointing, so Microsoft has a lot to deliver if it wants to impress users and show them that it’s leading the pack with generative AI.

ChatGPT takes the mic as OpenAI unveils the Read Aloud feature for your listening pleasure

OpenAI looks like it’s been hard at work, making moves like continuing to improve the GPT store and recently sharing demonstrations of one of the other highly sophisticated models in its pipeline, the video-generation tool Sora. That said, it’s not resting on ChatGPT’s previous success either, and is now giving the impressive AI chatbot the capability to read its responses out loud. The feature is being rolled out on both the web and mobile versions of the chatbot.

The new feature is called ‘Read Aloud’, as per an official X (formerly Twitter) post from the generative artificial intelligence (AI) company. It will come in useful for many users, including those who have different accessibility needs and people using the chatbot while on the go.

Users can try it for themselves now, according to The Verge, either on the web version of ChatGPT or the mobile apps (iOS and Android), and they can choose from five different voices for ChatGPT to use. The feature is available whether you use the free version open to all users, GPT-3.5, or the premium paid version, GPT-4. When it comes to languages, Read Aloud supports 37 languages (for now), and ChatGPT will be able to autodetect the language the conversation is happening in.

If you want to try it on the desktop version of ChatGPT, there should be a speaker icon below the generated text that activates the feature. On the mobile apps, you can tap and hold the text to open the Read Aloud player, where you can play, pause, and rewind the reading of ChatGPT’s response. Bear in mind that the feature is still being rolled out, so not every user in every region will have access just yet.
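
OpenAI hasn’t said what powers Read Aloud under the hood, but the company does offer a public text-to-speech API with a similar set of preset voices. For the curious, here’s a minimal sketch of how a developer could build a comparable read-aloud flow with that API – our illustration only, not ChatGPT’s actual implementation, and it assumes the openai Python package plus an OPENAI_API_KEY environment variable:

```python
# Minimal sketch of a read-aloud flow using OpenAI's public TTS API.
# Illustrative only - not how ChatGPT's Read Aloud feature is built.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Generate spoken audio for a piece of chatbot output
speech = client.audio.speech.create(
    model="tts-1",   # OpenAI's standard text-to-speech model
    voice="alloy",   # one of the preset voices on offer
    input="Here is the chatbot's response, read out loud.",
)

# Write the MP3 audio to disk so it can be played back
with open("response.mp3", "wb") as f:
    f.write(speech.read())
```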

A step in the right direction for ChatGPT

This isn’t the first voice-related feature that ChatGPT has received, with OpenAI introducing a voice chat feature in September 2023, which allowed users to make inquiries using voice input instead of typing. Users can keep this setting on, prompting ChatGPT to always respond out loud to their inputs.

The debut of this feature comes at an interesting time, as Anthropic recently introduced similar features for its own generative AI models, including Claude. Anthropic is an OpenAI competitor that has recently seen major investment from Amazon.

Overall, this new feature is great news in my eyes (or ears), primarily for expanding accessibility to ChatGPT, but also because I've had a Read-Aloud plugin for ChatGPT in my browser for a while now. I find it interesting to listen to and analyze ChatGPT’s responses out loud, especially as I’m researching and writing. After all, its responses are designed to be as human-like as possible, and a big part of how we process actual real-life human communication is by speaking and listening to each other. 

Giving ChatGPT a capability like this can help users think about how well it is responding, as it makes use of another one of our primary ways of receiving verbal information. Beyond the obvious accessibility benefits for blind or partially-sighted users, I think this is a solid move by OpenAI in cementing ChatGPT as the go-to generative AI tool, opening up another avenue for humans to connect to it.

Feeling lost in the concrete jungles of the world? Fear not, Google Maps introduces a new feature to help you find entrances and exits

Picture this: you’re using Google Maps to navigate to a place you’ve never been and time is pressing, but you’ve made it! You’ve found the location, but there’s a problem: you don’t know how to get into whatever building you’re trying to access, and panic sets in. Maybe that’s just me, but if you can relate it looks like we’re getting some good news – Google Maps is testing a feature that shows you exactly where you can enter buildings.

According to Android Police, Google Maps is working on a feature that shows users entrance indicator icons for selected buildings. I can immediately see how this could make it easier to find your way in and out of a location. Loading markers like this for every suitable building in a given area would require a lot of internet data, especially in metropolitan and densely packed areas, but it seems Google has accounted for this: the entrance icons only become visible when you select a precise location and zoom in closely.
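
To give a rough sense of how that kind of gating might work, here’s a toy sketch of the reported logic – entirely our illustration, with a made-up zoom threshold, not Google’s actual code:

```python
# Toy sketch of conditionally showing entrance markers (our illustration,
# not Google's code). Marker data is only fetched and drawn once a specific
# place is selected AND the map is zoomed in far enough.
from typing import Optional

ZOOM_THRESHOLD = 18  # hypothetical zoom level counting as "zoomed in closely"

def should_show_entrances(selected_place_id: Optional[str], zoom_level: int) -> bool:
    """Return True only when a place is selected and the view is close up."""
    return selected_place_id is not None and zoom_level >= ZOOM_THRESHOLD

assert not should_show_entrances(None, 20)             # nothing selected
assert not should_show_entrances("expo-center-1", 12)  # too zoomed out
assert should_show_entrances("expo-center-1", 19)      # selected and close up
```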

Google Maps is an immensely popular app for navigation as well as looking up recommendations for various activities, like finding attractions or places to eat. If you’ve ever done this in practice, you’ve possibly had a situation like the one described above, especially if you’re trying to find your way around a larger attraction or building. Trying to find the correct entrance to an expo center or sports stadium can be a nightmare. Places like these often have multiple entrances with different accessibility options – as do underground train stations that stretch across several streets.

Google’s experimentation should help users manage those parts of their journeys better – starting with only certain users and certain buildings for now – by displaying icons that indicate both where you can enter a place and where you can exit it (useful if there are entrance-only or exit-only doors, for example). This follows Google Maps’ recent addition of indicators showing the best station entrances and exits for users of public transport.

[Image: Google Maps being used to travel across New York (Image credit: Shutterstock / TY Lim)]

The present state of the new feature

Android Police tested the new feature on Google Maps version 11.17.0101 on a Google Pixel 7a. As Google seemingly intended, Google Maps showed entrances for a place only when it was selected and the user zoomed in on it, displaying a white circle with an ‘entry’ symbol on it. That said, Android Police wasn’t able to use the feature on other devices running the latest version of Google Maps in different regions, which suggests that Google is rolling this feature out gradually following limited and measured testing.

While using the Google Pixel 7a, Android Police tested various types of buildings including hotels, doctors’ offices, supermarkets, hardware stores, cafes, and restaurants, in cities including New York City, Las Vegas, San Francisco, and Berlin. Some places had these new entrance and exit markers and some didn’t, which probably means that Google is still in the process of gathering accurate and up-to-date information on these places, most likely via its Street View tool. Another issue was that some of the indicated entrances were not in the right place, but teething issues are inevitable, and this problem seemed more common for smaller buildings, where it’s actually easier to find the entrance once you’re there in person.

The entrances were sometimes marked by a green arrow instead of a white circle, and it’s not clear at this point what the difference between the two markers means. Google Maps has a reputation as a very helpful, functional, and often dependable app, so whatever new features are rolled out, Google probably wants to make sure they’re up to a certain standard. I hope it completes the necessary stages of testing and implementing this new feature, and I look forward to using it as soon as I can.

This upcoming feature on Google Keep may finally sway me away from Apple Notes for good

Google Keep is a popular task management and note-taking tool integrated with Google Workspace, so you can create and tick off to-do lists as you work on your computer or phone. The mobile version of Google Keep could be about to get a new feature that may tempt people away from their other note-taking apps – lock screen access to your notes.

According to 9to5Google, the team behind Google Keep has been pushing for it to become the default note-taking app on Android devices, in the same way that Apple Notes is the default note-taking app on every iPhone, iPad, and Mac. If Google Keep does become the de facto note-taking app of choice on Android devices, this opens the door to the app having more features that are integrated more intimately into your phone.

Alongside lock screen access to recent notes, we could also see improved stylus support so you can jot down your thoughts quickly and do fun doodles with a bit more control of your strokes. In version 5.24 of the app, there’s a new section of the settings menu that lists the lock screen access as ‘coming soon’, which gives me hope that we’ll see the feature sooner rather than later. 

I have no memory, I need lock screen access, please

As an extremely forgetful person who needs to make lists for everything, I am so excited about the possibility of being able to look at my lock screen and see all my important to-dos at a glance, especially if the feature becomes available to non-Android users too. 

You can have shopping lists, reminders, positive affirmations, and reflections all on your lock screen and tick them off as you go through them without even needing to unlock your phone. I currently use Google Keep on my work computer exclusively to tick things off as I go through the day. If I can have my professional to-do list not just on a mobile app but very visible on my lock screen, I can keep tabs on what needs to be done while on my commute to work, and jot down tasks to carry over to the next day on the way back home. 

Apple Notes has been my default note-taking app mostly because I’m an iPhone user, and while it has had a few improvements here and there (like adding grids, text formatting options, and the ability to drop photos into the app), it’s ultimately nothing special in the world of note-taking apps. If Google Keep can implement lock screen access outside of just Android phones, you’d better believe I’m shifting all my shopping list reminders over immediately and saying goodbye to Apple Notes for good.

YouTube Shorts gains an edge over TikTok thanks to new music video remix feature

YouTube is revamping the Remix feature on its ever-popular Shorts by allowing users to integrate their favorite music videos into their content.

This update consists of four tools: Sound, Collab, Green Screen, and Cut. The first lets you take a track from a video for use as background audio. Collab places your Short next to an artist’s content so you can dance alongside it or copy the choreography itself. Green Screen, as the name suggests, allows users to turn a music video into the background of a Short. Then there’s Cut, which gives creators the ability to lift a five-second portion of the original source to add to their own content, and repeat as often as they like.

It’s important to mention that none of these tools are brand new to the platform, as they were actually introduced years prior. Green Screen, for instance, hit the scene back in 2022, although it was only available for non-music videos.

Remixing

The company is rolling out the remix upgrade to all users, as confirmed by 9to5Google, but it’s releasing it incrementally. On our Android device, we only received part of the update, as most of the tools are missing. Either way, implementing one of the remix features is easy to do. The steps are exactly the same across the board, with the only difference being the option you choose.

To start, find the music video you want to use in the mobile app and tap the Remix button, which you’ll find in the description carousel. Next, select the remix tool. At the time of writing, we only have access to Sound, so that’ll be the one we use.

[Image: YouTube Shorts’ new Remix tool for music videos (Image credit: Future)]

You will then be taken to the YouTube Shorts editing page, where you highlight the 15-second portion you want to use in the video. Once everything’s sorted, you’re free to record the Short with the music playing in the background.

Analysis: A leg up on the competition

The Remix feature’s expansion comes at a very interesting time. Rival TikTok recently lost access to the vast music catalog owned by Universal Music Group (UMG), meaning the platform can no longer host tracks by artists represented by the record label. This includes megastars like Taylor Swift and Drake. TikTok videos with “UMG-owned music” will be permanently muted, although users can replace the audio with songs from other sources.

The breakup between UMG and TikTok was the result of contract negotiations falling through. Apparently, the social media platform was trying to “bully” the record label into accepting a bad deal that wouldn’t have adequately protected artists from generative AI and online harassment.  

YouTube, on the other hand, was more cooperative. The company announced last August that it was working with UMG to ensure “artists and rights holders would be properly compensated for AI music.” So creators on YouTube are safe to take whatever songs they want from the label – for now. It’s possible future negotiations between the two entities will turn sour down the line.

If you’re planning on making YouTube Shorts, you’ll need a smartphone with a good camera. Be sure to check out TechRadar’s list of the best iPhones for 2024 if you need some recommendations.

Microsoft does DLSS? Look out world, AI-powered upscaling feature for PC games has been spotted in Windows 11

Windows 11’s big update for this year could come with an operating system-wide upscaling feature for PC games in the same vein as Nvidia DLSS or AMD FSR (or Intel XeSS).

The idea is to get smoother frame rates by upscaling the game’s resolution – in other words, running the game at a lower resolution and artificially ramping the image up to a higher level of detail, with a greater level of fluidity than running natively at that higher resolution, all driven by AI.
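
To illustrate why that’s a win, here’s some back-of-the-envelope math (our numbers, purely for illustration – Microsoft hasn’t specified any resolutions): rendering internally at 1440p and upscaling to 4K means the GPU shades well under half the pixels it would at native 4K.

```python
# Back-of-the-envelope illustration (our numbers, not Microsoft's):
# how much per-frame shading work an upscaler can save.
def pixel_count(width: int, height: int) -> int:
    return width * height

native_4k = pixel_count(3840, 2160)       # target output resolution
internal_1440p = pixel_count(2560, 1440)  # lower internal render resolution

saving = 1 - internal_1440p / native_4k
print(f"4K pixels per frame:    {native_4k:,}")       # 8,294,400
print(f"1440p pixels per frame: {internal_1440p:,}")  # 3,686,400
print(f"Shading work saved:     {saving:.0%}")        # ~56%
```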

The ‘Automatic Super Resolution’ option is currently hidden in test builds of Windows 11 (version 26052 to be precise). Leaker PhantomOfEarth enabled the feature and shared some screenshots of what it looks like in the Graphics panel in the Settings app.

There’s a system-wide toggle for Microsoft’s own take on AI upscaling, and per-app settings if you wish to be a bit more judicious about how the tech is applied.

In theory, this will be ushered in with Windows 11 24H2 – which is now confirmed by Microsoft as the major update for its desktop OS this year. (There’ll be no Windows 12 in 2024, as older rumors had suggested was a possibility).

We don’t know that Automatic Super Resolution will be in 24H2 for sure, though, as it could be intended for a later release, or indeed it might be a concept that’s scrapped during the testing process.


[Image: A PC gamer looking happy (Image credit: Shutterstock)]

Analysis: Microsoft’s angle

This is still in its very early stages, of course – and not even officially in testing yet – so there are a lot of questions about how it will work.

In theory, it should be a widely applicable upscaling feature for games that leverages the power of AI, either via a Neural Processing Unit – the NPUs now included in Intel’s new Meteor Lake CPUs, or AMD’s Ryzen 8000 silicon – or the GPU itself (employing Nvidia’s Tensor cores, for example, which are used to drive its own DLSS).

As noted, we can’t be sure exactly how this will be applied, but it’s certainly a game-targeted feature – the text accompanying it tells us that much – likely to be used for older PC games, or those not supported by Nvidia DLSS, AMD FSR, or Intel XeSS for that matter.

We don’t expect Microsoft will try to butt heads with Nvidia by attempting to outdo Team Green’s own upscaling, but rather to supply a more broadly supported alternative – one which won’t look as good. The trade-off is that wider level of support, much as we’ve already seen with AMD’s Radeon Super Resolution (RSR), which is, in all likelihood, what this Windows 11 feature will resemble the most.

Outside of gaming, Automatic Super Resolution may also be applicable to videos, and perhaps other apps – video chatting, maybe, at a guess – to provide some AI supercharging for the provided footage.

Again, there are already features from Nvidia and AMD (the latter is still incoming) that do video upscaling, but again Microsoft would offer broader coverage (as the name suggests, Nvidia’s RTX Video Super Resolution is only supported by RTX graphics cards, so other GPUs are left out in the cold).

Automatic Super Resolution is, more likely than not, something Microsoft will be looking to implement to complement other OS-wide technologies for PC gamers. That includes Auto HDR, which brings HDR (or an approximation of it) to SDR games. (And funnily enough, it looks like Nvidia is working on its own take on that ability, building on RTX Video HDR, which is already here for video playback.)

As you may have noticed by this point, there are a lot of these kinds of performance-enhancing technologies around these days, which is telling in itself. Perhaps part of Microsoft’s angle is a simple system-level switch that confused users can just turn on for upscaling trickery across the board, and ‘it just works’, to quote another famous tech giant.

Your Microsoft OneDrive storage is about to get smarter thanks to this time-saving Copilot feature

Microsoft has been on fire recently with the addition of some super-useful features powered by its artificial intelligence assistant Copilot, and it looks like OneDrive is finally getting a much-needed AI boost. Soon, you’ll be able to search through your files without having to open them to find the relevant info, simply by asking Copilot the question you want answered.

Say you’re looking for a specific figure or quote but you have too many files to start searching, or you’re like me and don’t organize anything into folders at all (oops). Instead of opening every document and scanning through to find what you’re looking for, you’ll be able to pull up Copilot and tell it what you want to find. You could ask it to pull a specific bit of info from a lecture presentation or group project, and Copilot will go through the files and provide the relevant answers.

According to MSPoweruser, this feature will work across multiple file types including DOC, DOCX, PDF, TXT, and more, so you won’t be restricted to just Word documents.

The feature is included in the Microsoft 365 roadmap, and is due to be released to users sometime in May 2024. Hopefully, we’ll see it trickle down to Microsoft’s free Office for the web suite (formerly known as Office Online), which includes an in-browser version of Microsoft Word and 5GB of OneDrive cloud storage.

A win for the unorganized girlies

This feature alone is enough to entice me away from Google Drive, just for the convenience. There’s nothing worse than having to crawl through your folders and files to find something you’re looking for.

I would have appreciated this feature when I was at university, especially with how many notes and textbooks I had scattered around my school OneDrive account. By bringing Copilot into the mix, I could have found whatever I was looking for so much faster and saved myself a fair amount of panic.

If you work in an industry where you’re constantly dealing with new documents containing critical information, or you’re a student consistently downloading research papers and textbooks, this new addition to Copilot’s nifty AI-powered skill set is well worth keeping an eye out for.

While I am disappointed this feature will be locked behind the Microsoft 365 subscription, it’s not surprising – Microsoft is investing a lot of time and money into Copilot, so it makes sense that it would use its more advanced features to encourage people to pay to subscribe to Microsoft 365. However, there’s a danger that if it paywalls all the most exciting features, Copilot could struggle to be as popular as it deserves to be. Microsoft won’t want another Clippy or Cortana on its hands.

Microsoft Edge could soon get its own version of Google’s Circle to Search feature

As the old saying goes, “Imitation is the sincerest form of flattery”. Microsoft is seemingly giving Google a huge compliment as new info reveals the tech giant is working on its own version of Circle to Search for Edge.

If you’re not familiar, Circle to Search is a recently released AI-powered feature on the Pixel 8 and Galaxy S24 series of phones. It allows people to circle objects on their mobile devices to quickly look them up on Google Search. Microsoft’s rendition functions similarly. According to the news site Windows Report, it’s called Circle To Copilot, and the way it works is that you circle an on-screen object with the cursor – in this case, an image of the Galaxy S24 Ultra.

Immediately after, Copilot appears from the right side with the circled image attached as a screenshot in an input box. You then ask the AI assistant what the object in the picture is, and after a few seconds, it’ll generate a response. The publication goes on to state that the tool also works with text: to highlight a line, you likewise draw a circle around the words.

Windows Report states Circle To Copilot is currently available on the latest version of Microsoft Edge Canary, an experimental build of the browser meant for users and developers who want early access to potential features. The publication has a series of instructions explaining how you can activate Circle To Copilot: you’ll need to enter a specific command into the browser’s Properties menu.

If the command works for you, Circle To Copilot can be enabled by going to the Mouse Gesture section of Edge’s Settings menu and clicking the toggle switch – it’s the fourth entry from the top.
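
We won’t reproduce Windows Report’s exact instructions here, but for context, Chromium-based browsers like Edge accept experimental feature toggles as launch switches appended to the shortcut’s Target field. Here’s a purely hypothetical example of what such a command tends to look like – the feature name below is our placeholder for illustration, not a confirmed flag:

```
:: Hypothetical example - "msCircleToCopilot" is our placeholder, not a confirmed flag name
"C:\Users\<you>\AppData\Local\Microsoft\Edge SxS\Application\msedge.exe" --enable-features=msCircleToCopilot
```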

Work in progress

We followed Windows Report's steps ourselves; however, we were unable to try out the feature. All we got was an error message stating the command to activate the tool was not valid. It seems not everyone who installs Edge Canary will gain access, although this isn’t surprising. 

The Canary browser is, by its nature, unstable. It’s a testing ground for Microsoft, so things don’t always work as well as they should – if at all. It’s possible Circle To Copilot will function better in a future patch, but we don’t know when that will roll out. We were disappointed the feature was inaccessible on our PC, because we had a couple of questions. Is this something that needs to be manually triggered in Copilot? Or will it function like Ask Copilot, where you highlight a piece of content, right-click it, and select the relevant option in the context menu?

Out of curiosity, we installed Edge Canary on our Android phone to see if it had the update. As it turns out, it doesn’t. It may be that Circle To Copilot is exclusive to Edge on desktop, but this could change in the future.

Be sure to check TechRadar’s list of the best AI-powered virtual assistants for 2024.
