iOS 17.4 might give you more options for turning off those FaceTime reactions

The FaceTime video reactions Apple introduced in iOS 17 are kind of cool – fireworks when you show two thumbs up, and so on – but you don't necessarily want them going off on every call. Now it looks as though Apple is about to make the feature less prominent.

As per MacRumors, with the introduction of iOS 17.4 and iPadOS 17.4, third-party video calling apps will be able to turn the reactions off by default. In other words, you won't suddenly find balloons filling the screen on a serious call with your boss.

That “by default” is the crucial bit – at the moment, whenever you fire up FaceTime or another video app for the first time, these reactions will be enabled. You can turn them off (and they will then stay off for that app), but you need to remember to do it.

The move also means third-party developers get more control over the effects that are applied at a system level. As The Verge reports, one telehealth provider has already taken the step of informing users that it has no control over these reactions.

Coming soon

A thumbs down is another reaction you can use (Image credit: Apple)

This extra flexibility is made possible through what's called an API, or Application Programming Interface – a way for apps to interact with the operating system. It means the system-wide iOS or iPadOS setting would no longer dictate how reactions behave in every other video app.
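For developers curious what's already possible, here's a minimal Swift sketch – our own illustration, not Apple sample code – using the read-only properties Apple added in iOS 17 to observe the system-level reactions setting. The opt-out behavior described in the 17.4 beta hadn't been publicly documented at the time of writing, so the comments below are assumptions about how it slots in.

```swift
import AVFoundation

// iOS 17 exposes the user's system-level reactions preference to apps.
// In iOS 17.0–17.3 these class properties are read-only: an app can observe
// whether reactions and gesture triggers are on, but can't pick its own default.
func logReactionState() {
    let effectsOn  = AVCaptureDevice.reactionEffectsEnabled          // balloons, fireworks, etc.
    let gesturesOn = AVCaptureDevice.reactionEffectGesturesEnabled   // hand-gesture triggers
    print("Reaction effects: \(effectsOn), gesture triggers: \(gesturesOn)")
}
```

The change described above would presumably let a third-party calling app start with those effects switched off unless the user turns them on, rather than the other way around.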

The changes have been spotted in the latest beta versions of iOS 17.4 and iPadOS 17.4, though there's no guarantee that they'll stay there when the final version of the software rolls out. As yet it's not clear if the same update will be applied to macOS.

iOS 17.3 was pushed out on January 22, so we shouldn't have too much longer to wait to see its successor. Among the iOS 17.4 features in the pipeline, based on the beta version, we've got game streaming apps and automatic transcripts for your podcasts.

Apple will be hoping that a new version helps to encourage more people to actually install iOS 17 too. Uptake has been slower than it was with iOS 16, with users citing bugs and a lack of new features as reasons not to apply the update.

Opera: new DMA rules a chance “to put pressure” on Apple to open up for all

It looks like Apple's enclosed ecosystem is slowly opening up—in the EU, at least. On January 25, 2024, the Big Tech giant revealed changes to its App Store and business model to meet new requirements under the Digital Markets Act (DMA), due to come into force in March.

Apple's announcement was met with controversy, though. Many commentators, including Meta founder Mark Zuckerberg and music app behemoth Spotify, deemed it a farce. According to VPN service provider Proton VPN, “Apple is trying to profit off the DMA.” Echoing such concerns, web browser maker Mozilla sees this as “another example of Apple creating barriers to prevent true browser competition on iOS.”

Developers at Opera are more optimistic about Apple's new iOS browser rules and have decided to celebrate by launching an AI-powered alternative to Safari. I talked with Jona Bolin, Product Manager for Opera's iOS browser, to understand what all of this means for users in and out of Europe.

An opportunity to get more control

“I think it's great that they are changing the regulations,” Bolin told me. “For us, it's an opportunity to have high control.” 

He went on to explain that while distribution is a major factor for other developers, the fact that Opera browser is a free service means that it won't be affected as much by new fees and payment requirements.

“Even though, we would have to develop two different apps,” Bolin told me, adding that the challenge will be encouraging users to migrate from one app to another instead. 

That's because as Apple opens up to third-party web browser engines for the first time—until now only Safari's WebKit engine was allowed on iOS—the company has only done so for apps distributed in the EU. This ultimately means twice as much work for browser developers.

Despite this burden, Bolin expects Apple's changes to make it easier for the team to implement the same level of features across Opera's range of apps. “Out of the box, we would get high security and a better process from where we can build on top of,” he added. 

The Norwegian browser maker has already announced plans to bring its AI-centric browser, Opera One, to iOS to give users a better AI-powered alternative to Safari. It's expected to be released in the next few months.

Outside the EU, both the UK and the US are considering legislation that echoes the DMA's effort to ensure fair competition within the tech market and protect people's digital rights.

Bolin hopes that the new DMA requirements in the EU are only the first step “to put pressure” on the big tech giant to open up its ecosystem for all.

He said: “I think more countries need to move forward and then maybe Apple will also change. We also believe that [the DMA] can be a good test run, so maybe Apple would realize that it's also working on their side. We hope that in the future they will bring it to other markets—we believe that it will happen, eventually.” 

Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, that means the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.

These new smart glasses can teach people about the world thanks to generative AI

It was only a matter of time before someone added generative AI to an AR headset, and start-up Brilliant Labs has taken the plunge with its recently revealed Frame smart glasses.

Looking like a pair of Where's Waldo glasses (or Where's Wally to our UK readers), the Frame houses a multimodal digital assistant called Noa. It consists of multiple AI models from other brands working in unison to help users learn about the world around them, simply by looking at something and then issuing a command. Let's say you want to know more about the nutritional value of a raspberry: thanks to OpenAI tech, you can command Noa to perform a “visual analysis” of the subject, with the read-out appearing on the outer AR lens. Additionally, it can offer real-time language translation via Whisper AI.

The Frame can also search the internet via its Perplexity AI model, and search results will even provide price tags for potential purchases. In a recent VentureBeat article, Brilliant Labs claims Noa can provide instantaneous price checks for clothes just by scanning the garment, or fish out listings for new houses on the market – all you have to do is look at the house in question. It can even generate images on the fly through Stable Diffusion, according to ZDNET.

Evolving assistant

Going back to VentureBeat, its report offers deeper insight into how Noa works.

The digital assistant is always on, constantly taking in information from its environment. And it’ll apparently “adopt a unique personality” over time. The publication explains that upon activating for the first time, Noa appears as an “egg” on the display. Owners will have to answer a series of questions, and upon finishing, the egg hatches into a character avatar whose personality reflects the user. As the Frame is used, Noa analyzes the interactions between it and the user, evolving to become better at tackling tasks.

Brilliant Labs Frame exploded view

(Image credit: Brilliant Labs)

An exploded view of the Frame can be found on Brilliant Labs' official website, providing interesting insight into how the tech works. On-screen content is projected by a micro-OLED display onto a “geometric prism” in the lens – 9to5Google points out this is reminiscent of how Google Glass worked. On the nose bridge is the Frame's camera, sitting on a PCBA (printed circuit board assembly).

At the end of the stems, you have the batteries inside two big hubs. Brilliant Labs states the frames can last a whole day, and to charge them, you’ll have to plug in the Mister Power dongle, inadvertently turning the glasses into a high-tech Groucho Marx impersonation.

Brilliant Labs Frame with Mister Power

(Image credit: Brilliant Labs)

Availability

Currently open for pre-order, the Frame will run you $350 a pair. It'll be available in three colors: Smokey Black, Cool Gray, and the transparent H20. You can opt for prescription lenses, which bumps the price tag to $448. There's a chance Brilliant Labs won't have your exact prescription, in which case it recommends selecting the option that most closely matches it. Shipping is free and the first batch rolls out April 15.

It appears all of the AI features are subject to a daily usage cap, though Brilliant Labs has plans to launch a subscription service lifting the limit. We reached out to the company for clarification and asked several other questions, such as exactly how the Frame receives input. This story will be updated at a later time.

Until then, check out TechRadar's list of the best VR headsets for 2024.

Google Maps is getting an AI-boosted upgrade to be an even better navigation assistant and your personal tour guide

It looks like Google is going all-in on artificial intelligence (AI), and following the rebranding of its generative AI chatbot from Bard to Gemini, it’s now bringing generative AI recommendations to Google Maps.

The AI-aided recommendations will help Google Maps perform even better searches for a variety of destinations, and the feature is also supposedly able to function as an advisor that can offer insights and tips about things like location, budgets, and the weather. Once the feature is enabled, it can be accessed through the search function, much like existing Google Maps features. Currently, it’s only available for US users, but hopefully, it will roll out worldwide very soon. 

This upgrade of Google Maps is the latest move in Google’s ramped-up AI push, which has seen developments like AI functionality integrated into Google Workspace apps. We’ve also had hints before that AI features and functions were coming to Google Maps – such as an improved Local Guides feature. Local Guides is intended to synthesize local knowledge and experiences and share them with users to help them discover new places.

What we know about how this feature works

Android Police got a first look at how users are introduced to the new AI-powered recommendations feature. A reader got in touch with the site and explained how they were given the option to Search with generative AI in their Google Maps search bar. Selecting it opens a short onboarding page that details how the new feature uses generative AI to provide you with recommendations. Tapping Continue brings up the next page, which gives users a list of suggested queries – for example, nearby attractions where they can kill time, or good local restaurants.

Similarly to ChatGPT, Google Maps apparently also includes tips toward the bottom of that page to help you improve your search results. Users can add more details to fine-tune their search, like their budget, a place or area they might have in mind, and what the weather is expected to be like when they're planning to go. If you select one of these suggested queries, Google Maps will then explain how it goes about selecting specific businesses and locations to recommend.

When the user doesn’t specify an area or region, Google Maps resorts to using the user’s current location. However, if you’d like to localize your results to an area (whether you’re there or not), you’ll have to mention that in your search.

After users try the feature for the first time and go through the short onboarding in Maps, they can access it instantly through the search menu. According to Android Police, Search with generative AI will appear below the horizontal menu that lists your saved locations, such as Home, Work, and so on.

A promising feature with plenty of potential

Again, this feature is currently restricted to people in the US, but we hope it’ll open up to users in other regions very soon. Along with AI recommendations, Google Maps is also getting a user interface redesign aimed at upgrading the user experience.

While I get that some users might be getting annoyed or overwhelmed with generative AI being injected into every part of our digital lives, this is one app I'd like to try when equipped with AI. Also, Google is very savvy when it comes to improving the user experience of its apps, and I’m keen to see how this feature’s introduction plays out.

A new, much more convenient way to join Wi-Fi networks may be coming to Windows 11 and I can’t wait

Microsoft could be releasing a new feature for Windows 11 that would make connecting to Wi-Fi networks so much quicker and easier. Users may soon be able to join new networks by scanning a QR code with the camera app, eliminating the need to muck about searching for (or remembering) complicated passwords and keeping track of which password belongs to each network. 

According to MSPoweruser, the feature is part of the latest Windows 11 Insider Preview Build 26052. The Windows Insider program is a community that gives Windows enthusiasts and developers early access to potential new features so they can offer feedback before those features are made available to regular Windows 11 users.

The build was made available to the Dev Channel in early February, and it demonstrates how users can point their phone's camera at a QR code displayed on a laptop or PC that's already connected to the Wi-Fi; a pop-up then appears on the phone that lets them connect to the network without having to enter any passwords.

This also works with the Camera app in Windows 11, allowing you to connect new Windows 11 devices to the wireless network (either via a QR code displayed on an already-connected device, or by scanning the QR code that sometimes comes with new routers and is printed in their manuals). Of course, those devices will need a camera, which won't be too hard for Windows 11 tablets and laptops, though it may be a bit cumbersome. Desktop PCs will be harder, but you can add a camera to your computer – check out our best webcams guide for our top picks.
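Microsoft hasn't said exactly what its QR codes contain, but most camera apps already recognize the de facto “WIFI:” provisioning format, so here's a minimal Swift sketch – our own illustration, not Microsoft's specification – of how the contents of such a code are typically built.

```swift
import Foundation

// Builds the widely recognized Wi-Fi provisioning payload:
//   WIFI:T:<auth>;S:<ssid>;P:<password>;;
// The special characters \ ; , " : must be backslash-escaped.
func wifiQRPayload(ssid: String, password: String, auth: String = "WPA") -> String {
    func escape(_ value: String) -> String {
        var escaped = ""
        for character in value {
            if "\\;,\":".contains(character) { escaped.append("\\") }
            escaped.append(character)
        }
        return escaped
    }
    return "WIFI:T:\(auth);S:\(escape(ssid));P:\(escape(password));;"
}

// Feed the resulting string into any QR generator to get a scannable code.
print(wifiQRPayload(ssid: "HomeNet", password: "s3cret;pass"))
// Prints: WIFI:T:WPA;S:HomeNet;P:s3cret\;pass;;
```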

Sharing is caring

The feature should also work for mobile hotspots, so you’ll be able to share your connection a lot quicker when you’re working on the go with other team members, or collaborating on group projects for school outside of the classroom. One of my least favorite parts of setting up a new device or working outside is fiddling with the Wi-Fi, so I’m pretty hyped about this feature.

We do have to keep in mind that some of the features put in the Dev Channel don't actually make it to the public release.

That being said, we do hope the feature comes to regular Windows 11 soon, because it's an incredibly convenient way to make Wi-Fi sharing easier and to let other people connect to your network without you having to hand over the password. And if you want to give your wireless network an upgrade, check out our picks for the best Wi-Fi routers.

Microsoft reveals next evolution of Windows – and it won’t be Windows 12

Microsoft has confirmed that the next update for Windows, Windows 11 version 24H2, is indeed coming later this year. While it's good to know that Microsoft is planning a major update for Windows 11, the news will be disappointing to anyone who was hoping for an imminent release of Windows 12, the rumored next generation of the Windows operating system (OS). We expect Windows 11 24H2 to arrive around September or October, and to continue Microsoft's focus on the AI-aided user experience and the quality-of-life upgrades the company has been so keen on pushing lately.

This does mean we can put any expectations of Windows 12 to bed, at least until the second half of 2024 is over. Many people were convinced that Windows 11's successor was coming sooner rather than later because of the heavy emphasis on next-generation AI features and experiences. This rumored release was code-named Hudson Valley, and it was anticipated to get an official announcement in mid-2024 and start rolling out in the latter half of the year.

A leaked screenshot of a possible Windows 12 OS mockup.

(Image credit: Microsoft)

Straight from the horse's mouth (or rather, blog)

According to Windows Central, this confirmation of Windows’ annual major feature update comes to us from a Windows 11 preview build changelog published on February 8, 2024. Microsoft writes: 

“Starting with Build 26-xx today, Windows Insiders in the Canary and Dev Channels will see the versioning updated under Settings > System > About (and winver) to version 24H2. This denotes that Windows 11, version 24H2 will be this year’s annual feature update.”

Windows 11 24H2 will still absolutely be worth updating to, as Microsoft is currently one of the leaders in the personal computing space actively pursuing and developing AI user assistance. We've seen evidence of this with Microsoft's enthusiastic debut and continued campaign to bring Windows Copilot, its digital AI assistant that's even getting its own keyboard button, to users. New AI features will make use of the cutting-edge processors in recently manufactured devices from the likes of AMD, Intel, and Qualcomm, all of which have recently released (or at least announced) new chips with dedicated support for artificial intelligence.

Qualcomm Snapdragon 8 Gen 2 sustained graphics performance Ziad Asghar Snapdragon Summit 2022

(Image credit: Future / Alex Walker-Todd)

What's the hold up with Windows 12?

There are multiple speculated reasons why Microsoft is currently sticking with Windows 11 instead of moving on. One such suggestion is that Microsoft is reluctant to split its PC user base even further with a third major Windows version on the market. Its user base is already somewhat split, with many users preferring to stick with Windows 10, which reportedly outnumbers Windows 11 in users by more than two to one.

Windows Central also offers multiple (very reasonable) explanations. First off, Windows' and Surface's former leader, Panos Panay, has departed Microsoft. Panay had headed up the Surface team since its inception and had led the development of Windows since 2020.

It’s a major change-up for Microsoft internally, and along with Windows 10’s continued widespread popularity, the company is probably somewhat hesitant to release Windows 12 during this period. Microsoft is planning to end support for Windows 10 in 2025 to have a better chance of consolidating its user base, and it’s probably waiting at least until then to introduce Windows 12. 

Microsoft finally exorcises digital poltergeist that messed with desktop icons and stopped some users from getting Windows 11 23H2

Remember that really strange bug where Windows 11 caused havoc with multi-monitor setups, shuffling desktop icons around, or even moving them onto different screens?

Well, the good news is Microsoft has now fixed this (but the not-so-great news is that it isn’t remedied for Windows 10 users, not yet – we’ll come back to that shortly).

Affected Windows 11 PCs were seeing icons being moved about on the desktop, or seemingly randomly shifted to their other monitors in a highly confusing fashion. Rather like the digital equivalent of a poltergeist possessing your system and causing mischief.

The problem was first spotted in Windows 11 back in November 2023, with the root cause being Copilot – and this led to Microsoft putting a block (a so-called compatibility hold) on rolling out the AI assistant to those with multiple monitors attached to their PC. (And furthermore, there was a block on the Windows 11 23H2 upgrade, for those who hadn’t yet migrated to that version – as it introduces Copilot).

However, all that’s now lifted as the issue has been resolved, as Neowin spotted.

Microsoft updated its known issues list for Windows 11 23H2 (in the release health dashboard) to say that: “This issue was resolved on the service-side for Windows 11, version 23H2 and Windows 11, version 22H2 on devices with updates released January 9, 2024 or later. Non-managed consumer Windows devices with no other compatibility hold should now have Copilot for Windows available. The safeguard hold has been removed as of February 7, 2024.”


Analysis: Ghost in the machine

A service-side tweak means that Microsoft has applied the fix on its end, so there’s no actual update or tinkering that needs to happen with your Windows 11 PC. The fix is just there, and the compatibility hold on Windows 11 23H2 is lifted, so those who’ve been stuck without 23H2 or Copilot should now be able to upgrade just fine.

However, Microsoft observes that it may take up to 48 hours for 23H2 to be offered to your computer. Restarting the PC and manually checking for updates may help to prompt Windows to discover the upgrade.

This is true for Windows 11 PCs previously blocked from 23H2, and also Windows 10 users who wanted to upgrade their device to Windows 11 23H2. The twist here, though, is this icon-flinging bug isn’t actually resolved with Windows 10, if you want to stick with the older OS rather than migrate to Windows 11.

If you recall, Windows 10 users were also affected, and blocked from getting Copilot (when it was subsequently rolled out to them). And sadly, that’s still the case, so those with multiple monitors running Windows 10 still won’t get the AI assistant. With the problem solved in Windows 11, though, presumably it won’t be long before it’s also cured for those staying on Windows 10.

Microsoft updated the Windows 10 release health dashboard to note that: “We are working on a resolution for this issue on Windows 10, version 22H2 and will provide an update in an upcoming release.”

ChatGPT could become a smart personal assistant helping with everything from work to vacation planning

Now that ChatGPT has had a go at composing poetry, writing emails, and coding apps, it's turning its attention to more complex tasks and real-world applications, according to a new report – essentially, being able to do a lot of your computing for you.

This comes from The Information (via Android Authority), which says that ChatGPT developer OpenAI is working on “agent software” that will act almost like a personal assistant. It would be able to carry out clicks and key presses as it works inside applications from web browsers to spreadsheets.

We've seen something similar with the Rabbit R1, although that device hasn't yet shipped. You teach an AI how to calculate a figure in a spreadsheet, or format a document, or edit an image, and then it can do the job for you in the future.

Another type of agent in development will take on online tasks, according to the sources speaking to The Information: These agents are going to be able to research topics for you on the web, or take care of hotel and flight bookings, for example. The idea is to create a “supersmart personal assistant” that anyone can use.

Our AI agent future?

Google is continuing work on its own AI (Image credit: Google)

As the report acknowledges, this will certainly raise one or two concerns about letting automated bots loose on people's personal computers: OpenAI is going to have to do a lot of work to reassure users that its AI agents are safe and secure.

While many of us will be used to deploying macros to automate tasks, or asking Google Assistant or Siri to do something for us, this is another level up. Your boss isn't likely to be too impressed if you blame a miscalculation in the next quarter's financial forecast on the AI agent you hired to do the job.

It also remains to be seen just how much automation people want when it comes to these tasks: Booking vacations involves a lot of decisions, from the position of your seats on an airplane to having breakfast included, which AI would have to make on your behalf.

There's no timescale on any of this, but it sounds like OpenAI is working hard to get its agents ready as soon as possible. Google just announced a major upgrade to its own AI tools, while Apple is planning to reveal its own take on generative AI at some point later this year, quite possibly with iOS 18.

Microsoft is giving two Windows 11 apps nifty extra powers – and one of them is AI-related (surprise, surprise)

Microsoft is trying out some interesting new changes in testing for Windows 11, including bolstering a pair of core apps for the OS – with one of them getting supercharged by AI.

Those two apps are Notepad and Snipping Tool, with new versions rolling out to testers who are in the Dev and Canary channels.

The big one is Notepad which is getting an infusion of AI in the form of an ‘Explain with Copilot’ option. This allows you to select any written content in Notepad and via the right-click menu (or Ctrl + E shortcut), summon Copilot to explain more about the selected text, as you might guess.

As Microsoft notes: “You can ask Copilot in Windows to help explain log files, code segments, or any selected content directly from within Notepad.”

Windows 11 Notepad Copilot Panel

(Image credit: Microsoft)

This feature should be available to all testers in those earlier Windows Insider channels in version 11.2401.25.0 of Notepad, though Microsoft observes that some folks may not see it right away. (This is labeled as a ‘known issue’ so it’s seemingly a bug with the deployment).

What’s going on with Snipping Tool? Well, a previously leaked feature is now present in version 11.2401.32.0 in testing, namely the ability to annotate screenshots with shapes and arrows.

That’s pretty handy for composing screen grabs for the likes of instructional step-by-steps where you’ll be pointing out bits to the person following the guide.

Elsewhere in Windows 11 testing, the Beta channel has a new preview version, but there’s not all that much going on here. Build 22635.3140 does make a small but impactful change, though, for Copilot, moving the icon for the AI in the taskbar to the far right-hand side (into the system tray).

Microsoft observes that it makes more sense for the Copilot button to be on the right of the taskbar, given that the panel for the AI opens on the right, so it’ll be directly above the icon. It’s worth remembering that regarding the Copilot panel, Microsoft just made it larger, apparently as a result of feedback from users of the AI.


Analysis: Cowriter MIA?

Regarding that Beta channel tweak for the Copilot icon, that seems a fair enough adjustment to make. Although that said, rumor has it the next update for Windows 11 – which will be Moment 5 arriving later this month in theory – will allow for the ability to undock the AI so it isn’t anchored to the right side of the desktop. Still, that remains speculation for now, and even then there will be those folks who don’t undock Copilot, anyway.

As mentioned, the big testing move here is the new Notepad ability, and it’s no surprise to see more Windows 11 apps getting AI chops. The integration with Copilot here is on a pretty basic level, mind, compared to previous rumors about a fully-featured Cowriter assistant along the lines of the existing Cocreator in Paint. Still, it’s possible this is an initial move, and that a more in-depth Cowriter function could still turn up in the future at some point.

That said, Notepad is not supposed to be a complex app – the idea is it’s a lightweight and streamlined piece of software – so maybe further AI powers won’t be coming to the client.
