OpenAI’s ChatGPT might soon be thanking you for gold and asking if the narwhal bacons at midnight like a cringey Redditor after the two companies reach a deal

If you've ever commented or posted on Reddit, there's a chance your words will be used as material for training OpenAI's AI models, after the two companies confirmed that they've reached a deal that enables this exchange.

Reddit will be given access to OpenAI's technology to build AI features, and for that (as well as an undisclosed monetary amount), it's giving OpenAI access to Reddit posts in real-time that can be used by tools like ChatGPT to formulate more human-like responses. 

OpenAI will be able to access real-time information from Reddit's Data API, the software interface that enables the retrieval of, and interaction with, content on Reddit's platform, providing OpenAI with structured and unique content. This is similar to an agreement Reddit reached with Google at the beginning of the year, which allows Google to train its own AI models on Reddit's data and is reported to be worth $60 million.
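For the technically curious, here's a rough sketch of what pulling recent posts from Reddit looks like in practice. This is a hedged illustration, not the official Data API client: it uses Reddit's public JSON listings (which return a similarly shaped payload), and the field selection and the `demo/0.1` User-Agent string are assumptions for the example.

```python
import json
from urllib.request import Request, urlopen

def build_listing_url(sub: str, limit: int = 5) -> str:
    """URL for a subreddit's newest posts as JSON."""
    return f"https://www.reddit.com/r/{sub}/new.json?limit={limit}"

def parse_posts(listing: dict) -> list[dict]:
    """Keep only the fields a downstream pipeline might want."""
    return [
        {"title": child["data"]["title"], "subreddit": child["data"]["subreddit"]}
        for child in listing["data"]["children"]
    ]

def fetch_posts(sub: str, limit: int = 5) -> list[dict]:
    """Fetch and parse the newest posts (requires network access)."""
    req = Request(build_listing_url(sub, limit),
                  headers={"User-Agent": "demo/0.1"})  # placeholder UA
    with urlopen(req) as resp:
        return parse_posts(json.load(resp))
```

Whatever the deal's actual plumbing looks like, this is the general shape of "structured content" being handed off: a request, a JSON listing, and a parser that keeps the useful fields.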

According to the official Reddit blog post publicizing the deal, it will help people discover and engage with Reddit's communities thanks to Reddit content being brought to ChatGPT and other new OpenAI products. Through Reddit's APIs, OpenAI's tools will be better able to understand and showcase Reddit content, particularly on recent topics.

Man sitting at a table working on a laptop

(Image credit: Shutterstock/GaudiLab)

Reddit, the company, and Reddit, the community of users

Users and moderators on Reddit will apparently be offered new features thanks to applications powered by OpenAI's large language models (LLMs). OpenAI will also start advertising on Reddit as an ad partner. 

The blog post put out by Reddit also claims that the deal is in the spirit of keeping the internet open, and of fostering the learning and research needed to keep it that way. Reddit says it wants to continue building up its community, recognizing its uniqueness and its role as a place for conversation online, and claims this deal was signed to improve everyone's Reddit experience using AI.

It remains to be seen whether users are convinced of these benefits, but previous changes of this type and scale haven't gone down particularly well. In June 2023, over 7,000 subreddit communities went dark to protest changes to Reddit's API pricing for developers.

It also hasn't explicitly been stated by either company that Reddit data will be used to train OpenAI's models, but I think many people assume this will be the case – or that it’s already happening. In contrast, it was disclosed that Reddit would give Google “more efficient ways to train models,” and then there's the fact that OpenAI founder Sam Altman is himself a Reddit shareholder. This doesn't confirm anything specific and, as reported by The Verge, “This partnership was led by OpenAI’s COO and approved by its independent Board of Directors.”

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event

(Image credit: JASON REDMOND/AFP via Getty Images)

Official statements expressing the benefits of the partnership

Speaking about the partnership and as quoted in the blog post, representatives from both companies said: 

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

– Steve Huffman, Reddit Co-Founder and CEO

“We are thrilled to partner with Reddit to enhance ChatGPT with uniquely timely and relevant information, and to explore the possibilities to enrich the Reddit experience with AI-powered features.”

– Brad Lightcap, OpenAI COO

They're not wrong: many people append the word “Reddit” to their search queries, as Reddit threads will often provide information directly relevant to what they're searching for.

It's an interesting development, and OpenAI's sourcing of information – both in terms of accuracy and where its training data comes from – has been the main topic of discussion around the ethics of its practices for some time. I suppose at least this way, Reddit users are being made aware that their information can be used by OpenAI – even if they don’t really have a choice in the matter.

The announcement blog post reassures users that Reddit believes that “privacy is a right,” and that it has published a Public Content Policy that gives more detail about Reddit's approach to accessing public content and user protections. We'll have to see if this will be upheld as time goes on, and what the partnership looks like in practice, but I hope both companies will take users' concerns seriously. 

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

Ads in Windows 11 are becoming the new normal and look like they’re headed for your Settings home page

Microsoft looks like it’s forging ahead with its mission to put more ads in parts of the Windows 11 interface, with the latest move being an advert introduced to the Settings home page.

Windows Latest noticed that the ad, which is for Xbox Game Pass, is part of the latest preview release of the OS in the Dev channel (build 26120). For the uninitiated, Game Pass is Microsoft’s subscription service that grants you access to a host of games for a monthly or yearly fee.

Not every tester will see this advert, though, at least for now, as it’s only rolling out to those who have chosen the option to ‘Get the latest updates as soon as they're available’ (and that’s true of the other features delivered by this preview build). Also, the ad only appears for those signed into a Microsoft account.

Furthermore, Microsoft explains in a blog post introducing the build that the advert for the Xbox Game Pass will only appear to Windows 11 users who “actively play games” on their PC. The other changes provided by this fresh preview release are useful, too, including fixes for multiple known issues, some of which are related to performance hiccups with the Settings app. 
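Pieced together, the reported conditions for actually seeing the ad amount to a simple conjunction. As a hypothetical sketch (the flag names are mine, a simplification of the reporting, not Microsoft's actual targeting logic):

```python
def sees_game_pass_ad(on_dev_build_26120: bool,
                      gets_latest_updates: bool,
                      signed_in_with_ms_account: bool,
                      actively_plays_games: bool) -> bool:
    """All four reported conditions must hold at once for the
    Settings-page Game Pass ad to appear (per the coverage above)."""
    return (on_dev_build_26120 and gets_latest_updates
            and signed_in_with_ms_account and actively_plays_games)
```

In other words, a fairly narrow slice of testers will see it for now; drop any one condition and the ad reportedly doesn't show.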

A close up of a keyboard and a woman gaming at a PC in neon lighting

(Image credit: Shutterstock/Standret)

Pushing too far is a definite risk for Microsoft

While I can see this fresh advertising push not playing well with Windows 11 users, Windows Latest did try the new update and reports that it’s a significant improvement on the previous version of 24H2. So that’s good news at least, and the tech site further observes that the build includes a fix for an installation failure bug (stop code error ‘0x8007371B’, apparently).

Windows 11 24H2 is yet to roll out officially for all users, but it’s expected to be the pre-installed operating system on the new Snapdragon X Elite PCs that are scheduled to be shipped in June 2024. A rollout to all users on existing Windows 11 devices will happen several months later, perhaps in September or October. 

I’m not the biggest fan of Microsoft’s strategy regarding promoting its own services – and indeed outright ads as is the case here – or the firm’s efforts to push people to upgrade from Windows 10 to Windows 11. Unfortunately, come next year, Windows 10 users will be facing a choice of migrating to Windows 11, or losing out on security updates when support expires for the older OS (in October 2025). That is, if they can upgrade at all – Windows 11’s hardware requirements make this a difficult task for some older PCs.

I hope for my sake personally, and for all Windows 11 users, that Microsoft considers showing that it values us all by not subjecting us to more and more adverts creeping into different parts of the operating system.


Signed, sealed, delivered, summarized: new Gemini-powered AI feature for Gmail looks like it’s close to launch

A summarize feature powered by Gemini, Google’s recently debuted generative AI model and digital assistant, is coming to the Gmail app for Android – and it could make reading and understanding emails much faster and easier. The feature is expected to roll out soon to all users, and they’ll be able to provide feedback by rating the quality of the generated email summaries.

The feature has been suspected to be in the works for some time now, as documented by Android Authority, and now it’s apparently close to launch.

One of Android Authority’s code sleuth sources managed to get the feature working on the Gmail app version 2024.04.21.626860299 after a fair amount of tinkering. It’s not disclosed what steps they took, so if you want to replicate this, you’ll have to do some experimenting, but the fact that the summarize feature can be up and running shows that Android Gmail users may not have to wait very long.

A screenshot featured in Android Authority’s report shows a chat window where the user asks Gemini to summarize the email they currently have open, and Gemini obliges. Apparently, this feature will be available via a ‘Summarize this email’ button under an email’s subject line, presumably triggering the above prompt and returning a compact summary of the email. This could prove especially helpful when dealing with a large number of emails, or with particularly long emails packed with details.

Once the summary is provided, users will be shown thumbs up and thumbs down buttons under Gemini’s output, similar to OpenAI’s ChatGPT after it gives its reply to a user’s query. This will give Google a better understanding of how helpful the feature is to users and how it could be improved. There will also be a button that allows you to copy the email summary to your clipboard, according to the screenshot. 
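Mechanically, that kind of feedback loop is just a tally of ratings. A minimal sketch, assuming a simple 'up'/'down' rating scheme rather than whatever Google actually records:

```python
from collections import Counter

def feedback_summary(ratings: list) -> dict:
    """Tally 'up'/'down' ratings into counts and a helpfulness rate.

    The rating labels and the derived metric are assumptions for
    illustration, not Google's actual telemetry schema.
    """
    counts = Counter(ratings)
    up, down = counts["up"], counts["down"]
    rated = up + down
    return {"up": up, "down": down,
            "helpful_rate": round(up / rated, 3) if rated else None}
```

Even something this simple would let Google spot at a glance whether a summarization model update helps or hurts.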

A man working at an office and looking at his screen while using Gmail

(Image credit: Shutterstock/fizkes)

When to expect the new feature

The speculation is that the feature could be rolled out during Google’s I/O 2024 event, its annual developer conference, which is scheduled for May 14, 2024. Google is also expected to show off the Pixel 8A, the next iteration of its Pixel A series, along with its progress in augmented reality (AR) technology and new software and service developments, especially for its devices and ChromeOS (the operating system that powers the best Chromebooks).

Many Gmail users could find this new summarize feature a time-saver that streamlines their email, but as with any generative AI, there are concerns about the accuracy of the generated text. If Gemini omits or misinterprets important information, it could lead to oversights or misunderstandings. I’m glad that Google has the feedback system in place, as this will show whether the feature is actually serving its purpose well. We’ll have to wait and see whether it proves reasonably accurate and actually improves productivity when it’s finally released.


OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel, and it’s pretty trippy, to say the least. Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used with Sora; Kamp didn’t share that information. But she did explain the inspiration behind them in the video’s description. She states that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share that vision. Thanks to Sora, this is no longer an issue, as the footage displays what she had always envisioned. It's “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head” which was also made on the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospects of AI saving time and money on production. 

August Kamp herself is a proponent of the technology stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do; they embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.


Vision Pro spatial Personas are like Apple’s version of the metaverse without the Meta

While the initial hype over Apple Vision Pro may have died down, Apple is still busy developing and rolling out fresh updates, including a new one that lets multiple Personas work and play together.

Apple briefly demonstrated this capability when it introduced the Vision Pro and gave me my first test drive last year, but now spatial Personas are live on Vision Pro mixed-reality headsets.

To understand “spatial Personas” you need to start with the Personas part. You capture these somewhat uncanny-valley 3D representations of yourself using the Vision Pro's spatial (or 3D) cameras. The headset uses that data to skin a 3D representation of you that can mimic your face, head, upper torso, and hand movements, and that can be used in FaceTime and other video calls (if supported).

Spatial Personas do two key things: they give you the ability to put two (or more) avatars in one spatially aware space, and they let those avatars interact with either different screens or the same one. This all still happens within the confines of a FaceTime call, where Vision Pro users will see a new “spatial Persona” button.

To enable this feature, you'll need the visionOS 1.1 update, and you may need to reboot the mixed-reality headset. After that, you can tap the spatial icon at any time during a FaceTime Persona call to enable the feature.

Almost together

Apple Vision Pro spatial Personas

(Image credit: Apple)

Spatial Personas support collaborative work and communal viewing experiences by combining the feature with Apple's SharePlay. 

This will let you “sit side-by-side” (Personas don't have butts, legs, or feet, so “sitting” is an assumed experience) to watch the same movie or TV show. In an Environment (you spin the Vision Pro's Digital Crown until your real world disappears in favor of a selected environment, like Yosemite), you can also play multiplayer games. Most Vision Pro owners might choose “Game Room”, which positions the spatial avatars around a game table. A spatial Persona call can become a real group activity, with up to five spatial Personas participating at once.

Vision Pro also supports spatial audio, which means the audio for the Persona on your right will sound like it's coming from the right. Working in this fashion could end up feeling like everyone is in the room with you, even though they're obviously not.
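That left-versus-right effect can be approximated with a standard constant-power pan, where each channel's gain depends on the source's angle. A simplified sketch for intuition only; Apple's actual spatial audio uses far more sophisticated head-tracked rendering:

```python
import math

def pan_gains(azimuth_deg: float) -> tuple:
    """Constant-power stereo pan.

    azimuth_deg: -90 (hard left) .. +90 (hard right), 0 = center.
    Returns (left_gain, right_gain); the two channels' energy
    (left^2 + right^2) sums to 1 at every angle.
    """
    theta = (azimuth_deg + 90.0) / 180.0 * (math.pi / 2.0)
    return math.cos(theta), math.sin(theta)
```

A Persona sitting to your right gets a larger right-channel gain, which is the basic cue that makes a voice sound like it comes from that side.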

Currently, any app that supports SharePlay can work with spatial Personas, but not every app allows single-screen collaboration. If you use window share or share the app, other Personas will be able to see, but not interact with, your app window.

Being there

Apple Vision Pro spatial Personas

Freeform lets multiple Vision Pro spatial Personas work on the same app. (Image credit: Apple)

While your spatial Personas will appear in other people's spaces during the FaceTime call, you'll remain in control of your viewing experience and can still move your windows and Persona to suit your needs, while not messing up what people see in the shared experience.

In a video Apple shared, two spatial Personas are positioned on either side of a Freeform app window, which is, in and of itself, somewhat remarkable. But things take a surprising turn when each of them reaches out with their Persona hands to control the app with gestures. That feels like a game-changer to me.

In some ways, this seems like a much more limited form of Meta CEO Mark Zuckerberg's metaverse ideal, in which we live, work, and play together in virtual reality. In this case, we collaborate and play in mixed reality while using still somewhat uncanny-valley avatars. To be fair, Apple has already vastly improved the look of these things; they're still a bit jarring, but less so than when I first set mine up in February.

I haven't had a chance to try the new feature, but seeing those two floating Personas reaching out and controlling an app floating in a single Vision Pro space is impressive. It's also a reminder that it's still early days for Vision Pro and Apple's vision of our spatial computing future. When it comes to utility, the pricey hardware clearly has quite a bit of road ahead of it.


Google Gemini AI looks like it’s coming to Android tablets and could coexist with Google Assistant (for now)

Google’s new generative AI model, Gemini, is coming to Android tablets. Gemini AI has been observed running on a Google Pixel Tablet, confirming that Gemini can exist on a device alongside Google Assistant… for the time being, at least. Currently, Google Gemini is available to run on Android phones, and it’s expected that it will eventually replace Google Assistant, Google’s current virtual assistant that’s used for voice commands.

When Gemini is installed on an Android phone, users are prompted to choose between using Gemini and Google Assistant. It’s unknown whether this restriction will apply to tablets when Gemini finally arrives for them – though at the moment it appears not.

Man sitting at a table working on a laptop

(Image credit: Shutterstock/GaudiLab)

A discovery in Google Search's code

The news was brought to us via 9to5Google, which did an in-depth report on the latest beta version (15.12) of the Google Search app in the Google Play Store and discovered that it contains code referring to using Gemini AI on a “tablet,” along with the features it would offer.

The code also shows that the Google app itself will host Gemini AI on tablets, instead of the standalone app that currently exists for Android phones. Google might still be planning a separate Gemini app for tablets and possibly other devices, especially if its plans to phase out Google Assistant remain in place.

9to5Google also warns that, as this is still a beta version of the Google Search app, Google could change its mind and not roll out these features.

A woman using an Android phone.

(Image credit: Shutterstock/brizmaker)

Where does Google Assistant stand?

When 9to5Google activated Gemini on a Pixel Tablet, it found that Google Assistant and Gemini would function simultaneously: when both were installed and activated and the voice command “Hey Google” was used, Google Assistant was brought up instead of Gemini. Gemini for Android tablets is yet to be finalized, so Google might still implement a restriction similar to the one on phones that prevents Gemini and Google Assistant from running at the same time.

This in turn contradicted screenshots of the setup screen showing that Gemini will take precedence over Google Assistant if users choose to use it.

The two digital assistants don’t yet have the same features, and we know that the Pixel Tablet was designed to act as a smart display that uses Google Assistant when docked. Because Google Assistant steps in when someone asks Gemini to do something it’s unable to do, we may see the two assistants running in parallel for the time being, until Gemini gains all of Google Assistant's capabilities, such as smart home features.

Meanwhile, Android Authority reports that the Gemini experience on the Pixel Tablet is akin to that on the Pixel Fold, and predicts that Google’s tablets will be the first Android tablets to gain Gemini capabilities. This makes sense, as Google may want to use Gemini exclusivity to encourage more people to buy Pixel tablets in the future. The Android tablet market is a highly competitive one, and advanced AI capabilities may help Pixel tablets stand out.


A cheaper Meta Quest 3 might be coming, but trust me, it won’t look like the leaks

Over the past few days we’ve been treated to two separate Meta Quest 3 leaks – or more accurately, leaks for a new cheaper Quest 3 that’s either called the Meta Quest 3s or Meta Quest 3 Lite, depending on who you believe.

But while the phrase ‘where there's smoke there's fire’ can often ring true in the world of tech leaks, I’m having a really tough time buying what I’ve seen so far from the two designs.

Going in chronological order: the first to drop was a Meta Quest 3 Lite render shared by @ZGFTECH on Twitter.


It looks an awful lot like an Oculus Quest 2 with its slightly bulkier design – perhaps because it seems to use the 2’s fresnel lens system instead of a slimmer pancake lens system like the Quest 3 – but with more rounded edges to match its namesake. 

Interestingly, it also lacks any kind of RGB cameras or a depth sensor – which for me is a massive red flag. Mixed reality is the main focus for XR hardware and software right now, so of all the downgrades to make for the Lite, removing full-color MR passthrough seems the most absurd. It’d be much more likely for Meta to give the Quest 3 Lite a worse display or chipset.

@ZGFTECH did later clarify that they aren’t saying the Quest 3 Lite lacks RGB cameras, just that their renders exclude them because they can’t reveal more “at the moment.” Though as I said before, I expect mixed reality would be a key Quest 3 Lite feature, so I’m more than a little surprised this detail is shrouded in mystery.

Then there’s the Meta Quest 3s leak. The original Reddit post has since been deleted, but copies like this Twitter post remain online.


Just like the Meta Quest 3 Lite leaked design, this bulkier headset suggests a return to fresnel lenses. Although unlike the previous model, we see some possible RGB cameras and sensors on the front face panel. On top of this, we also get some more details about specs – chiefly that the cheaper Quest 3 could boast dual 1,832 x 1,920 pixel displays.

But while the design seems a little more likely (if a little too ugly), the leak itself is setting off my BS detectors. The first issue is that the shared images include elements of a Zoom call that might make it quite easy to determine who the leaker is. To see these early designs, the leaker likely had to sign an NDA carrying some kind of financial penalty for sharing the info; unless they have zero care for their financial well-being, I would’ve expected them to be a lot more careful about what they do and don’t share, lest they face the wrath of Meta’s well-funded legal team.

On top of this, some of the promotional assets seem a little off. Some feature the original Quest 3 rather than the new design, some of the images don’t seem especially relevant to a VR gadget, and ports and buttons change position and parts change color across the various renders.

As such, I’m more than a little unconvinced that this is a genuine leak.

The Meta Quest 3 and controllers on their charging stand

(Image credit: Meta)

Meta Quest 3 Lite: fact or fiction? 

I guess the follow-up question from my skepticism over these leaks is: is a cheaper Meta Quest 3 even on the way? 

Inherently, the idea isn’t absurd. The Quest 3 may be cheaper than many other VR headsets, but at $499.99 / £479.99 / AU$799.99 it is pricier than the Quest 2 was at launch – $299 / £299 / AU$479 – and that affordable price point is the central reason the Quest 2 sold phenomenally well.

I’ve previously estimated that the Quest 3 is selling slightly slower than its predecessor did at the same point in its lifespan, so Meta may be looking to juice its figures by releasing a cheaper model.

What’s more, while these leaks have details that leave me more than a little skeptical, the fact that we have had two leaks in such a short stretch of time leaves me feeling like there might be some validity to the rumors.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The Quest 3 Lite needs good quality mixed reality (Image credit: Meta)

So while we can't yet say for certain it's coming, I wouldn't be surprised if Meta announced a Quest 3 Lite or S. I'm just not convinced that it’ll look like either of these leaked designs.

For me, the focus would be on having a sleek mixed reality machine – which would require full-color passthrough and pancake rather than fresnel lenses (which we have seen on affordable XR hardware like the Pico 4).

The cost savings would then come from having lower resolution displays, less storage (starting at 64GB), and having a worse chipset or less RAM than we see in the Quest 3. 

We’ll have to wait and see if Meta announces anything officially. I expect we won’t hear anything until either its Meta Quest Gaming Showcase for 2024 – which is due around June – or this year’s Meta Connect event – which usually lands around September or October.


Microsoft is planning to make Copilot behave like a ‘normal’ app in Windows 11

Windows 11 is set for a major change to the Copilot interface, or at least this is something that’s being tried out in testing.

With Windows 11’s preview build 26080 (in both Canary and Dev channels), Microsoft is adding a choice to free Copilot from the shackles that bind the AI assistant to the right-hand side of the screen.

Normally, the Copilot panel appears on the right, and you can’t do anything about that (although Microsoft has been experimenting with the ability to resize it, and other bits and bobs besides).

With this change, you can now undock Copilot, so the AI is in a normal app window, which can be moved wherever you want on the desktop, and resized appropriately. In other words, you’re getting a lot more versatility regarding where you want Copilot to appear.

Also in this preview build, more users are getting Copilot’s new abilities to alter Windows 11 settings. That functionality was already introduced to Canary testers, but is now rolling out to more of those folks, and Windows Insiders in the Dev channel too.

The extra capabilities include getting the AI assistant to empty the Recycle Bin, or turn on Live Captions, or Voice Access (there are a fair few new options on the accessibility front, in fact).


Analysis: Under the hood tinkering, too

Not all testers in the mentioned channels will see the ability to fully free Copilot and let the AI roam the desktop for a while yet, mind. Microsoft says it’s just starting the rollout – and it’ll only be for those in the Canary channel initially. A broader rollout will follow, with Microsoft asking for feedback as it goes, and adjusting things based on what it hears from Windows 11 testers, no doubt.

There are also some ‘under-the-hood improvements’ coming for Copilot as well, as mentioned in the blog post, but mysteriously, Microsoft doesn’t say what. We can only guess that this might be performance related, as that seems the most obvious way that tinkering in the background could improve things with Copilot. (Perhaps it’s to do with ensuring the smooth movement of the undocked panel for the AI, even).


Get ready to learn about what Windows 11 of the future looks like at Microsoft’s March 21 event

We’ve begun getting hints of what Microsoft is gearing up to announce for Windows 11 at its March event, and now we’ve got new pieces of the puzzle. We’re expecting information about a new feature for the Paint app, Paint NPU, and about a feature that’s being referred to as ‘AI Explorer’ internally at Microsoft. 

Microsoft has put up an official page announcing a special digital event named “New Era of Work” which will take place on March 21, starting at 9 PM PDT. On this page, users are met with the tagline “Advancing the new era of work with Copilot” and a description of the event that encourages users to “Tune in here for the latest in scaling AI in your environment with Copilot, Windows, and Surface.”

It sounds like we’re going to get an idea of what the next iteration of Windows Copilot, Microsoft’s new flagship digital AI assistant, will look like and what it’ll be able to do. It also looks like we might see Microsoft’s vision for what AI integration and features will look like for future versions of Windows and Surface products. 

A screenshot of the page announcing Microsoft's digital event.

(Image credit: Microsoft)

What we already know and expect

While we’ll have to wait until the event to see exactly what Microsoft wants to tell us about, we do have some speculation from Windows Latest that one feature we’ll learn about is a Paint app tool powered by new-gen machines’ NPUs (Neural Processing Units). These are processing components designed to accelerate AI workloads and other machine learning tasks.

This follows earlier reports that indicated that the Paint app was getting an NPU-driven feature, possibly new image editing and rendering tools that make use of PCs’ NPUs. Another possible feature that Windows Latest spotted was “LiveCanvas,” which may enable users to draw real-time sketches aided by AI.

Earlier this week, we also reported on a new ‘AI Explorer’ feature, apparently currently in testing at Microsoft. This revamped version, which has been described as an “advanced Copilot,” looks like it could be similar to the Windows Timeline feature, but improved by AI. The present version of Windows Copilot requires an internet connection, but rumors suggest that this could change.

This is what we currently understand about how the feature will work: it will record actions users perform, transform them into ‘searchable moments,’ and allow users to search through or remove these records. Windows Latest also reinforces the news that most existing PCs running Windows 11 won’t be able to use AI Explorer, as it’s designed to use the newest available NPUs, which are intended to handle and assist with higher-level computation tasks. The NPU would enable AI Explorer to work natively on Windows 11 devices, and users will be able to interact with it using natural language.

Using natural language means that users can ask AI Explorer to carry out tasks simply and easily, accessing past conversations, files, and folders with plain commands, and they’ll be able to do this with most Windows features and apps. AI Explorer will be able to search user history and find information relevant to whatever subject or topic is in the user’s request. We don’t know whether it’ll pull this information exclusively from user data or from other sources like the internet as well; we hope this will be clarified on March 21. 

Person working on laptop in kitchen

(Image credit: Getty Images)

What else we might see and what this might mean

In addition to an NPU-powered Paint app feature and AI Explorer, it looks like we can expect the debut of other AI-powered features, including Automatic Super Resolution. This has popped up in Windows 11 24H2 preview builds, and it’s said to leverage PCs’ AI capabilities to improve users’ visual experience. This will reportedly be done using DirectML, an API that also makes use of PCs’ NPUs, and should bring frame-rate improvements in games and apps.

March 21 is shaping up to deliver what promises to be an exciting presentation, although it’s worth remembering that all of these new features will require an NPU. Only the newest Windows devices come equipped with these, which will leave the overwhelming majority of Windows devices and users in the dust. My guess is that Microsoft is banking on the appeal of these new AI-driven features to convince users to upgrade to new hardware, and given the current state of apps and services like Windows Copilot, that appeal has yet to be proven in practice.

TechRadar – All the latest technology news

Microsoft’s Copilot AI can now read your files directly, but it’s not the privacy nightmare it sounds like

Microsoft has begun rolling out a new feature for its Copilot AI assistant in Windows that will allow the bot to directly read files on your PC, then provide a summary, locate specific data, or search the internet for additional information. 

Copilot has already been aggressively integrated into Microsoft 365 and Windows 11 as a whole, and this latest feature sounds – at least on paper – like a serious privacy issue. After all, who would want an AI peeking at all their files and uploading that information directly to Microsoft?

Well, fortunately, Copilot isn’t just going to be snooping around at random. As spotted by @Leopeva64 on X (formerly Twitter), you have to manually drag and drop the file into the Copilot chat box (or select the ‘Add a file’ option). Once the file is in place, you can proceed to make a request of the AI; the suggestion provided by Leopeva64 is simply ‘summarize’, which Copilot proceeds to do.

Another step towards Copilot being genuinely useful

I’ll admit it, I’m a Copilot critic. Perhaps it’s just because I’m a jaded career journalist with a lifetime of tech know-how and a neurodivergent tilt towards unhealthy perfectionism, but I’ve never seen the value of an AI assistant built into my operating system of choice; however, this is the sort of Copilot feature I actually might use.

The option to summarize alone seems quite useful: more than once, I’ve been handed a chunky PDF with embargoed details about a new tech product, and it would be rather nice not to have to sift through pages and pages of dense legalese and tech jargon just to find the scraps of information that are actually relevant to TechRadar’s readership. Summarizing documents is already something that ChatGPT and Adobe Acrobat AI can do, so it makes sense for Copilot – an AI tool that's specifically positioned as an on-system helper – to be able to do it.

While I personally prefer to be the master of my own Googling, I can see the web-search capabilities being very helpful to a lot of users, too. If you’ve got a file containing partial information, asking Copilot to ‘fill in the blanks’ could save you a lot of time. Copilot appears capable of reading a variety of different file types, from simple text documents to PDFs and spreadsheets. Given the flexible nature of modern AI chatbots, there are potentially many different things you could ask Copilot to do with your files – though apparently, it isn’t able to scan files for viruses (at least, not yet).

If you’re keen to get your hands on this feature yourself, you hopefully won’t have to wait long. While it doesn’t seem to be widely available just yet, Leopeva64 notes that Copilot’s latest skill “is being rolled out gradually”, so it’ll likely start showing up for more Windows 11 users as time goes on.

The Edge version of Copilot will apparently be getting this feature too, as Leopeva points out that it’s currently available in the Canary prototype build of the browser – if you want to check that out, you just have to sign up for the Edge Insider Program.
