You might be waiting a while yet for Wi-Fi 7 support in Windows 11 – but Microsoft is on the case

Windows 11 is now adding support for Wi-Fi 7 – news that those who want to use the much-improved wireless standard will doubtless be pleased to hear – but it’s only in testing currently.

That’s despite the fact that there are already Wi-Fi 7 routers out there, and the standard has been officially finalized by the Wi-Fi Alliance (the Wi-Fi Certified 7 program was announced at the start of January 2024, in fact).

As you might guess, it’ll be some time before official Wi-Fi 7 support comes through to the release version of Windows 11, as it’ll need to progress through testing channels first.

Right now, support is limited to the early test channels: it arrived with build 26063 in the Canary (earliest) channel, a preview release that flew under our radar somewhat, but an important one in this respect. It has also been added for Dev channel testers, Microsoft informed us in the usual blog post on build 26063 (as flagged up by XDA Developers).

Wi-Fi 7 in Windows 11

(Image credit: Microsoft)

As the software giant also pointed out, Wi-Fi 7 (aka 802.11be) is in the order of 4x faster than Wi-Fi 6 and more like 6x quicker than Wi-Fi 5.

If you want to know more about how this new wireless standard takes some big strides forward – and it isn’t just about raw speed, though that is, of course, very important – check out our guide to the ins-and-outs of Wi-Fi 7.


Analysis: Wireless party

In fairness to Microsoft, while it appears to be pretty late to the wireless party – and Wi-Fi 7 may have officially kicked off, at least in some countries, the US, Australia, and UK included – it's still early days for the standard.

The standard may be effectively set in stone now, but that doesn’t mean there won’t be tweaks going forward. There will inevitably be firmware updates for existing Wi-Fi 7 routers to fix or modify things going forward as needed, although all the big cogs in terms of features are now in place.

Windows 11 is one of the final pieces of the puzzle to be slotted in, then, at least for laptops that sport Wi-Fi 7 hardware. And of course, as mentioned, you’ll need a Wi-Fi 7 router to benefit from faster wireless speeds. (Those devices are expensive right now, too, it should be noted – though that’s generally true of any cutting-edge tech.)

With Wi-Fi 7 we’re getting performance that makes wireless online gaming a realistic prospect, coming close to wired (Ethernet) performance, and certainly much better than other fudges for PCs that aren’t plugged directly into the router (such as powerline adapters, which can be notoriously flaky in some scenarios).

What about Windows 10 support for Wi-Fi 7? We’re still not sure on that score, although the last we heard was that it is inbound – but there’s no sign of that yet.


I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now at youai.ai, is a visual studio for building AI workflows, assistants, and AI chatbots. In its short lifespan it's already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them “apps”, and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock the powerful GPT-3.5 into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach. 

He likened GPTs to “bookmarking a prompt” within the GPT sphere. MindStudio, on the other hand, is generative model-agnostic. The system lets you use multiple models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers. 

Watch and learn

MindStudio

Choose your template. (Image credit: MindStudio)

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on them to customize them, add details, and choose which AI model you want to use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't necessarily have to use a particular model for each task in your app, but it might be that, for example, GPT-3.5 is the better fit for fast chatbots while PaLM would be better for math; however, MindStudio cannot, at least yet, recommend which model to use and when.

Connect the boxes (Image credit: MindStudio)

And then edit their contents (Image credit: MindStudio)

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files for a single app). MindStudio uses the information to inform the AI, but it won't cut and paste information from any of those pages into your app's responses.

Most of MindStudio's clients are businesses, and it does hide some more powerful features (embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

MindStudio

Options include setting your model’s ‘temperature’ (Image credit: MindStudio)

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
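
If you're curious what those sliders roughly correspond to under the hood, here's a minimal sketch using the OpenAI Python client as an illustrative stand-in – MindStudio's own internals aren't public, so the parameter names below are the API's, not MindStudio's, and the values are just examples.

# Illustrative only: sliders like 'Temperature' and response size broadly map
# onto parameters like these in the underlying model APIs.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a short intro about mobile photography."}],
    temperature=1.2,  # higher = more random and creative; lower = more predictable
    max_tokens=750,   # APIs cap output in tokens; ~3,000 characters is roughly 750 tokens
)
print(response.choices[0].message.content)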

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes all you get with Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

Look, ma, I made an AI app. (Image credit: MindStudio)

It’s smarter than I am. (Image credit: MindStudio)

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.


What is OpenAI’s Sora? The text-to-video tool explained and when you might be able to use it

ChatGPT maker OpenAI has now unveiled Sora, its artificial intelligence engine for converting text prompts into video. Think Dall-E (also developed by OpenAI), but for movies rather than static images.

It's still very early days for Sora, but the AI model is already generating a lot of buzz on social media, with multiple clips doing the rounds – clips that look as if they've been put together by a team of actors and filmmakers.

Here we'll explain everything you need to know about OpenAI Sora: what it's capable of, how it works, and when you might be able to use it yourself. The era of AI text-prompt filmmaking has now arrived.

OpenAI Sora release date and price

In February 2024, OpenAI Sora was made available to “red teamers” – that's people whose job it is to test the security and stability of a product. OpenAI has also now invited a select number of visual artists, designers, and movie makers to test out the video generation capabilities and provide feedback.

“We're sharing our research progress early to start working with and getting feedback from people outside of OpenAI and to give the public a sense of what AI capabilities are on the horizon,” says OpenAI.

In other words, the rest of us can't use it yet. For the time being there's no indication as to when Sora might become available to the wider public, or how much we'll have to pay to access it. 

Two dogs on a mountain podcasting

(Image credit: OpenAI)

We can make some rough guesses about timescale based on what happened with ChatGPT. Before that AI chatbot was released to the public in November 2022, it was preceded by InstructGPT earlier that year. Also, OpenAI's DevDay typically takes place annually in November.

It's certainly possible, then, that Sora could follow a similar pattern and launch to the public at a similar time in 2024. But this is currently just speculation and we'll update this page as soon as we get any clearer indication about a Sora release date.

As for price, we similarly don't have any hints of how much Sora might cost. As a guide, ChatGPT Plus – which offers access to the newest Large Language Models (LLMs) and Dall-E – currently costs $20 (about £16 / AU$30) per month.

But generating video with Sora also demands significantly more compute power than, for example, generating a single image with Dall-E, and the process takes longer. So it still isn't clear exactly how well Sora, which at this stage is effectively still a research project, might convert into an affordable consumer product.

What is OpenAI Sora?

You may well be familiar with generative AI models – such as Google Gemini for text and Dall-E for images – which can produce new content based on vast amounts of training data. If you ask ChatGPT to write you a poem, for example, what you get back will be based on lots and lots of poems that the AI has already absorbed and analyzed.

OpenAI Sora is a similar idea, but for video clips. You give it a text prompt, like “woman walking down a city street at night” or “car driving through a forest” and you get back a video. As with AI image models, you can get very specific when it comes to saying what should be included in the clip and the style of the footage you want to see.


To get a better idea of how this works, check out some of the example videos posted by OpenAI CEO Sam Altman – not long after Sora was unveiled to the world, Altman responded to prompts put forward on social media, returning videos based on text like “a wizard wearing a pointed hat and a blue robe with white stars casting a spell that shoots lightning from his hand and holding an old tome in his other hand”.

How does OpenAI Sora work?

On a simplified level, the technology behind Sora is the same technology that lets you search for pictures of a dog or a cat on the web. Show an AI enough photos of a dog or cat, and it'll be able to spot the same patterns in new images; in the same way, if you train an AI on a million videos of a sunset or a waterfall, it'll be able to generate its own.

Of course there's a lot of complexity underneath that, and OpenAI has provided a deep dive into how its AI model works. It's trained on “internet-scale data” to know what realistic videos look like, first analyzing the clips to know what it's looking at, then learning how to produce its own versions when asked.

So, ask Sora to produce a clip of a fish tank, and it'll come back with an approximation based on all the fish tank videos it's seen. It makes use of what are known as visual patches, smaller building blocks that help the AI to understand what should go where and how different elements of a video should interact and progress, frame by frame.

OpenAI Sora

Sora starts messier, then gets tidier (Image credit: OpenAI)

Sora is based on a diffusion model, where the AI starts with a 'noisy' response and then works towards a 'clean' output through a series of feedback loops and prediction calculations. You can see this in the frames above, where a video of a dog playing in the snow turns from nonsensical blobs into something that actually looks realistic.
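
To give a loose sense of the idea – and this is a toy sketch under heavy simplification, not OpenAI's actual approach or code – a diffusion model starts from pure noise and repeatedly removes the noise it predicts, step by step:

# Toy sketch of the diffusion idea: begin with random noise and repeatedly
# subtract predicted noise. A real video model like Sora is vastly more
# complex; here we "cheat" by computing the noise directly, just to show
# the shape of the denoising loop.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random((8, 8))       # stand-in for a "clean" frame the model has learned
frame = rng.normal(size=(8, 8))   # start from pure noise

def predicted_noise(noisy, clean):
    return noisy - clean          # a trained model would *predict* this

steps = 50
for t in range(steps):
    frame = frame - (1.0 / (steps - t)) * predicted_noise(frame, target)

print("remaining error:", float(np.abs(frame - target).mean()))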

And like other generative AI models, Sora uses transformer technology (the last T in ChatGPT stands for Transformer). Transformers use a variety of sophisticated data analysis techniques to process heaps of data – they can understand the most important and least important parts of what's being analyzed, and figure out the surrounding context and relationships between these data chunks.
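
Again as a rough illustration rather than anything resembling Sora's implementation, the core of a transformer is the attention step, which re-weights each chunk of data by how relevant every other chunk is to it:

# Toy scaled dot-product self-attention over a handful of "patches".
# Real transformers stack many such layers with learned projections.
import numpy as np

def attention(Q, K, V):
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # relevance of each key to each query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                                        # context-aware mix of values

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 8))        # 4 patches, 8 features each
print(attention(x, x, x).shape)    # (4, 8): each patch now reflects its context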

What we don't fully know is where OpenAI found its training data from – it hasn't said which video libraries have been used to power Sora, though we do know it has partnerships with content databases such as Shutterstock. In some cases, you can see the similarities between the training data and the output Sora is producing.

What can you do with OpenAI Sora?

At the moment, Sora is capable of producing HD videos of up to a minute, without any sound attached, from text prompts. If you want to see some examples of what's possible, we've put together a list of 11 mind-blowing Sora shorts for you to take a look at – including fluffy Pixar-style animated characters and astronauts with knitted helmets.

“Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt,” says OpenAI, but that's not all. It can also generate videos from still images, fill in missing frames in existing videos, and seamlessly stitch multiple videos together. It can create static images too, or produce endless loops from clips provided to it.

It can even produce simulations of video games such as Minecraft, again based on vast amounts of training data that teach it what a game like Minecraft should look like. We've already seen a demo where Sora is able to control a player in a Minecraft-style environment, while also accurately rendering the surrounding details.

OpenAI does acknowledge some of the limitations of Sora at the moment. The physics don't always make sense, with people disappearing or transforming or blending into other objects. Sora isn't mapping out a scene with individual actors and props; it's making an incredible number of calculations about where pixels should go from frame to frame.

In Sora videos people might move in ways that defy the laws of physics, or details – such as a bite being taken out of a cookie – might not be remembered from one frame to the next. OpenAI is aware of these issues and is working to fix them, and you can check out some of the examples on the OpenAI Sora website to see what we mean.

Despite those bugs, further down the line OpenAI is hoping that Sora could evolve to become a realistic simulator of physical and digital worlds. In the years to come, the Sora tech could be used to generate imaginary virtual worlds for us to explore, or enable us to fully explore real places that are replicated in AI.

How can you use OpenAI Sora?

At the moment, you can't get into Sora without an invite: it seems as though OpenAI is picking out individual creators and testers to help get its video-generating AI model ready for a full public release. How long this preview period is going to last, whether it's months or years, remains to be seen – but OpenAI has previously shown a willingness to move as fast as possible when it comes to its AI projects.

Based on the existing technologies that OpenAI has made public – Dall-E and ChatGPT – it seems likely that Sora will initially be available as a web app. Since its launch ChatGPT has got smarter and added new features, including custom bots, and it's likely that Sora will follow the same path when it launches in full.

Before that happens, OpenAI says it wants to put some safety guardrails in place: you're not going to be able to generate videos showing extreme violence, sexual content, hateful imagery, or celebrity likenesses. There are also plans to combat misinformation by including metadata in Sora videos that indicates they were generated by AI.


ChatGPT is getting human-like memory and this might be the first big step toward General AI

ChatGPT is becoming more like your most trusted assistant, remembering not just what you've told it about yourself, your interests, and preferences, but applying those memories in future chats. It's a seemingly small change that may make the generative AI appear more human and, perhaps, pave the way for General AI, which is where an AI brain can operate more like the gray matter in your head.

OpenAI announced the limited test in a blog post on Tuesday, explaining that it's testing the ability of ChatGPT (in both the free version and ChatGPT Plus) to remember what you tell it across all chats. 

With this update, ChatGPT can remember casually – just picking up interesting bits along the way, like my preference for peanut butter on cinnamon raisin bagels – or remember what you explicitly tell it to.

The benefit of ChatGPT having a memory is that new conversations no longer start from scratch. A fresh prompt can carry implied context for the AI. A ChatGPT with memory becomes more like a useful assistant who knows how you like your coffee in the morning, or that you never want to schedule meetings before 10 AM.

In practice, OpenAI says that the memory will be applied to future prompts. If you tell ChatGPT that you have a three-year-old who loves giraffes, subsequent birthday card ideation chats might result in card ideas featuring a giraffe.

ChatGPT won't simply parrot back its recollections of your likes and interests, but will instead use that information to work more efficiently for you.

It can remember

Some might find an AI that can remember multiple conversations and use that information to help you a bit off-putting. That's probably why OpenAI is letting people easily opt out of the memories by using the “Temporary Chat” mode, which will seem like you're introducing a bit of amnesia to ChatGPT.

Similar to how you can remove Internet history from your browser, ChatGPT will let you go into settings to remove memories (I like to think of this as targeted brain surgery) or you can conversationally tell ChatGPT to forget something.

For now, this is a test among some free and ChatGPT Plus users, but OpenAI has offered no timeline for when it will roll out ChatGPT memories to all users. I didn't find the feature live in either my free ChatGPT account or my Plus subscription.

OpenAI is also adding Memory capabilities to its new app-like GPTs, which means developers can build the capability into bespoke chatty AIs. Those developers will not be able to access memories stored within the GPT.

Too human?

An AI with long-term memory is a dicier proposition than one that has a transient, at best, recall of previous conversations. There are, naturally, privacy implications. If ChatGPT is randomly memorizing what it considers interesting or relevant bits about you, do you have to worry about your details appearing in someone else's ChatGPT conversations? Probably not. OpenAI promises that memories will be excluded from ChatGPT's training data.

OpenAI adds in its blog, “We're taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details – unless you explicitly ask it to.” That might help but ChatGPT must understand the difference between useful and sensitive info, a line that might not always be clear.

This update could ultimately have significant implications. In prompt-driven conversations, ChatGPT can already seem somewhat human, but its hallucinations and fuzzy memory of, sometimes, even how the conversation started make it clear that more than a few billion neurons still separate us.

Memories, especially information delivered casually back to you throughout ChatGPT conversations, could change that perception. Our relationships with other people are driven in large part by our shared experiences and memories of them. We use them to craft our interactions and discussions. It's how we connect. Surely, we'll end up feeling more connected to a ChatGPT that can remember our distaste of spicy food and our love of all things Rocky Balboa.


iOS 17.4 might give you more options for turning off those FaceTime reactions

The FaceTime video reactions Apple introduced in iOS 17 are kind of cool – fireworks when you show two thumbs up, and so on – but you don't necessarily want them going off on every call. Now it looks as though Apple is about to make the feature less prominent.

As per MacRumors, with the introduction of iOS 17.4 and iPadOS 17.4, third-party video calling apps will be able to turn the reactions off by default. In other words, you won't suddenly find balloons filling the screen on a serious call with your boss.

That “by default” is the crucial bit – at the moment, whenever you fire up FaceTime or another video app for the first time, these reactions will be enabled. You can turn them off (and they will then stay off for that app), but you need to remember to do it.

The move also means third-party developers get more control over the effects that are applied at a system level. As The Verge reports, one telehealth provider has already taken the step of informing users that it has no control over these reactions.

Coming soon

FaceTime reactions

A thumbs down is another reaction you can use (Image credit: Apple)

This extra flexibility is made possible through what's called an API or Application Programming Interface – a way for apps to interact with operating systems. It would mean the iOS or iPadOS setting no longer dictates the setting for every other video app.

The changes have been spotted in the latest beta versions of iOS 17.4 and iPadOS 17.4, though there's no guarantee that they'll stay there when the final version of the software rolls out. As yet it's not clear if the same update will be applied to macOS.

iOS 17.3 was pushed out on January 22, so we shouldn't have too much longer to wait to see its successor. Among the iOS 17.4 features in the pipeline, based on the beta version, we've got game streaming apps and automatic transcripts for your podcasts.

Apple will be hoping that a new version helps to encourage more people to actually install iOS 17 too. Uptake has been slower than it was with iOS 16, with users citing bugs and a lack of new features as reasons not to apply the update.


Microsoft Store now lets you instantly try games without downloading them – and it might mean I finally use it

The Microsoft Store in Windows 11 is about to get a handy new feature that lets you try games without having to download and install them – but will this innovative feature make the unloved app store more popular?

The Microsoft Store has a pretty large library of games on offer, both for sale and to download for free. However, it’s been lacking the ability to preview a game before downloading and installing it. 

That’s about to change for some games, as Microsoft is now giving users the chance to play certain titles instantly, right in the Microsoft Store app in Windows 11 – no installation needed. These “Instant Games” are short, easy-to-play titles that can be enjoyed casually and don’t require a ton of effort to master. They live in the Collections section of the Microsoft Store, which you can find by clicking on the Gaming tab (the tab the app opens to by default) and scrolling to the very bottom. Click Collections and you’ll be greeted with the Microsoft Store’s collections of games.

There’s no collection explicitly labeled Instant Games yet, but the titles should start appearing under a collection named “Play free games with no downloads”. According to Windows Latest, Instant Games will be indicated with an orange lightning logo. That isn’t how the games show up for me, but this could change soon. The feature still seems to be a work in progress: Microsoft Store version 22312.1401.4.0 has an icon in the left-hand vertical menu that should take you straight to the Instant Games collection, but in Microsoft Store version 22312.1401.5.0 (a later build) the icon has been removed.

Person working on laptop in kitchen

(Image credit: Getty Images)

Looking ahead and how you can play Instant Games

Windows Latest states that Microsoft partnered with a number of game developers to make Instant Games a reality, and that there are currently 69 games that users will be able to play instantly within the Microsoft Store app. Also, it looks like Microsoft is planning to expand the Instant Games selection and work with more game developers. It’ll be interesting to see if Microsoft will partner with game makers to create playable Instant Game demos of their games, as this could be a great addition to the Microsoft Store that’ll help users make more informed decisions about what games they purchase and download.

Here’s how you can get Instant Games in your Microsoft Store for yourself (if they don’t show up already): 

1. Update your Microsoft Store app to the latest version. You can do this by going to Library in the Microsoft Store’s left-hand menu, toward the bottom. If your apps don’t update automatically, you can choose which apps to update from here. Also, make sure you’re connected to the internet.

2. Once updated, go to Gaming in your Microsoft Store left-hand menu (towards the top).

3. Scroll all the way down to Collections and click on Collections (the word) to open this section.

4. Choose a game, hover over it and click the game artwork. This will take you to the game’s page and you can choose to either Play Now, or Get to download and install the game. If you click Play Now, this will launch a new window that will allow you to play the game.

A screenshot of an Instant Game, Boing FRVR, in the Microsoft Store

(Image credit: Future)

First impressions of Instant Games

When I tried it, it ran very smoothly, which makes sense as the games consume very few system resources. Perhaps inevitably, all of the games contain ads. Windows Latest suggests that you might encounter a 30-second ad when, for instance, you try to reattempt a level, but you can bypass this by simply going back to the main menu. If you close a game, your progress will be saved and you can pick up where you left off when you reopen the Microsoft Store. Microsoft’s Edge browser offers a similar instant gaming feature in its Sidebar.

They’re a good way to pass a few minutes, but the games I tried became very repetitive and they’re not optimized for full-screen play. They open up in portrait mode and don’t have the most sophisticated graphics. It’s maybe a more symbolic offering on Microsoft’s part, as many similar games can easily be found for mobile on multiple platforms anyway. We’ll have to see if anyone actually plays these games and if this will foster any goodwill among users. If it’s user goodwill that Microsoft wants, there are other user requests it could fulfill, like scaling back its constant prodding of users to install the Edge browser.


Windows 10’s next update might come with a predictable but annoying extra – yet more badgering to upgrade to Windows 11

Some Windows 10 users are apparently being treated (ahem) to a multi-panel pop-up that takes over the whole screen and consists of three pages persuading those with eligible PCs to upgrade to Windows 11.

This kind of long-winded nag – three full screens selling the upgrade to Windows 11 – has been seen before, but it’s now appearing again, as shown by Windows Latest.

The tech site said it stumbled on this sprawling pop-up after installing the optional update (in preview) for January 2024.

The first screen informs the user about the available free upgrade to Windows 11, and suggests allowing it to download in the background (while still using the PC).

As we’ve seen before, there are sneaky tactics with the buttons too – both of the options presented in the center of the screen amount to saying ‘yes’ to the upgrade (the choice is either to get it right now or schedule the upgrade for later). If you want to ‘Keep Windows 10’, that selection is tucked away towards the bottom of the screen.

Clicking to keep the current OS, mind, means you still have to navigate through another two pages, the first of which tells you that the best choice is to switch to Windows 11, and the second of which makes you confirm that you want to stay on Windows 10.

We should note that Windows Latest calls this a four-page pop-up, but that’s not strictly true. There is a fourth panel, but you’ll only see that if you click the ‘See what’s inside’ button to learn more about Windows 11 (which most upgrade avoiders won’t, of course).


Analysis: Stop it already – or at least go more succinct

And that’s the point for the aforementioned upgrade avoiders, really – we all know what Windows 11 is by now, and we know if our PC is eligible for a free upgrade. Mainly because Microsoft has repeatedly told us so with overly lengthy ads for Windows 11 like this one. In fact, we’ve had something like 10 counts of badgering to upgrade our Windows 10 PC (at least), with the last three (or maybe even four) being this multi-panel effort that takes some clicking through.

So, why is Microsoft still doing this, given that this is definitely not new info at this stage of the game? Okay, so we get that Windows 11 is struggling to attract users, so there’s that obvious problem to rectify. But if you’re going to do this sort of thing, Microsoft, we suggest at least coming up with a new, more succinct nag screen to point out the upgrade (if you must).

Given that this pop-up appeared after installing the latest preview update in testing, it’s quite possible that Windows 10 users will experience this after installing the February cumulative update, which rolls out a week today (and is the finished version of that preview). So, steel yourself appropriately, and get that mouse index finger in training now in order to facilitate as fast a click-through the panels as you can manage.

That said, it’s not a foregone conclusion this will happen, of course, but these kinds of sprawling pop-ups are appearing fairly regularly anyway on eligible Windows 10 PCs, as noted.


Don’t know what’s good about Copilot Pro? Windows 11 users might soon find out, as Microsoft is testing Copilot ads for the OS

Windows 11 might be getting ads for Copilot Pro – or at least this possibility is being explored in testing right now, it seems.

Copilot Pro, for those who missed it, was recently revealed as Microsoft’s powered-up version of the AI assistant that you have to pay for (via a monthly subscription). And if you haven’t heard about it, well, you might do soon via the Settings panel in Windows 11.

PhantomOfEarth on X (formerly Twitter) spotted the new move from Microsoft, with the introduction of a card for Copilot Pro on the Home page of the Settings app. It provides a brief explanation of what the service is alongside links to find out more (or to get a subscription there and then).


Note that the leaker had to dig around to uncover the Copilot Pro advert, and it was only displayed after messing about with a configuration tool (in Dev and Beta builds). However, two other Windows 11 testers in the Beta channel have responded to say that they have this Copilot Pro card present without doing anything.

In other words, taking those reports at face value, it seems this Copilot Pro ad is on some kind of limited rollout to some testers. At any rate, it’s certainly present in the background of Windows 11 (Beta and Dev) and can be enabled.


Analysis: Adding more ads

The theory, then, is that this will be appearing more broadly to testers, before following with a rollout to everyone using Windows 11. Of course, ideas in testing can be abandoned, particularly if they get criticized a lot, so we’ll just have to watch this space (or rather, the space on the Home page of Settings).

Does it seem likely Microsoft will try to push ahead with a Copilot Pro advert? Yes, it does, frankly. Microsoft isn’t shy about promoting its own services within its products, that’s for sure. Furthermore, AI is set to become a huge part of the Windows 11 experience, and other Microsoft products for that matter, so monetizing it is going to be a priority in all likelihood.

So, a nudge to raise the profile of the paid version of Copilot seems likely, if not inevitable. Better that it’s tucked away in Settings, we guess, than somewhere more in-your-face like the Start menu.

If you’re wondering what benefits Copilot Pro confers, they include faster performance and responses, along with more customization and options – but this shouldn’t take anything away from the free version of Copilot (or it doesn’t yet, anyway). What it does mean is that the very latest upgrades will likely be reserved for the Pro AI, as we’ve seen initially with GPT-4 Turbo coming to Copilot Pro and not the basic free Copilot.

Via Neowin


Your local Apple Store might close early thanks to the Vision Pro launch

As the Apple Vision Pro launch looms on February 2, 2024, it appears Apple is changing its physical Store opening hours to accommodate the new headset’s arrival.

The alteration currently only affects two days, and will only impact some stores, but if you’re planning to head to your local Apple Store in the next couple of weeks we’d suggest checking the official Apple website first to see if its hours have temporarily changed.

As things currently stand, on January 21 all Apple Stores will close at 6pm local time. Some locations are usually open until 7pm on Sundays, so you’ll have an hour less to shop at them. Any Apple Stores that usually close at 6pm on Sundays don’t seem to be affected.

The early closing time is said to give Apple Store employees time to be trained on the new Vision Pro hardware before it goes on sale to the public (via MacRumors).

Lance Ulanoff wearing Apple Vision Pro

Our US editor-in-chief trying out the Vision Pro. (Image credit: Future)

Then on February 2, stores will open from 8am so people can sign up for in-store Vision Pro demos – that’s a whole two hours earlier than Apple Stores usually open. Demos will be assigned on a first-come, first-served basis, so if you want to bag one you’ll need to make sure you arrive early to avoid disappointment.

Apple has said it will be running Vision Pro demonstrations from February 2 through to February 4, though it’s currently unclear whether the other two dates will also see stores open early as well.

If you don’t need to test out the new Apple headset before it launches, then you can preorder the Vision Pro on the official Apple Store page from 5am PT / 8am ET on January 19, 2024. If you’re on the fence about the new headset, you can read our guide on whether you should preorder the Apple Vision Pro.

Also remember that only Apple itself is selling the Vision Pro. Scammers may try and take advantage of the hype – and rumored lack of availability – to sell fake versions of the headset. If you aren’t shopping on Apple’s website or in one of its brick-and-mortar stores then you almost certainly aren’t about to buy a legit Vision Pro headset.


Two Windows 11 apps are being ditched – one you might miss, and another you’ve probably forgotten about

Microsoft is dropping two of the core apps which are installed with Windows 11 by default.

As of Windows 11 preview build 26020 (which has just been unleashed in the Canary channel), the WordPad and People apps have been given the elbow.

Technically, while the People app itself is being dispensed with, that’s because its functionality (or at least much of it) is being transferred to Outlook for Windows, the new default mailbox app for Windows 11 devices (as of the start of 2024).

In short, you’ll still get the People app (contacts) in that mailbox client, but there’ll no longer be an actual People application that can be fired up separately.

WordPad, on the other hand, is being completely dispensed with, or rather it will be when the changes made in this preview build come to the release version of Windows 11.

Going forward from then, any clean installation of Windows 11 won’t have WordPad, and eventually, this app will be removed when users upgrade to a new version of Microsoft’s OS.

You won’t be able to reinstall WordPad once it has gone, either, so this will be a final farewell to the application, which was marked as a deprecated feature back in September 2023.

Also in build 26020, a raft of additions for Voice Access has strengthened Windows 11 on the accessibility front (as seen elsewhere in testing last month). On top of that, Narrator now has natural voices for 10 new locales (in preview), including English (UK) and English (India), as well as Chinese, Spanish (Spain), Spanish (Mexico), Japanese, French, Portuguese, German, and Korean.

Furthermore, when the energy saver feature is enabled on a desktop PC (a machine that’s plugged in, rather than running on battery), a new icon is present in the system tray (far-right of the taskbar) to indicate it’s running and saving you a bit of power.

For the full list of changes, check out Microsoft’s blog post for the build.


Analysis: Word up

One thing to clarify here is not to confuse WordPad with Notepad, or Microsoft Word for that matter.

Word is the heavyweight word processor in Microsoft 365 (the suite formerly known as Office), and not a default app. Both WordPad and Notepad are currently default apps in Windows 11, but Notepad is staying firmly put – indeed Microsoft is busy improving this piece of software (adding an autosave feature most recently).

Notepad remains a useful and highly streamlined, much-liked app for jotting notes and the like, whereas WordPad is kind of a ‘lite’ version of Word, and as such a bit more complex in nature (but not anything like a full-on effort such as Word).

WordPad sort of falls between two stools in that respect, and another reason Microsoft may have decided to drop the app is due to potential security risks (or that was a theory floating around last year, when the software was deprecated).

Even so, there are some folks who will miss WordPad, and with no option to reinstall, they’ll just have to look for a different lightweight word processor for Windows 11 – fortunately, we explore some good alternatives right here.
