Microsoft targets another corner of Windows 11 with – you guessed it – adverts, and we’re getting a bit fed up with this

Microsoft is testing adding a fresh batch of ads to the Windows 11 interface, this time in the Start menu.

Recent digging in preview builds had suggested this move was in the cards, and now those cards have been dealt to testers in the Windows 11 preview Beta channel with a new build (version 22635).

The ads are being placed in the ‘Recommended’ panel of the Start menu, and consist of highlighted apps from the Microsoft Store that you might want to try.

These promoted pieces of software appear with a brief description in the Recommended section, alongside the usual content, such as your commonly used (already installed) apps.

As Microsoft makes clear in the blog post introducing the build, this is only rolling out in the Beta channel, and just in the US. Also, you can turn off the app promotions if you wish.

Testers who want to do so need to open the Settings app, head to Personalization > Start, and switch off the slider for ‘Show recommendations for tips, app promotions, and more.’
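
If you'd rather script that toggle (say, across several machines), the Settings switch is widely reported to be backed by a per-user registry value named Start_IrisRecommendations – Microsoft hasn't documented this, so treat the value name as an assumption. Here's a minimal Python sketch on that basis:

```python
# Minimal sketch (Windows-only). Assumes the reported, undocumented
# Start_IrisRecommendations registry value backs the Settings toggle.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

def set_start_recommendations(enabled: bool) -> None:
    """Flip the 'Show recommendations for tips, app promotions, and more' toggle."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start_IrisRecommendations", 0, winreg.REG_DWORD, int(enabled))

if __name__ == "__main__":
    set_start_recommendations(False)  # 0 = hide promotions; sign out and back in to apply
```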


Analysis: Just trying stuff out…

As mentioned, this idea was already flagged up as hidden in test builds, but now it’s a reality – at least for a limited set of testers in the US. In fact, Microsoft clarifies that it is “beginning to roll this out to a small set of Insiders [testers]” so it sounds like the firm is really being tentative. On top of that, Microsoft writes: “We regularly try out new experiences and concepts that may never get released with Windows Insiders to get feedback.”

In other words – don’t panic – we’re just trying out this concept a little bit. It probably won’t ever happen – move along, there’s nothing to see here. Anyway, you get the idea: Microsoft is very aware it needs to tread carefully here, and rightly so.

Advertising like this, wrapped up as suggestions or recommendations, is becoming all too common a theme with Windows 11. Prompting of one kind or another has been floating around in the recent past, whether it's to encourage folks to sign up for a Microsoft Account, to use OneDrive as part of a backup strategy, or – in another recent example – ads slipped into Outlook. Or indeed recommendations for websites to visit, in much the same vein as the app recommendations in this Beta build.

In this case, the idea appears to be to drive traffic towards the Microsoft Store – which Microsoft has been putting a lot of effort into lately, improving its performance (and the store has come on leaps and bounds in that regard, to be fair).

We don't want to sound like a broken record, but sadly, we're going to. We're of the firm belief that you can monetize a free product with advertising – no one can argue with that – but when a product is already paid for, shoving in ads on top is just not on, particularly with an OS, where you're cluttering the interface.

Microsoft may argue that these recommendations could prove useful, especially if they're targeted to the user – though there could be privacy issues therein, if that's the way this ends up working – but we still don't think it's right to insert adverts into the UI, no doubt turned on by default. Yes, you can turn them off – thankfully – but you shouldn't have to in a paid OS.

It’s up to testers to feed back on this one, and let Microsoft know how they feel.

Meta teases its next big hardware release: its first AR glasses, and we’re excited

Meta’s Reality Labs division – the team behind its VR hardware and software efforts – has turned 10 years old, and to celebrate the company has released a blog post outlining its decade-long history. However, while a trip down memory lane is fun, the most interesting part came right at the end, as Meta teased its next major new hardware release: its first-ever pair of AR glasses.

According to the blog post, these specs would merge the currently distinct product pathways Meta’s Reality Labs has developed – specifically, melding its AR and VR hardware (such as the Meta Quest 3) with the form factor and AI capabilities of its Ray-Ban Meta Smart Glasses to, as Meta puts it, “deliver the best of both worlds.”

Importantly for all you Quest fans out there, Meta adds that its AR glasses wouldn't replace its mixed-reality headsets. Instead, it sees them as the smartphones to the headsets' laptop and desktop computers – suggesting that the glasses will offer solid performance in a sleek form factor, but with less oomph than you'd get from a headset.

Before we get too excited, though, Meta hasn’t said when these AR specs will be released – and unfortunately they might still be a few years away.

When might we see Meta’s AR glasses?

A report from The Verge back in March 2023 shared an apparent Meta Reality Labs roadmap that suggested the company wanted to release a pair of smart glasses with a display in 2025, followed by a pair of 'proper' AR smart glasses in 2027.

We're ready for Meta's next big hardware release (Image credit: Meta)

However, while we may have to wait some time to put these things on our heads, we might get a look at them in the next year or so.

A later report that dropped in February this year, this time via Business Insider, cited unnamed sources who said a pair of true AR glasses would be demoed at this year’s Meta Connect conference. Dubbed 'Orion' by those who claim to be in the know, the specs would combine Meta’s XR (a catchall for VR, AR, and MR) and AI efforts – which is exactly what Meta described in its recent blog post.

As always, we should take rumors with a pinch of salt, but given that this latest teaser came via Meta itself it’s somewhat safe to assume that Meta AR glasses are a matter of when, not if. And boy are we excited.

We want Meta AR glasses, and we want ‘em now 

Currently Meta has two main hardware lines: its VR headsets and its smart glasses. And while it’s rumored to be working on new entries to both – such as a budget Meta Quest 3 Lite, a high-end Meta Quest Pro 2, and the aforementioned third-generation Ray-Ban glasses with a screen – these AR glasses would be its first big new hardware line since it launched the Ray-Ban Stories in 2021.

And the picture Meta has painted of its AR glasses is sublime.

Firstly, while Meta’s current Ray-Ban smart glasses aren’t yet the smartest, a lot of major AI upgrades are currently in beta – and should be launching properly soon.

The Ray-Ban Meta Smart Glasses are set to get way better with AI (Image credit: Future / Philip Berne)

Their Look and Ask feature combines the intelligence of ChatGPT – or in this instance Meta's in-house Meta AI – with the image-analysis abilities of an app like Google Lens. This apparently lets you identify animals, discover facts about landmarks, and plan a meal based on the ingredients you have – it all sounds very sci-fi, and actually useful, unlike some AI applications.

Now combine those AI abilities with Meta's first-class Quest platform, which is home to the best software and developers working in the XR space.

While many apps likely couldn’t be ported to the new system due to hardware restrictions – as the glasses might not offer controllers, will probably be AR-only, and might be too small to offer as powerful a chipset or as much RAM as its Quest hardware – we hope that plenty will make their way over. And Meta’s existing partners would plausibly develop all-new AR software to take advantage of the new system.

Based on the many Quest 3 games and apps we've tried, even if just a few of the best make their way to the specs, they'd help make Meta's new product feel instantly useful – a factor that's a must for any new gadget.

Lastly, we'd hopefully see Meta's glasses adopt the single best Ray-Ban Meta Smart Glasses feature: their design. These things are gorgeous and comfortable, and their charging case is the perfect combination of fashion and function.

We couldn't ask for better-looking smart specs than these (Image credit: Meta)

Give us everything we have already design-wise, and throw in interchangeable lenses so we aren't stuck with sunglasses all year round – in the UK, where I'm based, they're only usable for about two weeks a year – and the AR glasses could be perfect.

We’ll just have to wait and see what Meta shows off, either at this year’s Meta Connect or in the future – and as soon as they're ready for prime time, we’ll certainly be ready to test them.

Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Move over, eye-tracking and handset controls for VR headsets and AR glasses: according to Meta CEO Mark Zuckerberg, the company is "close" to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast (shown below), Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started to discuss the Ray-Ban Meta Smart Glasses that use AI and their camera to answer questions about what you see (though annoyingly this is still only available to some lucky users in beta form).

He then went on to discuss “one of the wilder things we’re working on,” a neural interface in the form of a wristband – Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature, unlike the first human subject to be implanted with the tech.

Meta's EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would allow you to merely think about how you want to move your hand, and that would happen in the virtual world, without requiring big real-world motions.
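
To make the idea concrete, here's a deliberately toy illustration – classify a window of wrist-sensor readings by comparing it against per-gesture average signatures. This is our sketch, not Meta's pipeline, and the gestures, channel counts, and data are all made up:

```python
# Toy EMG gesture classifier: nearest-centroid matching on signal windows.
# Purely illustrative -- not Meta's actual algorithm or data format.
import numpy as np

GESTURES = ["pinch", "swipe", "tap"]

def train_centroids(windows: np.ndarray, labels: list[str]) -> dict[str, np.ndarray]:
    """Average the recorded signal windows for each gesture."""
    return {g: windows[[lbl == g for lbl in labels]].mean(axis=0) for g in GESTURES}

def classify(window: np.ndarray, centroids: dict[str, np.ndarray]) -> str:
    """Pick the gesture whose average signature is closest to this window."""
    return min(centroids, key=lambda g: np.linalg.norm(window - centroids[g]))

rng = np.random.default_rng(0)
train = rng.normal(size=(6, 400))   # 6 windows: 8 channels x 50 samples, flattened
labels = ["pinch", "pinch", "swipe", "swipe", "tap", "tap"]
centroids = train_centroids(train, labels)
print(classify(rng.normal(size=400), centroids))
```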

Zuckerberg has shown off Meta’s prototype EMG wristband before in a video (shown below) – though not the headset it works with – but what’s interesting about his podcast statement is he goes on to say that he feels Meta is close to having a “product in the next few years” that people can buy and use.

Understandably he gives a rather vague release date and, unfortunately, there’s no mention of how much something like this would cost – though we’re ready for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility and accessibility in Meta’s AR and VR tech.

The next next-gen XR advancement?

Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via the Look and Ask feature, or respond to a text message you've been sent without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you'd rather keep private.

The EMG wristband would allow you to type out these messages using subtle hand gestures, so you can maintain a higher level of privacy – though as the podcast hosts note, this has issues of its own, not least of which is schools having a harder time trying to stop students from cheating in tests. Gone are the days of sneaking in notes; it's all about secretly bringing AI into your exam.

Then there are the utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success, at least for the Ray-Ban Meta Smart Glasses, is that they're sleek and lightweight – if you glance at them, they're not noticeably different to a regular pair of Ray-Bans.

Adding cameras, sensors, and a chipset for managing hand gestures may affect this slim design. That is unless you put some of this functionality and processing power into a separate device like the wristband. 

The Xreal Air 2 Pro's displays (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they'll need in-built displays, perhaps like the Xreal Air 2 Pro's screens – but we'll just have to wait to see what the next Meta smart glasses have in store for us.

Lastly, there’s accessibility. By their very nature, AR and VR are very physical things – you have to physically move your arms around, make hand gestures, and push buttons – which can make them very inaccessible for folks with disabilities that affect mobility and dexterity.

These kinds of brain signal sensors start to address this issue. Rather than having to physically act, someone could simply think about doing so, and the virtual interface would interpret those thoughts accordingly.

Based on the demos shown so far, some movement is still required to use Meta's neural interface, so it's far from a perfect solution – but it's a first step to making this tech more accessible, and we're excited to see where it goes next.

Elon Musk’s Neuralink has performed its first human brain implant, and we’re a step closer to having phones inside our heads

Neuralink, Elon Musk's brain interface company, achieved a significant milestone this week, with Musk declaring on X (formerly Twitter), "The first human received an implant from Neuralink yesterday and is recovering well."

Driven by concerns that AI might soon outpace (or outthink) humans, Musk first proposed the idea of a brain-to-computer interface, then called Neural Lace, back in 2016, envisioning an implant that could overcome limitations inherent in human-to-computer interactions. Musk claimed that an interface that could read brain signals and deliver them directly to digital systems would massively outpace our typical keyboard and mouse interactions.

Four years later, Musk demonstrated early clinical trials with an uncooperative pig, and in 2021 the company installed the device in a monkey that used the interface to control a game of Pong.

It was, in a sense, all fun and games – until this week, and Musk's claim of a human trial and the introduction of some new branding.

Neuralink's first product is now called 'Telepathy', which, according to another Musk tweet, "Enables control of your phone or computer, and through them almost any device, just by thinking."

As expected, these brain implants are not, at least for now, intended for everyone. Back in 2020, Musk explained that the intention is “to solve important spine and brain problems with a seamlessly implanted device.” Musk noted this week that “Initial users will be those who have lost the use of their limbs. Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That is the goal.”

Neuralink devices like Telepathy are bio-safe implants comprising small disk-like devices (roughly the thickness of four coins stuck together) with ultra-fine wires trailing out of them that connect to various parts of the brain. The filaments read neural spikes, and a computer interface interprets them to understand the subject's intentions and translate them into action on, say, a phone or a desktop computer. In this first trial, Musk noted that "Initial results show promising neuron spike detection," but he didn't elaborate on whether the patient was able to control anything with his mind.
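
For a sense of what "neuron spike detection" means at its simplest, here's a textbook threshold detector – flag the points where a voltage trace jumps well above the noise floor. This is a generic sketch (the median-based noise estimate is a standard trick from the spike-sorting literature), not Neuralink's actual algorithm:

```python
# Generic threshold-crossing spike detector on a synthetic trace.
# Illustrative only -- not Neuralink's method.
import numpy as np

def detect_spikes(trace: np.ndarray, k: float = 4.5) -> np.ndarray:
    """Return sample indices where the trace rises above k x the estimated noise level."""
    sigma = np.median(np.abs(trace)) / 0.6745           # robust noise estimate
    above = trace > k * sigma
    return np.flatnonzero(above & ~np.roll(above, 1))   # keep rising edges only

rng = np.random.default_rng(1)
trace = rng.normal(0, 1, 30_000)        # one second of noise at a 30kHz sample rate
trace[[5_000, 12_000, 21_000]] += 12    # three injected "spikes"
print(detect_spikes(trace))             # -> roughly [5000, 12000, 21000]
```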

Musk didn't describe the surgical implantation process. Back in 2020, though, Neuralink introduced its Link surgery robot, which it promised would implant the Neuralink devices with minimal pain, blood, and, we're guessing, trauma. Considering that the implant is under the skin and skull, and sits on the brain, we're not sure how that's possible. It's also unclear if Neuralink used Link to install 'Telepathy.'

The new branding is not that far-fetched. While most people think of telepathy as people transmitting thoughts to one another, the definition is “the communication of thoughts or ideas by means other than the known senses.”

A phone in your head

Still, Musk has a habit of using hyperbole when describing Neuralink. During one early demonstration, he only half-jokingly said “It’s sort of like if your phone went in your brain.” He also later added that, “In the future, you will be able to save and replay memories.”

With the first Neuralink Telepathy device successfully installed, however, Musk appears to be somewhat more circumspect. There was no press conference, or parading of the patient before the reporters. All we have are these few tweets, and scant details about a brain implant that Musk hopes will help humans stay ahead of rapidly advancing AIs.

It's worth noting that for all of Musk's bluster and sometimes objectionable rhetoric, he was more right than he knew about where the state of AI would be by 2024. Back in 2016, there was no ChatGPT, Google Bard, or Microsoft Copilot. We didn't have AI built into Windows, or Photoshop's Firefly, or realistic AI-generated images, videos, and deepfakes. Concerns about AIs taking jobs are now real, and the idea of humans falling behind artificial intelligence sounds less like a sci-fi fantasy and more like our future.

Do those fears mean we're now more likely to sign up for our brain implants? Musk is betting on it.

Ray-Ban Meta smart glasses finally get the AI camera feature we were promised, but there’s a catch

When the Ray-Ban Meta smart glasses launched they did so without many of the impressive AI features we were promised. Now Meta is finally rolling out these capabilities to users, but they’re still in the testing phase and only available in the US.

During the Meta Connect 2023 announcement, we were told the follow-up to the Ray-Ban Stories smart glasses would get some improvements we expected – namely a slightly better camera and speakers – but also some unexpected AI integration.

Unfortunately, when we actually got to test the specs out, their AI features boiled down to very basic commands. You can instruct them to take a picture, record a video, or contact someone through Messenger or WhatsApp. In the US you could also chat to a basic conversational AI – like ChatGPT – though this was still nothing to write home about.

While the glasses' design is near-perfect, the speakers and camera weren't impressive enough to make up for the lacking AI – so overall, in our Ray-Ban Meta Smart Glasses review, we didn't look too favorably on the specs.

Press the button or ask the AI to take a picture (Image credit: Meta)

Our perception could be about to change drastically, however, as two major promised features are on their way: Look and Ask, and Bing integration.

Look and Ask is essentially a wearable, voice-controlled Google Lens with a few AI-powered upgrades. While wearing the smart glasses you can say "Hey Meta, look and…" followed by a question about what you can see. The AI will then use the camera to scan your environment so it can provide a detailed answer to your query. On the official FAQ, possible questions you can ask include "What can I make with these ingredients?" or "How much water do these flowers need?" or "Translate this sign into English."

To help the Meta glasses provide better information when you're using the conversational and Look and Ask features, the specs can now also access the internet via Bing. This should mean they can source more up-to-date data, letting them answer questions about sports matches that are currently happening, or provide real-time info on which nearby restaurants are the best rated, among other things.
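
Put together, the flow presumably looks something like the sketch below. To be clear, Meta hasn't published an API for the glasses, so every function here is a stand-in we've invented purely to show the shape of a Look and Ask request:

```python
# Shape-of-the-flow sketch: see -> optionally search -> answer.
# All functions are hypothetical stubs; Meta's real pipeline is unpublished.

def capture_camera_frame() -> bytes:
    """Stand-in for grabbing what the wearer is currently seeing."""
    return b"<jpeg bytes from the glasses' camera>"

def search_bing(question: str) -> str:
    """Stand-in for the new Bing lookup step (live scores, ratings, and so on)."""
    return f"<fresh web results for: {question}>"

def vision_language_model(frame: bytes, question: str, context: str | None) -> str:
    """Stand-in for the multimodal model that grounds its answer in the image."""
    extra = " plus web data" if context else ""
    return f"<answer to '{question}' based on the image{extra}>"

def look_and_ask(question: str, needs_fresh_data: bool = False) -> str:
    frame = capture_camera_frame()                                  # 1. see
    context = search_bing(question) if needs_fresh_data else None   # 2. optionally search
    return vision_language_model(frame, question, context)          # 3. answer

print(look_and_ask("How much water do these flowers need?"))
```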

Still not perfect

Ray-Ban Meta Smart Glasses in a range of colors (Image credit: Meta)

It all sounds very science fiction, but unfortunately these almost magical capabilities come with a catch. For now, the new features – just like the existing conversational AI – are in beta testing. 

So the glasses might have trouble with some of your queries and provide inaccurate answers, or not be able to find an answer at all. What's more, as Meta explains in its FAQ, any AI-processed pictures you take while part of the beta will be stored by Meta and used to train its AI. So your Look and Ask snaps aren't private.

Lastly, the Meta Ray-Ban smart glasses beta is only available in the US. So if you live somewhere else, like me, you won't be able to try these features out – and probably won't until 2024.

If you are in the US and happy with the terms of Meta's Privacy Policy, you can sign up for the Early Access program and start testing these new tools. For everyone else, hopefully these features won't be in beta for long, or at least won't be US-exclusive – otherwise we'll be left continuing to wonder why we spent $299 / £299 / AU$449 on smart specs that aren't all that much better than dumb Ray-Ban Wayfarers at half the cost.

YouTube reveals powerful new AI tools for content creators – and we’re scared, frankly

YouTube has announced a whole bunch of AI-powered tools (on top of its existing bits and pieces) that are designed to make life easier for content creators on the platform.

As The Verge spotted, at the ‘Made on YouTube’ event which just took place, one of the big AI revelations made was something called ‘Dream Screen’, an image and video generation facility for YouTube Shorts.

This lets a video creator simply type in something they'd like for a background – such as, for example, a panda drinking a cup of coffee. Given that request, the AI will take the reins and produce such a video background for the clip (or an image).

This is how the process will be implemented to begin with – you prompt the AI, and it makes something for you – but eventually, creators will be able to remix content to produce something new, we’re told.

YouTube Studio is also getting an infusion of AI tools that will suggest content that could be made by individual creators, generating topic ideas for videos that might suit them, based on what’s trending with viewers interested in the kind of content that creator normally deals in.

A system of AI-powered music recommendations will also come into play to furnish audio for any given video.


Analysis: Grab the shovel?

Is it us, or does this sound rather scary? Okay, so content creators may find it useful and convenient to be able to drop in AI-generated video or image backgrounds really quickly, and have some music layered on top, and so on.

But isn’t this going to just ensure a whole heap of bland – and perhaps homogenous – content flooding onto YouTube? That seems the obvious danger, and maybe one compounded by the broader idea of suggested content that people want to see (according to the great YouTube algorithm) being provided to creators on YouTube.

Is YouTube set to become a video platform groaning under the collective weight of content that gets quickly put together, thanks to AI tools, and shoveled out by the half-ton?

While YouTube seems highly excited about all these new AI utilities and tools, we can’t help but think it’s the beginning of the end for the video site – at least when it comes to meaningful, not generic, content.

We hope we’re wrong, but this whole brave new direction fills us with trepidation more than anything else. A tidal wave of AI-generated this, that, and the other, eclipsing everything else is clearly a prospect that should be heavily guarded against.

We’re amazed at how well Windows 11 runs with 176MB of RAM – but don’t try this at home

Windows 11 running with a tiny amount of system RAM is something we’ve seen before, but this is a new low for the OS (in a good way).

NTDEV, the developer responsible for the stripped-back version of Windows 11 called Tiny11, has got Microsoft's operating system functioning on a PC with only 176MB of system memory.

How is this possible? By removing pretty much everything the dev could, and using an “extensive trial and error” process to see which services and drivers Windows 11 required to boot (then getting rid of everything else).
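
That elimination process is easy to picture in miniature: try removing each component, keep the removal if the system still boots, and repeat. Here's a toy sketch of that greedy loop – the component names and the boot test are invented stand-ins, since NTDEV's real process involves imaging and booting actual Windows installs:

```python
# Toy greedy minimization loop. The boot test and component names are
# hypothetical stand-ins, not NTDEV's tooling.
from typing import Callable

def minimize(components: list[str], boots: Callable[[list[str]], bool]) -> list[str]:
    """Greedily drop every component the OS can still boot without."""
    required = list(components)
    for candidate in components:
        trial = [c for c in required if c != candidate]
        if boots(trial):        # still boots without it? then it wasn't needed
            required = trial
    return required

# Example: pretend only the kernel and session manager are truly required.
truly_needed = {"ntoskrnl", "smss"}
inventory = ["ntoskrnl", "smss", "print_spooler", "search_indexer", "widgets"]
print(minimize(inventory, lambda parts: truly_needed.issubset(parts)))
# -> ['ntoskrnl', 'smss']
```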

Tiny11 has previously been run in as little as 196MB of RAM, the catch with that being that the OS was incredibly slow. You could do things, just about, but at glacial speeds.

With NTDEV's successful attempt at firing up Windows 11 in just 176MB, the system runs relatively responsively – for example, Task Manager is brought up in just a few seconds. (No, that isn't snappy, but compare it to the 15-minute wait for Task Manager to appear in the 196MB demonstration.)

It’s an eye-opening difference, for sure, but there’s a big catch here, which we’ll discuss next.


Analysis: An impressive feat with a twist of a cheat

The cheat NTDEV has used to do this is that they’ve trimmed Tiny11 down even further and removed a major element – File Explorer – allowing Windows 11 to run faster because it doesn’t have to bring up the desktop.

That's right, as you can see in the video clip above, there is no desktop, with the system booting to a command line. You then have to manually type commands to run different apps and functions, which is hardly ideal. Indeed, it's a very clunky way of operating, but it does allow the OS to be much faster.

To put this achievement in perspective, Windows 11 normally requires 4GB of RAM to run, which is 4,096MB – roughly 23 times the amount the OS is seen running in here.

What’s the point of this? Well, it’s a fun exercise and nothing more, as nobody sane would want to operate Windows 11 in this manner. Much like huge overclocks on GPUs and CPUs that are completely impractical – and happen for seconds, only thanks to exotic cooling with say, liquid nitrogen – these are feats undertaken just to prove it can be done. Although in this case, the goal isn’t big numbers, but small ones – as tiny as possible.

Is 176MB a world-record low for running Windows 11 (functionally)? We think it could be, but then, technically, are you running Windows if there’s no desktop? There are still windows, of course – brought up via the command line – but without a desktop and icons, the core graphical interface, this isn’t really Windows, is it?

Still, hats off to NTDEV for what remains an impressive achievement.

Via Tom's Hardware

Google changed its privacy policy to reflect Bard AI’s data collecting, and we’re spooked

Google just changed the wording of its privacy policy, and it’s quite an eye-opening adjustment that has been applied to encompass the AI tech the firm is working with.

As TechSpot reports, there's a section of the privacy policy where Google discusses how it collects information (about you) from publicly accessible sources, and to clarify that, there's a note that reads: "For example, we may collect information that's publicly available online or from other public sources to help train Google's AI models and build products and features, like Google Translate, Bard and Cloud AI capabilities."

Previously, that paragraph said the publicly available info would be used to train "language models," and only mentioned Google Translate.

So, this section has been expanded to make it clear that training is happening with AI models and Bard.

It's a telling change, and basically points out that anything you post online publicly may be picked up and used by Google's Bard AI.


Analysis: So what about privacy, plagiarism, and other concerns?

We already knew that Google’s Bard, and indeed Microsoft’s Bing AI for that matter, are essentially giant data hoovers, extracting and crunching online content from all over the web to refine conclusions on every topic under the sun that they might be questioned on.

This change to Google's privacy policy makes it crystal clear that its AI is operating in this manner, and seeing it in cold, hard text on the screen may make some folks step back and question this a bit more.

After all, Google has had Bard out for a while now, so has been working in this manner for some time, and has only just decided to update its policy? That in itself seems pretty sly.

Don't want stuff you've posted online where other people can see it to be used to train Google's big AI machinery? Well, tough. If it's out there, it's fair game, and if you want to argue with Google, good luck with that. That's despite the obvious concerns around not just basic privacy, but also plagiarism – if an AI reply uses content written by others, picked up by Bard's training, where do the boundaries lie? Of course, it'd be impractical (or indeed impossible) to police that anyway.

There are broader issues around accuracy and misinformation when data is scraped from the web in a major-scale fashion, too, of course.

On top of this, there are worries recently expressed by platforms like Reddit and Twitter, with Elon Musk apparently taking a stand against "scraping people's public Twitter data to build AI models" via those frustrating usage limits that have just been brought in (which could be a big win for Zuckerberg and Threads, ultimately).

All of this is a huge minefield, really, but the big tech outfits making big strides with their LLM (large language model) data-scraping AIs are simply forging ahead, all eyes on their rivals and the race to establish themselves at the forefront, seemingly with barely a thought about how some of the practical side of this equation will play out.

Windows 11 23H2 update is real, we’re told – but it could disappoint

Windows 11’s big update for later this year, known as 23H2, is reportedly real and inbound for PCs – though how much impact it’ll make is another question (that we’ll come back to shortly).

Windows Latest has been doing some digging and tells us that it has spotted references to ‘23H2’ in a couple of documents, as well as Windows preview builds.

Furthermore, those references have also been seen in Windows 11 itself – in Settings, and in Winver, a command that displays the current version of Windows (and which labels a test build as 23H2, presumably).

Windows Latest underlines something else in its report, namely that the 23H2 update will be triggered via an enablement package, something we’ve already heard from the rumor mill in recent times.

This means that in theory – we need to take all of this with a fair old sprinkling of salt – Microsoft will preload the 23H2 update before it comes around to release. So Windows 11 users will only need to download a small update – the trigger, or enablement package – to receive the preloaded features.

This also suggests that the update will be a more minor affair, as generally this is the approach Microsoft takes with upgrades that are, shall we say, a little less ambitious in their scope – they are effectively quick and easy updates (relatively speaking).
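
As a rough mental model (our illustration, nothing to do with Microsoft's actual code), an enablement package works like a set of feature flags: the binaries arrive, dormant, in earlier monthly updates, and the tiny final package just switches them on. The feature names below are hypothetical:

```python
# Conceptual sketch of the enablement-package model. Feature names are
# hypothetical; this is an illustration, not Microsoft's implementation.

# Features pre-staged on disk by earlier updates, shipped disabled:
features = {"file_explorer_revamp": False, "photo_gallery": False}

def apply_enablement_package(flags: dict[str, bool], to_enable: list[str]) -> None:
    """The 'upgrade' is just a switch-flip, which is why it installs so quickly."""
    for name in to_enable:
        flags[name] = True

apply_enablement_package(features, ["file_explorer_revamp", "photo_gallery"])
print(features)  # both features now active after one tiny download
```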


Analysis: Making way for Windows 12?

This also marries with what we've seen in preview builds, namely that there aren't any huge Windows 11 features appearing in the pipeline thus far. Don't get us wrong, there's definitely some solid stuff present in the preview – some key interface changes, and the revamp of File Explorer (complete with a new photo gallery feature) – but the meaty changes appear to be somewhat thin on the ground.

Now, that could change, as there’s still some time before the release of 23H2 – perhaps as much as five months even. But the reality is the upgrade will probably arrive before November, and given the time taken to test larger bits of functionality, there isn’t much breathing room left to get that kind of testing in.

It also makes sense that Microsoft hasn’t officially said anything about 23H2 yet, simply because there’s not all that much to shout about, perhaps.

In reality, as Windows Latest points out, bigger moves are at this point probably being reserved for Windows 12. After all, Microsoft needs to make a splash with a new incarnation of Windows – something in all honesty it failed to do with Windows 11, which initially felt like more of a reskin of Windows 10 than anything else. (Albeit with some good changes on the design front, no doubt – but also frustrations).

So, Microsoft will likely be saving much of the juicier stuff for Windows 12 – or whatever next-gen Windows is called – and that’s quite possibly going to turn up later in 2024, so Windows Latest suggests. And that’s a believable prospect, given that Windows 10 will be pushed out of support in 2025, plus it also aligns with other chatter from the rumor mill, too.

If true, this means that next year’s annual update for Windows 11 (24H2) will likely also be a more minor affair – given that Microsoft will have shifted its attention to Windows 12. Then it’ll only be a matter of time before Microsoft ceases any meaningful feature updates for Windows 11, which is what just happened with Windows 10.

These were the most popular browser extensions during the pandemic

The transition to working from home during the pandemic drastically changed the way in which we use technology, with the browser becoming one of the most important tools for remote work.

Now, as we reach the two-year mark since the start of the pandemic, Mozilla has published a new blog post taking a closer look at which Firefox browser extensions were the most downloaded and used during the early days of lockdown.

As meetings went virtual, with employees relying on video conferencing software like Zoom to connect with their teams, the browser extension Zoom Scheduler saw a 1,522 percent increase in installs. This is because it integrates Google Calendar with Zoom so that users can schedule or start their Zoom meetings directly from their calendar.

Since remote workers also spent more time looking at their work-from-home monitors, the Dark Background and Light Text extension, which flips the colors of webpages to make them easier to read, saw a 351 percent increase in installs at the beginning of the pandemic. Likewise, the Tree Style Tab extension experienced a 126 percent increase in downloads, as it can help users deal with tab overload by opening browser tabs in a cascading "tree" format, similar to vertical tabs in Microsoft Edge.

Protecting our privacy and staying entertained

Cybercrime ran rampant during the beginning of the pandemic so in addition to using a VPN and antivirus software when working remotely, Firefox users also began installing privacy extensions for their browser.

Cookie AutoDelete, which eliminates unused cookies whenever you close a tab in Firefox, saw its install numbers skyrocket by 386 percent, and the extension averaged more than 206,000 installs per month between March and May of 2020.

As remote workers used Facebook's social media platform to stay connected during lockdown, Mozilla's own Facebook Container was another popular browser extension. This extension isolates your Facebook identity into a separate “container” so that the social media giant can't track your moves around the web.

Blocking trackers was also important to those working from home during the pandemic, which is why the Privacy Badger browser extension saw installs jump by 80 percent globally. An interesting thing about this extension is that it gets better at blocking trackers the longer you use it, since Privacy Badger "learns" more about the hidden trackers you naturally encounter while online.
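
That "learning" is a heuristic rather than machine-learning magic. Simplified from the EFF's documented approach – and sketched by us below, so treat the details as illustrative – once the same third-party domain has been caught following you across three or more different sites, it gets blocked:

```python
# Simplified sketch of a Privacy Badger-style heuristic (our illustration,
# loosely based on the EFF's documented 3-site threshold).
from collections import defaultdict

class TrackerHeuristic:
    def __init__(self, threshold: int = 3):
        self.seen_on: dict[str, set[str]] = defaultdict(set)  # tracker -> sites seen on
        self.threshold = threshold

    def observe(self, tracker_domain: str, first_party_site: str) -> None:
        """Record that a third-party domain appeared on a site you visited."""
        self.seen_on[tracker_domain].add(first_party_site)

    def should_block(self, tracker_domain: str) -> bool:
        """Block once the tracker has followed you across enough distinct sites."""
        return len(self.seen_on[tracker_domain]) >= self.threshold

h = TrackerHeuristic()
for site in ("news.example", "shop.example", "blog.example"):
    h.observe("tracker.example", site)
print(h.should_block("tracker.example"))  # True -- seen on 3 sites
```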

When it came to staying entertained while in lockdown, Firefox users installed the BetterTTV browser extension to alter the look and feel of Twitch, the Watch2gether extension to have watch parties with friends and colleagues online, and YouTube Non-Stop to solve the problem of the video platform's annoying "Video paused. Continue watching?" prompt.

Regardless of which browser you're currently using, browser extensions can help add to your online experience and make the software and services you depend on while working from home even more useful.
