Meta is on the brink of releasing AI models it claims have “human-level cognition” – hinting at new models capable of more than simple conversations

We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

At an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta’s president of global affairs Nick Clegg stating: “Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3.”

Meta’s large language models are publicly available, giving developers and researchers free and open access to the tech to create their own bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises much more impressive capabilities than the current model.

No official date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume the models will make an appearance in the coming weeks. 

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that “We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory.” OpenAI’s chief operating officer Brad Lightcap told the Financial Times in an interview that the next version of GPT would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, ‘thinking’. Lightcap also said “We’re going to start to see AI that can take on more complex tasks in a more sophisticated way,” adding “We’re just starting to scratch the surface on the ability that these models have to reason.”

As tech companies like OpenAI and Meta continue working on more sophisticated and ‘lifelike’ human interfaces, it is both exciting and somewhat unnerving to think about a chatbot that can ‘think’ with reason and memory. Tools like Midjourney and Sora have championed just how good AI can be in terms of quality output, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in the everyday.

With so many ethical and moral concerns still unaddressed in the tools available right now, I dread to think what kind of nefarious things could be done with more human AI models. Plus, you must admit it’s all starting to feel a little bit like the start of a sci-fi horror story.

You might also like…

TechRadar – All the latest technology news

Read More

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given “visual artists, designers, creative directors, and filmmakers” access to Sora, and showcased their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist In Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.


ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane sometime in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to the degree of focus and creative control the AI exerts over the text it produces. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
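To make the idea concrete, here's a minimal sketch – purely illustrative, not OpenAI's actual implementation – of how temperature is typically applied when a language model samples its next token. The logit values are made up for the example:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize to probabilities.
    A low temperature sharpens the distribution (direct, predictable picks);
    a high temperature flattens it (more creative - and weirder - picks)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens
logits = [2.0, 1.0, 0.1]

low = softmax_with_temperature(logits, 0.2)   # near-greedy: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter: unlikely tokens get a real chance

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At low temperature the top-scoring token is chosen almost every time; at high temperature the long tail of unlikely tokens gets sampled far more often, which is one plausible route to the kind of rambling output users reported.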

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?


Mark Zuckerberg thinks the Meta Quest 3 is better than Vision Pro – and he’s got a point

Mark Zuckerberg has tried the Apple Vision Pro, and he wants you to know that the Meta Quest 3 is “the better product, period”. This is unsurprising given that his company makes the Quest 3, but having gone through all of his arguments he does have a point – in many respects, the Quest 3 is better than Apple’s high-end model.

In his video posted to Instagram, Zuckerberg starts by highlighting the fact that the Quest 3 offers a more impressive interactive software library than the Vision Pro, and right now that is definitely the case. Yes, the Vision Pro has Fruit Ninja, some other spatial apps (as Apple calls them), and plenty of ported-over iPad apps, but nothing on the Vision Pro comes close to matching the quality or immersion levels of Asgard’s Wrath 2, Walkabout Mini Golf, Resident Evil 4 VR, The Light Brigade, or any of the many amazing Quest 3 VR games.

It also lacks fitness apps. I’m currently testing some for a VR fitness experiment (look out for the results in March) and I’ve fallen in love with working out with my Quest 3 in apps like Supernatural. The Vision Pro not only doesn't offer these kinds of experiences, but its design isn’t suited to them either – the hanging cable could get in the way, and the fabric facial interface would get drenched in sweat; a silicone facial interface is a must-have based on my experience.

The only software area where the Vision Pro takes the lead is video. The Quest platform is badly lacking when it comes to offering the best streaming services in VR – only having YouTube and Xbox Cloud Gaming – and it’s unclear if or when this will change. I asked Meta if it has plans to bring more streaming services to Quest, and I was told by a representative that it has “no additional information to share at this time.” 

Zuckerberg also highlights some design issues. The Vision Pro is heavier than the Quest 3, and if you use the cool-looking Solo Knit Band you won’t experience the best comfort or support – instead, most Vision Pro testers recommend you use the Dual-Loop band, which more closely matches the design of the Quest 3’s default band as it has over-the-head support.

You also can’t wear glasses with the Vision Pro; instead, you need to buy expensive inserts. On the Quest 3 you can just extend the headset away from your face using a slider on the facial interface and make room for your specs with no problem.

The Vision Pro being worn with the Dual-Loop band (Image credit: Future)

Then there’s the lack of controllers. On the Vision Pro, unless you’re playing a game that supports a controller, you have to rely solely on hand tracking. I haven’t used the Vision Pro, but every account I’ve read or heard – including Zuckerberg’s – has made it clear that hand tracking isn’t any more reliable on the Vision Pro than it is on Quest, with the general sentiment being that it works seamlessly 95% of the time – which is exactly my experience on the Quest 3.

Controllers are less immersive but do help to improve precision – making activities like VR typing a lot more reliable without needing a real keyboard. What’s more, considering most VR and MR software out there right now is designed for controllers, software developers have told us it would be a lot easier to port their creations to the Vision Pro if it had them.

Lastly, there’s the value. Every Meta Quest 3 and Apple Vision Pro comparison will bring up price so we won’t labor the point, but there’s a lot to be said for the fact that the Meta headset is only $499.99 / £479.99 / AU$799.99 rather than $3,499 (the Vision Pro isn’t yet available outside the US). Without a doubt, the Quest 3 is giving you way better bang for your buck.

The Vision Pro could be improved if it came with controllers (Image credit: Future)

Vision Pro: not down or out 

That said, while Zuckerberg makes some solid arguments he does gloss over how the Vision Pro takes the lead, and even exaggerates how much better the Quest 3 is in some areas – and these aren’t small details either.

The first is mixed reality. Compared to the Meta Quest Pro, the Vision Pro is leaps and bounds ahead, though reports from people who have also tried the Quest 3 suggest the Vision Pro doesn’t offer as much of an improvement over Meta’s latest headset – and in some ways it is worse, as Zuckerberg mentions.

To illustrate the Quest 3’s passthrough quality, Zuckerberg reveals that the video of him comparing the two headsets is being recorded using a Quest 3, and it looks pretty good – though having used the headset I can tell you this isn’t representative of what passthrough actually looks like. Probably due to how the video is processed, recordings of mixed reality on Quest always look more vibrant and less grainy than experiencing it live.

Based on less biased accounts from people who have used both the Quest 3 and Vision Pro it sounds like the live passthrough feed on Apple’s headset is generally a bit less grainy – though still not perfect – but it does have way worse motion blur when you move your head.

Mixed reality has its pros and cons on both headsets (Image credit: Apple)

Zuckerberg additionally takes aim at the Vision Pro’s displays, pointing out that they seem less bright than the Quest 3’s LCDs and that they offer a narrower field of view. Both of these points are right, but I feel he hasn’t given enough credit to two important details.

While he does admit the Vision Pro offers a higher resolution, he does so very briefly. The Vision Pro’s dual 3,680 x 3,140-pixel displays will offer a much crisper experience than the Quest 3’s dual 2,064 x 2,208-pixel screens. Considering you use this screen for everything, the advantage of better visuals can’t be overstated – and a higher pixel density should also mean the Vision Pro is more immersive, as you’ll experience less of a screen door effect (where you see the lines between pixels because the display is so close to your eyes).
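A quick back-of-the-envelope calculation, using the per-eye resolutions quoted above, shows just how large that gap is:

```python
# Per-eye pixel counts, based on the display resolutions quoted above
vision_pro_pixels = 3680 * 3140  # Apple Vision Pro: 11,555,200 pixels per eye
quest_3_pixels = 2064 * 2208     # Meta Quest 3: 4,557,312 pixels per eye

# The Vision Pro pushes roughly 2.5x as many pixels to each eye
print(round(vision_pro_pixels / quest_3_pixels, 2))
```

That's over two and a half times the pixels per eye, which is why the resolution advantage is hard to dismiss even if the panels are dimmer.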

Zuckerberg also ignores the fact that the Vision Pro’s screens are OLEDs. Yes, this means they’re less vibrant, but the upshot is they offer much better contrast for blacks and dark colors. Better contrast has been shown to improve a user’s immersion in VR in experiments by Meta and others, so I wouldn’t be surprised if the next Quest headset also incorporated OLEDs – rumors suggest it will, and I seriously hope it does.

Lastly, there’s eye-tracking which is something the Quest 3 lacks completely. I don’t think the unavailability of eye-tracking is actually a problem, but that deserves its own article.

This prototype headset showed me how important great contrast is (Image credit: Future)

Regardless of whether you agree with Mark Zuckerberg’s arguments or not, one thing that’s clear from the video is that the Vision Pro has got the Meta CEO fired up.

He ends his video stating his desire for the Quest 3 and Meta’s open model (as opposed to the closed-off walled garden Apple has, where you can only use the headset how Apple intends) to “win out again” like Windows did in the computing space.

But we’ll have to wait and see how it pans out. As Zuckerberg himself admits “The future is not yet written” and only time will tell if Apple, Meta or some new player in the game (like Samsung with its Samsung XR headset) will come out on top in the long run.


Google Maps could become smarter than ever thanks to generative AI

Google Maps is getting a dose of generative AI to let users search and find places in a more conversational manner, and serve up useful and interesting suggestions. 

This smart AI tech comes in the form of an “Ask about” user interface where people can ask Google Maps questions like where to find “places with a vintage vibe” in San Francisco. That will prompt the AI to analyze information about nearby businesses and places – like photos, ratings, and reviews – to serve up suggestions related to the question being asked.

From this example, Google said the AI tech served up vinyl record stores, clothing stores, and flea markets in its suggestions. These included the location along with its rating, reviews, number of times rated, and distance by car. The AI then provides review summaries that highlight why a place might be of interest. 

You can then ask follow-up questions, with the AI remembering your previous query and using it for context on your next search. For example, when asked, “How about lunch?” the AI will take into account the “vintage vibe” comment from the previous prompt and use that to offer an old-school diner nearby.

Screengrabs of the new generative AI features on Google Maps, showing searches and suggestions (Image credit: Google)

You can save the suggestions or share them, helping you coordinate with friends who might all have different preferences like being vegan, checking if a venue is dog friendly, making sure it is indoors, and so on.

By tapping into the search giant’s large-language models, Google Maps can analyze detailed information using data from more than 250 million locations, and photos, ratings and reviews from its community of over 300 million contributors to provide “trustworthy” suggestions. 

The experimental feature is launching this week but is only coming to “select Local Guides” in the US. It will use these members' insights and feedback to develop and test the feature before what’s likely to be its eventual full rollout, which Google has not provided a date for.

Does anyone want this?

Users on the Android subreddit were very critical of the feature, with some referring to AI as a buzzword that big companies are chasing for clout. User lohet stated: “Generative AI doesn't have any place in a basic database search. There's nothing to generate. It's either there or it's not.”

Many said they would rather see Google improve offline Maps and its location-sharing features. User chronocapybara summarized the feelings of others in the forum by saying: “If it helps find me things I'm searching for, I'm all for it. If it offloads work to the cloud, making search slower, just to give me more promoted places that are basically ads, then no.”

However, AI integration in our everyday apps is here to stay, and its inclusion in Google Maps could make it easier for users to discover brand-new places while helping smaller businesses gain attention and find an audience.

Until the feature rolls out, you can make the most of Google Maps with our 10 things you didn't know Google Maps could do.


Windows 10 might get Copilot sooner than you think

Windows 10 should get Microsoft’s Copilot AI – a feature that was previously exclusive to Windows 11 – in the near future, and some users might get the desktop-based assistant sooner than you’d think.

As you may have noticed, Copilot came to Windows 10 last week, but only in testing for consumers (Windows 10 Home, and non-business Pro editions). And we’ve just had a clarification about how Copilot will be deployed to Windows 10 users.

As Windows Latest spotted, in a blog post penned earlier this week, Microsoft tells us: “Copilot will begin rolling out to devices running Home and unmanaged [consumer] Pro editions of Windows 10, version 22H2 in the near term. We will roll out this experience in phases using Controlled Feature Rollout (CFR) technology over several months.”

Notice that the full rollout will begin in the ‘near term’, so that certainly suggests we’ll be seeing Copilot in Windows 10 soon enough.

However, it won’t be for everyone. As noted, Copilot will be pushed out in stages, so only some users will get it, and then its reach will gradually be expanded.

In short, a lucky few – presuming you want Copilot, mind – could be getting the AI assistant quite soon indeed.

The deployment of Copilot in Windows 10 will mirror that of Windows 11, we’re also told, meaning that it’ll come to North America first, as well as parts of Asia and South America. Other regions will be covered down the line.


Analysis: Driving adoption of Copilot

It makes sense that Microsoft would want to get Copilot live in Windows 10 as soon as possible.

After all, witness the remarkable turnaround from the previous announcement that Windows 10 would get no major new features, to suddenly adding the biggest new feature of all from Windows 11. This is presumably the result of Microsoft wanting to drive up the numbers of those using its AI – and Windows 10 users are a billion strong, of course. That’s a very big number indeed.

If this is true, and Microsoft is looking to tap into the Windows 10 user base to this end, then the company will likely want to move sooner rather than later.

More broadly, it seems that Microsoft wants to jam Copilot into pretty much everything it can. As an example, Windows Latest also flagged up the addition of Copilot to the command line in Windows 11 (and presumably Windows 10 eventually).

The theory is that Copilot in Windows 10 will be pretty much equivalent to the Windows 11 version, but as we stand at the beginning of the porting process to the older OS, that isn’t yet true, and the initial incarnation is more limited. Mind you, it’s still a barebones affair in Windows 11, truth be told, and Microsoft has a lot of work to do to fulfill its vision of an AI that can manipulate all manner of settings at the user’s request.


Adobe’s new photo editor looks even more powerful than Google’s Magic Editor

Adobe MAX 2023 is less than a week away, and to promote the event, the company recently published a video teasing its new “object-aware editing engine” called Project Stardust.

According to the trailer, the feature has the ability to identify individual objects in a photograph and instantly separate them into their own layers. Those same objects can then be moved around on-screen or deleted. Selecting can be done either manually or automatically via the Remove Distractions tool. The software appears to understand the difference between the main subjects in an image and the people in the background that you want to get rid of.

What’s interesting is moving or deleting something doesn’t leave behind a hole. The empty space is filled in most likely by a generative AI model. Plus, you can clean up any left-behind evidence of a deleted item. In its sample image, Adobe erases a suitcase held by a female model and then proceeds to edit her hand so that she’s holding a bouquet of flowers instead.  

Project Stardust editing (Image credit: Adobe)

Project Stardust generative AI (Image credit: Adobe)

The same tech can also be used to change articles of clothing in pictures. A yellow down jacket can be turned into a black leather jacket or a pair of khakis into black jeans. To do this, users will have to highlight the piece of clothing and then enter what they want to see into a text prompt. 

Stardust replacement tool (Image credit: Adobe)

AI editor

Functionally, Project Stardust operates similarly to Google’s Magic Editor, a generative AI tool present on the Pixel 8 series. The tool lets users highlight objects in a photograph and reposition them in whatever manner they please. It, too, can fill gaps in images by creating new pixels. However, Stardust feels much more capable. The Pixel 8 Pro’s Magic Eraser can fill in gaps, but neither it nor Magic Editor can generate new content. Additionally, Google’s version requires manual input whereas Adobe’s software doesn’t need it.

Seeing these two side by side, we can’t help but wonder if Stardust is actually powered by Google’s AI tech. Very recently, the two companies announced they were entering a partnership and offering a free three-month trial of Photoshop on the web for people who buy a Chromebook Plus device. Perhaps this “partnership” runs a lot deeper than free Photoshop, considering how similar Stardust is to Magic Editor.

Impending reveal

We should mention that Stardust isn't perfect. If you look at the trailer, you'll notice some errors like random holes in the leather jacket and strange warping around the flower model's hands. But maybe what we see is Stardust in an early stage. 

There is still a lot we don’t know, like whether it’s a standalone app or whether it will be housed in, say, Photoshop, and whether Stardust will release in beta first or arrive as a final version. All will presumably be answered on October 10 when Adobe MAX 2023 kicks off. What’s more, the company will be showing other “AI features” coming to “Firefly, Creative Cloud, Express, and more.”

Be sure to check out TechRadar’s list of the best Photoshop courses online for 2023 if you’re thinking of learning the software, but don’t know where to start. 


The Apple Vision Pro could pack in more storage than the iPhone 15

We know that the Apple Vision Pro isn't going to be available to buy until 2024, but we're learning a little bit more about the specs of the device through leaks from early testers – including how much on-board storage the augmented reality headset might pack.

According to iPhoneSoft (via 9to5Mac), the Vision Pro is going to offer users 1TB of integrated storage as a default option, with 2TB or 4TB a possibility for those who need it (and who have bigger budgets to spend).

Alternatively, it might be that 256GB is offered as the amount of storage on the starting price Vision Pro headset, and that 512GB and 1TB configurations are the ones made available for those who want to spend more.

This information is supposedly from someone who has been given an early look at the AR device, and noticed the storage space listed on one of the settings screens. It's more than the standard iPhone 15 model is expected to have – if it sticks with the iPhone 14 configurations, it will be available with up to 512GB of storage.

Plenty of unknowns

It does make sense for a device like this to offer lots of room for apps and files, and it might go some way to explaining the hefty starting price of $3,499 (about £2,750 / AU$5,485). Watch this space for more Vision Pro revelations as the launch date gets closer.

While the Apple Vision Pro is now official, there's still a lot we don't know about it – and it may be that we won't find out everything until we actually have the headset in our hands and are able to test it fully.

There have been rumors that two more Vision Pro headsets are in the pipeline, and that some features – such as making group calls using augmented reality avatars – will be held back until those later generations of the device go on sale.

We're also hearing that Apple might not be planning to make a huge number of these headsets, so availability could be a problem. Right now it does feel like a high-end, experimental device rather than something aimed at the mass market.


Windows 11’s next big update could arrive sooner than expected

Windows 11’s next big update, known as 23H2, could be coming sooner rather than later this year.

Or at least that’s the suggestion based on clues Windows Latest picked up in the July cumulative update for Windows 11.

In that patch, the tech site notes that it has found references to several packages relating to ‘Moment 4’.

As you may be aware, the last feature drop for Windows 11 was Moment 3, so it follows that this is the next feature update – except this is a full upgrade for the OS. In short, Moment 4 is the 23H2 update.

Windows Latest further observes: “We found that Microsoft is testing an enablement package named Microsoft-Windows-23H2Enablement-Package.”

This lines up with what we know about 23H2, as Microsoft has already confirmed that it will be an enablement package. This means that the files for the upgrade will be preloaded to Windows 11 PCs, and can be sent live with a simple flick of an ‘enablement’ switch – a small download that’s easily applied at launch time.


Analysis: Early groundwork is a good sign

These clues being in place in Windows 11 now show the groundwork for 23H2 is well underway, suggesting we could see the annual update for the OS soon enough. Is there a chance it could keep pace with 22H2 and arrive in September? Maybe, though the rumor mill has been pointing to Q4 for 23H2, so October may still be a more realistic release date.

We shall see, but the Beta channel for Windows 11 just got a bunch of new stuff – including a File Explorer revamp, and RGB lighting hub – and again that suggests progress is ticking along nicely with the 23H2 update.

What could work against the ‘sooner rather than later’ theory is that Microsoft’s Copilot AI is still in a very barebones state, and it’s supposed to be included with 23H2. Our personal theory here, though, is this won’t make the cut for the 23H2 update – well, either that, or it’ll be a very limited version of Windows Copilot that’s released. And we don’t think the latter would be a very clever move for Microsoft in terms of making a good first impression with the AI (as we discussed recently in more depth).


Microsoft’s cloud ambitions for Windows could kill off desktop PCs – and sooner than we expected

The rumor mill believes Windows 365 is coming in consumer flavors, one of which will be a ‘family’ bundle, and we’ve also heard some chatter on potential pricing for this subscription.

Windows 365 is a cloud-based installation of Windows 11, meaning it’s streamed to you, rather than being installed on your local PC, and it’s currently available to businesses in three different plans (and there are separate products for the enterprise world, too, all with Office apps bundled).

So the rumor, as Windows Latest has heard from its sources, is that there will be Windows 365 consumer plans aimed at everyday users, with the theory proposed that one will be an individual subscription, and the other a family bundle (for multiple users which will work out cheaper than the single-person plan, naturally).

There’s nothing firm on pricing yet, unsurprisingly, but the rumored internal chatter is that Microsoft has been mulling charging at least $10 per month for the cheapest Windows 365 consumer product, or perhaps more like $20 for that entry-level subscription.

Take all of this, and especially that nugget on pricing, with a whole heap of salt. We’re told that pricing is very much up in the air at this stage, anyway, but we can expect that consumer plans will likely be cheaper than business subscriptions (and we’d hope that’d be the case).

What timeline are we looking at for the launch of consumer Windows 365? Windows Latest reckons that the cloudy spin on Windows 11 will arrive in the fall, so in theory, it could be just a few short months away.


Analysis: The inevitably cloudy future for consumers

We’re not sure that a release is that near on the horizon, in all honesty – we’re pretty skeptical Microsoft is going to move quite that quickly here.

That said, this route definitely seems to be in the cards, as evidenced by materials that have come to light recently due to the FTC vs Microsoft hearing, which make the software giant’s cloud ambitions very clear.

Namely that Microsoft very much sees the future of the consumer space as shifting Windows 11 to the cloud, and an installation of the OS being managed on a remote server, and streamed to any device, anywhere, rather than sitting on your local PC. And these fresh rumors are certainly a weighty hint that this could happen more quickly than we anticipated.

However, before going all-in with the cloud PC and ruling out local installations completely, Microsoft might first offer Windows 11 users some sort of compromise, involving a dual-boot system that can either be used locally or as a cloud PC.

The best of both worlds, if you will, and a slightly easier pill to swallow for those who have concerns about going fully to the cloud with their PC. (Worries that may be numerous around security and data privacy, to pick a couple of obvious issues with Microsoft having all your apps and data on its servers).

Indeed, there’s already work underway in testing with Windows 365 Boot for Windows 11, which allows for logging into either a cloud PC instance or the local installation of Windows on the desktop PC in front of you.

We really don’t know exactly how Microsoft will approach the idea of the cloud PC in the consumer space, but we’ve got a feeling it’s going to have to be pretty cautious and tentative, because this is such a big change. What we do know is the cloud PC concept is almost definitely coming to consumers at some point, and expect to hear more on the rumor mill before too long, no doubt.

Another possibility Microsoft may be exploring is cheap subscription-based and cloud-connected PCs subsidized by adverts.
