A new Meta Quest VR headset could launch in early 2025 with LG OLED displays


  • Meta Quest Pro 2 rumored for 2025 release
  • LG expected to make OLED displays for it
  • OLED could provide the visual boost the next Quest Pro needs

The Apple Vision Pro might be the talk of the town in the high-end VR space right now, but it won’t have that space to itself for long. Not only will it have to contend with rivals like the Samsung XR headset that are expected to launch later this year, but new reports claim the Meta Quest Pro 2 will arrive in early 2025 to compete with it too.

The original Meta Quest Pro was something of a disappointment. At the time it seemed like a decent option for people looking for a high-end standalone VR headset – especially compared to rivals like the HTC Vive XR Elite. But since the launch of the Meta Quest 3 and Vision Pro – the former of which is not only cheaper but actually has some better specs in the mixed-reality department – it’s fallen by the wayside.

Meta is clearly hoping to make its next Quest Pro device a standout VR gadget. A Korea Economic Daily report (translated from Korean) cites unnamed industry sources who said Meta CEO Mark Zuckerberg is set to meet with the CEO of LG Electronics to discuss a partnership for its next Pro devices.

LG OLED coming to Quest?

It’s been rumored for some time that LG is looking to make an XR device of some kind – either its own or one in partnership with another brand, like the collaborative Google and Samsung XR headset – and back in February 2023 we first heard whispers that Meta wanted LG to create OLED displays for its headsets.

While we hope these high-end screens will make their way to the more budget-friendly Quest line, it’s more reasonable to assume that a pricier Quest Pro line would be upgraded to LG OLEDs first.


LG makes fantastic OLED TVs like the LG C3 (Image credit: LG)

Yes, we know the original Oculus Quest got there first, but since then Meta has relied on LCD panels with better brightness and resolution, because the original OLED Quest couldn’t take full advantage of the display tech. Its pixels took too long to turn on and off, so you could never really experience true blacks – despite true blacks being the main reason to use an OLED in the first place.

LG’s next-gen panels should hopefully be able to offer top-of-the-line visuals – one of the four things we want to see from the Meta Quest Pro 2 – but we’ll have to wait and see.

Thankfully, the recent Korea Economic Daily report said we might not be waiting too long. The Meta Quest Pro 2 is apparently being prepared for an early 2025 launch, and while this is a slight departure from Meta’s usual October release strategy it makes some sense. 

However, as with all rumors, we must remember to take these reports with a pinch of salt. Until Meta or LG make an official announcement there's no guarantee they’re working together on the next Quest Pro or any kind of headset – nor a guarantee of when it’ll launch and what specs it might have.

As soon as we do hear anything more concrete, or we spy any interesting leaks and rumors, we’ll be sure to keep you informed.


Windows 11 could soon deliver updates that don’t need a reboot

Windows 11 could soon run updates without rebooting, if the rumor mill is right – and there’s already evidence this is the path Microsoft is taking in a preview build.

This comes from a regular source of Microsoft-related leaks, namely Zac Bowden of Windows Central, who first of all spotted that Windows 11 preview build 26058 (in the Canary and Dev channels) was recently updated with an interesting change.

Microsoft is pushing out updates to testers that do nothing and are merely “designed to test our servicing pipeline for Windows 11, version 24H2.” The key part is that we’re informed that those who have VBS (Virtualization Based Security) turned on “may not experience a restart upon installing the update.”

Running an update without requiring a reboot is known as “hot patching” and this method of delivery – which is obviously far more convenient for the user – could be realized in the next major update for Windows 11 later this year (24H2), Bowden asserts.

The leaker has tapped sources for further details, and observes that we’re talking about hot patching for the monthly cumulative updates for Windows 11 here. So the bigger upgrades (the likes of 24H2) wouldn’t be hot-patched in, as clearly there’s too much work going on under the hood for that to happen.

Indeed, not every cumulative update would be applied without a reboot, Bowden further explains. This is because hot patching uses a baseline update, one that can be patched on top of, but that baseline model needs to be refreshed every few months.

Add seasoning with all this info, naturally, but based on the testing going on – which specifically mentions 24H2 – it looks like Microsoft is up to something here.


Analysis: How would this work exactly?

What does this mean for the future of Windows 11? Well, possibly nothing. After all, this is mostly chatter from the grapevine, and what’s apparently happening in early testing could simply be abandoned if it doesn’t work out.

However, hot patching is something that is already employed with Windows Server, and the Xbox console as well, so it makes sense that Microsoft would want to use the tech to benefit Windows 11 users. It’s certainly a very convenient touch, though as noted, not every cumulative update would be hot-patched.

Bowden believes the likely scenario would be quarterly cumulative updates that need a reboot, followed by hot patches in between. In other words, we’d get a reboot-laden update in January, say, followed by two hot-patched cumulative updates in February and March that could be completed quickly with no reboot needed. Then, April’s cumulative update would need a reboot, but May and June wouldn’t, and so on.
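To make that rumored cadence a little more concrete, here’s a minimal, purely illustrative sketch in Python of Bowden’s quarterly model – the January start and the three-month rhythm are taken from his example, not from anything Microsoft has confirmed.

```python
# Illustrative only: which monthly Windows 11 cumulative updates would need a
# reboot under the rumored quarterly "baseline" model (hypothetical schedule).

def needs_reboot(month: int) -> bool:
    """Baseline months (Jan, Apr, Jul, Oct) get a full update and a reboot."""
    return month % 3 == 1

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

for number, name in enumerate(months, start=1):
    kind = "baseline update (reboot required)" if needs_reboot(number) else "hot patch (no reboot)"
    print(f"{name}: {kind}")
```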

As mentioned, annual updates certainly wouldn’t be hot-patched, and neither would out-of-band security fixes for example (as the reboot-less updates rely on that baseline patch, and such a fix wouldn’t be based on that, of course).

This would be a pretty cool feature for Windows 11 users, because dropping the need to reboot – or to be forced into a restart, in some cases – is obviously a major benefit. Is it enough to tempt upgrades from Windows 10? Well, maybe not, but it is another boon to add to the pile for those holding out on Microsoft’s older operating system (assuming they can upgrade to Windows 11 at all, of course, which is a stumbling block for some due to PC requirements like TPM).


Microsoft Paint update could make it even more Photoshop-like with handy new tools

Microsoft Paint received a plethora of new features late last year, introducing layers, a dark mode, and AI-powered image generation. These updates brought Microsoft Paint up to speed with the rest of Windows 11’s modern look and feel after years of virtually no meaningful upgrades, and it looks like Microsoft still has plans to add even more features to the humble art tool.

X user @PhantomOfEarth made a post highlighting potential changes spotted in the Canary and Dev channels, and we could see these new features implemented in Microsoft Paint very soon. These channels are part of the Windows Insider Program, which allows Windows enthusiasts and developers to sign up and get an early look at upcoming releases and new features that may be on the way.


We do have to take the features we see in such developer channels with a pinch of salt, as it’s common to see a cool upgrade or new piece of software appear in a channel but never actually make it out of the development stage. That being said, @PhantomOfEarth originally spotted the big changes that came to Windows 11 Paint last year in the same channels, so there’s a good chance that the brush size slider and layers panel update now present in the Canary build will actually come to fruition in a public update soon.

Show my girl Paint some love

It’s great to see Microsoft continue to show some love for the iconic Paint app, as it had been somewhat forgotten about for quite some time. It seems like the company has finally taken note of the app’s charm, as many of us can certainly admit to holding a soft spot for Paint and would hate to see it abandoned. I have many memories of using Paint: as a child in IT class learning to use a computer for the first time, or firing it up to do some casual scribbles while waiting for my family’s slow Wi-Fi to connect.

These proposed features won’t make Paint the next Photoshop (at least for now), but they do bring the app closer to being a simple, free art tool that everyday users can turn to. Cast your mind back to the middle of last year, when Photoshop introduced image generation capabilities – if you wanted to use them, you had to pay for Adobe Firefly access or a Photoshop license. Now, if you’re looking to do something quick and simple with AI image generation, you can do it in Paint.

Better brush size control and layers may not seem like the most important or exciting new features, especially compared to last year's overhaul of Windows Paint, but it is proof that the team at Microsoft is still thinking about Paint. In fact, the addition of a proper layers panel will do a lot to justify the program’s worth to digital artists. It could also be the beginning of a new direction for Paint if more people flock back to the revamped app. I hope that Microsoft continues to improve it – just so long as it remains a free feature of Windows.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new open-source models, Google has also put out a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, both of which have been pre-trained to filter out sensitive or personal information. Both versions of the model have also been tested with reinforcement learning from human feedback, to significantly reduce the chances of any chatbots based on Gemma spitting out harmful content.

 A step in the right direction 

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.
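For anyone curious what that looks like in practice, here’s a minimal, hypothetical sketch using the Hugging Face Transformers library – it assumes the weights are published under an ID along the lines of google/gemma-2b and that you’ve accepted Google’s terms; it isn’t an official Google example.

```python
# A rough sketch of running a small Gemma model locally on a laptop CPU.
# The model ID below is an assumption for illustration; the 7B variant would work the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)  # add device_map="auto" to use a GPU

prompt = "Write a short, friendly greeting for a new chatbot."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```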


Your Meta Quest 3 is getting a hand-tracking upgrade that could unlock foot-tracking

In our Apple Vision Pro review, we commended the headset for wowing us with its dual hand-and-eye-tracking system. Meta has now launched its own dual-tracking system for the Meta Quest 3 and Meta Quest Pro, though the eye-tracking has been swapped for controller tracking, so you can use controllers and your hands simultaneously – and people are already using it for foot-tracking.

Admittedly, this feature isn’t entirely new. Since hand tracking launched it has been possible to swap between the two within apps that support both – though there was a delay when switching modes, and as soon as you put the controllers down they’d disappear from your view (making them a challenge to find again while in VR).

This new ‘Multimodal’ method that tracks both at the same time has technically been around for a while too. It launched back in July 2023; however, it was in beta, which meant official Quest Store apps and App Lab software couldn’t implement it. Instead, software using Multimodal tracking had to be shared via third-party app stores like SideQuest.


Now, with Quest update v62, it has launched fully (via UploadVR), meaning VR games and apps distributed through the native Quest Store can add Multimodal tracking for Meta Quest 3 and Quest Pro users. This not only allows apps to transition instantly from one method to the other, it also means you can use controllers and your hands at the same time, opening up new ways to interact with virtual worlds.

Perhaps we’ll see an adventure game where you wield a sword in one hand and perform Doctor Strange-like spells with your free hand, or existing apps that only use one controller could add some hand-tracking features – even something as simple as the ability to make hand gestures to improve communication in multiplayer games.

People who have been testing the feature have pointed out this new system could allow tracking of multiple body parts at once. In one example, Twitter user @Lunayian attaches Quest Pro controllers to their feet so they can use their hands and feet in VR without a complex tracking rig.


Unfortunately, the Oculus Quest 2 lacks the processing power to enable simultaneous hand and controller tracking with its standard controllers. However, you could unlock this feature if you buy and pair Touch Pro controllers with the headset – they’ll cost you $299.99 / £299.99 / AU$479.99 for two – as they track themselves, allowing the Quest 2 to focus on tracking your hands.

You might want to hold off on picking up the Touch Pro controllers though, as while this feature is now live for developers to use in official Quest Store apps, it’ll take time for it to appear in your favorite VR and MR software. Hopefully, we won’t be waiting long.


Yes, Apple Vision Pro is being returned to stores – but this could actually be a good thing

We’ve officially passed the two-week return window for the Apple Vision Pro, which allowed people who purchased the headset on launch day to hand it back. Social media buzz has suggested that the Vision Pro was being returned in droves. However, inside sources suggest this may not be the case – and offer an interesting insight into who is returning their headset, and why. 

In our Apple Vision Pro review, we touched on the positives and negatives of using the device and rounded up our top three reasons why users may end up returning the headset. As Apple’s first attempt at a mixed-reality headset, the product was always going to be rather polarizing. It lacks the backing of familiarity that other Apple products like a new iPhone or MacBook always have at this point. 

Not to mention the fact that the Apple Vision Pro is expensive. Retailing at $3,499 / £2,788 / AU$6,349, it’s easy to imagine more than a few returns are down to buyer’s remorse – I know I’d slink back to the Apple Store as soon as I found even the slightest discomfort or annoyance (or looked at my bank account, frankly), especially if I couldn’t get my prescription sorted out for the headset or just found it uncomfortable to wear.

In fact, AppleInsider reached out to sources within Apple’s retail chain for more info on the headset returns, and noted that discomfort is probably one of the biggest drivers. “Most of our returns, by far, are within a day or two. They're the folks that get sick using it,” one source told AppleInsider’s Mike Wuerthele. “The pukers, the folks that get denied by prescription-filling, that kind of thing. They know real quick.”

Influencer investments – gotta get that content!

The second group of people that seem to be making up most of the returns are influencers and YouTubers. Again, the Vision Pro is a product many people want to get their hands on, so it would make sense that online tech ‘gurus’ would want to jump on the trend at launch. 

With the two-week return window offered by Apple, that’s more than enough time to milk the headset for as much content as possible, then give it back and get your money back too. If you’re a tech content creator, it’s easier to look at the Vision Pro as a short-term investment rather than a personal splurge.

“It's just the f***ing YouTubers so far,” one retail employee told Wuerthele. 

According to AppleInsider's sources, however, the return process isn’t as simple as just boxing the headset up and dropping it off. Each return is accompanied by a detailed, lengthy survey that will allow users to go in-depth on their reason for return and their experience with the product. This is great news in the long run because it could mean any future iterations of the Apple Vision Pro will be designed and built with this feedback in mind – and the Vision Pro is already arguably a public beta for what will presumably eventually become the ‘Apple Vision’.

Beyond AppleInsider's coverage, prolific Apple leaker and Bloomberg writer Mark Gurman has (unsurprisingly) chipped into the discussion surrounding Vision Pro returns. He reported much the same; some people think it's uncomfortable or induces sickness, while for others it's simply too much money. 

Gurman spoke to a Los Angeles professional who bought and returned the headset, and who said “I loved it. It was bananas,” before explaining that he simply hadn’t found himself using it that often, and that the price was just too much: “If the price had been $1,500 to $2,000, I would have kept it just to watch movies, but at essentially four grand, I’ll wait for version two.”

If users are returning it because they’re not using it as much as they thought they would, certain aspects are making them feel nauseous, or the headset is just really uncomfortable on their head, Apple can take this feedback on board and carry it forward. It’s a common criticism of VR headsets in general, to be fair – perhaps some people just aren’t built for using this type of product?


Samsung’s XR headset could be launching soon according to a new report

We might not have heard much about it since it was announced in February 2023, but Samsung is apparently still working on the Samsung XR headset (XR being a catchall for VR, MR, and AR), and a new rumor suggests we’ll see it this year.

We know for certain that the Samsung headset is being made in partnership with Google – Samsung has said as much itself – and we know the device will use the Snapdragon XR2 Plus Gen 2 chipset according to a Qualcomm announcement, but that’s about it from official channels. Unofficial reports peg the headset as a cheaper Apple Vision Pro rival with high-end performance but a not-so-high-end price tag – with a rumor saying Samsung delayed the headset to help it stand up better against Apple’s device.

This should mean not only solid performance but also high-end displays, with the headset believed to boast dual OLED screens (one for each eye), likely similar to the 1.03-inch OLEDoS (OLED on Silicon) display – with a 3,500 pixels-per-inch density – that Samsung showed off earlier this year.

That said, these screens were created by eMagin rather than the Samsung Display team, and Samsung only acquired the company fairly recently, so there’s a chance these displays will be reserved for a later headset model (assuming we even see more than one).

Key Snapdragon XR2 Plus Gen 2 specs, including support for 4.3K displays, 8x better AI performance, and 2.5x better GPU performance

The Snapdragon XR2 Plus Gen 2 promises big things for the Samsung headset (Image credit: Qualcomm)

But given the headset was apparently delayed to give the team more time to improve its screens, there’s a chance these impressive OLED panels could make their way into the headset. We hopefully won’t be waiting long to find out if they have: a new report (translated into English) from the Korea Economic Daily (nicknamed Hankyung) suggests the Samsung XR headset will drop in the second half of the year.

We should always take rumors with a pinch of salt, but this isn’t the first time we’ve heard the Samsung headset will launch in late 2024 – it was previously suggested that the Samsung VR headset might arrive alongside the Galaxy Z Flip 6, which is also due to launch in the second half of 2024.

If it is coming this year, let's hope Samsung has had enough time to learn from its rivals' mistakes. Mark Zuckerberg might think the Meta Quest 3 is better than the Vision Pro but it has some issues of its own, and the Vision Pro isn’t perfect either according to all the people sending it back to Apple for a refund.


Apple could be working on a new AI tool that animates your images based on text prompts

Apple may be working on a new artificial intelligence tool that will let you create basic animations from your photos using a simple text prompt. If the tool comes to fruition, you’ll be able to turn any static image into a brief animation just by typing in what you want it to look like. 

According to 9to5Mac, Apple researchers have published a paper detailing procedures for manipulating image graphics using text commands. The tool, called Keyframer, would use natural language prompts to tell the proposed AI system how to manipulate and animate a given image.

Say you have a photo of the view from your window, with trees in the background and even cars driving past. From what the paper suggests, you’ll be able to type commands such as ‘make the leaves move as if windy’ into the Keyframer tool, which will then animate the specified part of your photo.

You may recognize the name ‘keyframe’ if you’re an Apple user, as it’s already part of Apple’s Live Photos feature – which lets you scrub through a Live Photo and select which frame, the key frame, you want to be the actual still image for the photo.

Better late than never? 

Apple has been notably slow to jump onto the AI bandwagon, but that’s not exactly surprising. The company is known to play the long game and let others iron out the kinks before it makes its move, as we’ve seen with its recent foray into mixed reality with the Apple Vision Pro (this is also why I have hope for a foldable iPhone coming soon).

I’m quite excited for the Keyframer tool if it does come to fruition because it’ll put basic animation tools into the palm of every iPhone user who might not know where to even start with animation, let alone make their photos move.

Overall, the direction Apple seems to be taking with AI tools is a positive one. The Keyframer tool comes right off the back of Apple’s AI-powered image editing tool, which again reinforces a move towards improving the user experience rather than just putting out things that mirror the competition from companies like OpenAI, Microsoft, and Google.

I’m personally glad to see that Apple’s dive into the world of artificial intelligence tools isn’t just another AI chatbot like ChatGPT or Google Gemini, but rather focusing on tools that offer unique new features for iOS and macOS products. While this project is in the very early stages of inception, I’m still pretty hyped about the idea of making funny little clips of my cat being silly or creating moving memories of my friends with just a few word prompts. 

As for when we’ll get our hands on Keyframer, unfortunately there’s no release date in sight just yet – but based on previous feature launches, Apple willingly revealing details at this stage indicates that it’s probably not too far off, and more importantly isn’t likely to get tossed aside. After all, Apple isn’t Google.


DeepMind and Meta staff plan to launch a new AI chatbot that could have the edge over ChatGPT and Bard

Since the explosion in popularity of chatbots built on large language models, like ChatGPT, Google Gemini, and Microsoft Copilot, many smaller companies have tried to wiggle their way into the scene. Reka, a new AI startup, is gearing up to take on artificial intelligence chatbot giants like Gemini (formerly known as Google Bard) and OpenAI’s ChatGPT – and it may have a fighting chance of actually doing so.

The company is spearheaded by Singaporean scientist Yi Tay and is working towards Reka Flash, a multilingual language model trained in over 32 languages. Reka Flash also boasts 21 billion parameters, with the company stating that the model can compete with Google Gemini Pro and OpenAI’s GPT-3.5 across multiple AI benchmarks.

According to TechInAsia, the company has also released a more compact version of the model called Reka Edge, which offers 7 billion parameters and is aimed at specific use cases like running on-device. It’s worth noting that ChatGPT and Google Gemini have significantly more parameters (approximately 175 billion and 137 billion respectively), but those bots have been around for longer, and there are benefits to more ‘compact’ AI models; for example, Google’s Gemini Nano, an AI model designed to run on edge devices like smartphones, uses just 1.8 billion parameters – so Reka Edge has it beat there.

So, who’s Yasa?

The model is available to the public in beta on the official Reka site. I’ve had a go at using it and can confirm that it's got a familiar ChatGPT-esque feel to the user interface and the way the bot responds. 

The bot introduced itself as Yasa, developed by Reka, and gave me an instant rundown of all the things it could do for me. It had the usual AI tasks down, like general knowledge, sharing jokes or stories, and solving problems.

Interestingly, Yasa noted that it can also assist in translation, and listed 28 languages it can swap between. While my understanding of written Hindi is rudimentary, I did ask Yasa to translate some words and phrases from English to Hindi and from Hindi to English. 

I was incredibly impressed not just by the accuracy of the translation, but also by the fact that Yasa broke down its translation to explain how it got there, translating each word in the phrase or sentence individually before giving the complete sentence. The response time for each prompt, no matter how long, was also very quick. Considering that non-English-language prompts have proven limited in the past with other popular AI chatbots, it’s a solid showing – although it’s not the only multilingual bot out there.

Reka translating (Image credit: Future)

Reka AI Barbie (Image credit: Future)

I tried to figure out how up-to-date the bot was with current events and general knowledge, and it seems it must have been trained on information that predates the release of the Barbie movie. I know, a weird litmus test, but when I asked it to give me some facts about the pink-tinted Margot Robbie feature it spoke about it as an ‘upcoming movie’ and gave me the release date of July 28, 2023. So we appear to have the same case as with ChatGPT, whose knowledge was previously limited to world events before 2022.

Of all the ChatGPT alternatives I’ve tried since the AI boom, Reka (or should I say, Yasa) is probably the most immediately impressive. While other AI betas feel clunky, and sometimes like poor man’s knockoffs, Reka holds its own not just with its visually pleasing user interface and easy-to-use setup, but also with its multilingual capabilities and helpful, less robotic personality.


Windows 11 could finally make color management easier, and that’s great news for artists and gamers

Microsoft might be planning to release a new color management panel that’ll make picking the perfect color profile for your PC much easier. The right color settings make games pop out of the display more vividly, and if you’re a digital artist or photographer, a well-tuned color profile could make or break your next masterpiece.

According to VideoCardz, the change was spotted in the latest Windows Insider Preview Build 26052. The Windows Insider Program is a community of Windows enthusiasts and developers who get early access to potential new features and upgrades, and give feedback before those features reach regular Windows 11 users.

The new color management panel showcased in the build has been updated to the modern Windows 11 aesthetic and relocated to the main Settings menu, with easy-to-navigate options and a simpler layout. The old color management menu, which had to be accessed via the Windows Control Panel, has been effectively removed in Build 26052.


Better control, hopefully…

Most people who just use their PC for office work or school projects might never venture to this section of the Settings menu, but this could be great news for photographers, digital artists, video editors, and gamers who rely on getting the most out of their monitors. 

From the side-by-side screenshot comparisons shared on X (formerly Twitter), you can see some new features too: the option to color-calibrate your monitor for specific profiles and enable automatic color balancing for compatible Windows apps. If you don’t want to manually color calibrate, you can either select the best option from the available profiles or create your own so you get the most accurate hues.

While we’re excited about this change, we do have to keep in mind that some features that appear in the Dev channel don’t always make it out to the public, so there’s a chance we might never see this one reach the public build. We do, however, hope to see it come to Windows 11 soon, because it’ll be a convenient way of managing your color preferences and profiles within a menu layout you’re already familiar with.

If you want to give it a go, you’ll have to sign up to join the Windows Insider program first. Once you’ve done that you’ll be able to go straight to the ‘display’ section of your general settings and see the ‘Color Management’ option, where you can play around with different profiles and settings. 
