Meta Quest’s software is coming to new Asus ROG and Lenovo headsets

It’s a big day for Quest users. Meta has announced it’s giving third-party companies open access to its headsets' operating system to expand the technology. The tech giant wants partners to take the OS in new directions and accomplish two main goals: giving consumers more choice in the virtual reality gaming market and giving developers the chance to reach a wider audience.

Among this first batch of partners, several are already working on their own devices. First off, ASUS’ ROG (Republic of Gamers) is said to be developing “an all-new performance gaming headset.” Lenovo is on the list too, and it’s seemingly working on three individual models: one for productivity, one for education, and one for entertainment.

This past December, Xbox Cloud Gaming landed on Quest headsets as a beta, bringing a wave of new games to the hardware. Now Microsoft is teaming up with Meta again “to create a limited-edition Meta Quest [headset], inspired by Xbox.”

New philosophy

Meta is also making several name changes befitting its tech’s transformation.

The operating system will now be known as Horizon OS. The company’s Meta Quest Store will be renamed the Horizon Store, and the mobile app will eventually be rebranded as the Horizon app. To aid with the transition, third-party devs are set to receive a spatial app framework to bring their software over to Horizon OS or help them create a new product.

With Horizon at the core of this ecosystem, Meta aims to introduce social features that dev teams “can integrate… into their [software]”. The company also aims to bridge multiple platforms, creating a network that exists “across mixed reality, mobile, and desktop devices.” Users will be able to carry their avatars, friend groups, and more into other “virtual spaces”.

This design philosophy was echoed by Meta CEO Mark Zuckerberg. In a recent Instagram video, Zuckerberg stated he wants Horizon OS to be an open playground where developers can come in and freely create software rather than a walled garden similar to iOS.  

Breaking down barriers

It’ll be a while before any of these headsets arrive; Zuckerberg said in his post that “it’s probably going to take a couple of years for these” products to launch. In the meantime, Meta is “removing the barriers” between its App Lab and its main digital storefront, allowing devs to publish software on the platform as long as they meet “basic technical and content” guidelines. It’s unclear whether there’ll be any further restrictions beyond the requirement that third-party companies use Snapdragon processors.

There’s no word yet on whether other tech brands will join in. Zuckerberg says he hopes to see the Horizon Store offer plenty of software options from Steam, Xbox Cloud Gaming, and even apps from the Google Play Store – “if they’re up for it.” It seems Google isn’t on board with Horizon OS yet.

Rumors have been circulating for several months claiming Google and Samsung are working together on an XR/VR headset. Perhaps the two are ignoring Meta’s calls so they can focus on their “so-called Apple Vision Pro rival”.

Be sure to check out TechRadar's recommendations for the best VR headsets for 2024.


Nvidia explains how its ACE software will bring ChatGPT-like AI to non-player characters in games

Earlier this year at Computex 2023, Nvidia revealed a new technology during its keynote presentation: Nvidia ACE, a ‘custom AI model foundry’ that promised to inject chatbot-esque intelligence into non-player characters in games.

Now, Nvidia has more to say about ACE: namely, NVIDIA NeMo SteerLM, a new technique that will make it easier than ever before for game developers to make characters that act and sound more realistic and organic.

We’ve heard about NeMo before, back when Nvidia revealed its ‘NeMo Guardrails’ software for making sure that large language model (LLM) chatbots such as the ever-present ChatGPT are more “accurate, appropriate, on topic and secure”. NeMo SteerLM acts in a similar but more creative way, allowing game devs to ‘steer’ AI behavior in certain directions with simple sliders; for example, making a character more humorous, or more aggressive and rude.
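To make the slider idea more concrete, here’s a rough, generic sketch of attribute-conditioned prompting – the broad family of techniques that SteerLM-style steering belongs to. This is not Nvidia’s actual NeMo API: the attribute names, the 0-9 scale, and the prompt format are all illustrative placeholders, and a real pipeline would send the resulting prompt to a model trained to respect those attributes.

```python
# Generic sketch of attribute-conditioned steering (not Nvidia's NeMo API):
# the "slider" values are encoded into a structured prompt that an
# attribute-aware model has been trained to follow.
def build_steered_prompt(npc_name: str, player_line: str, sliders: dict) -> str:
    # sliders maps an attribute name to a 0-9 value; the names are illustrative
    attributes = ", ".join(f"{name}:{value}" for name, value in sliders.items())
    return (
        f"<attributes>{attributes}</attributes>\n"
        f"You are {npc_name}, a non-player character in a game.\n"
        f"Player: {player_line}\n"
        f"{npc_name}:"
    )

prompt = build_steered_prompt(
    "Jin",
    "Have you seen anything strange around the ramen shop lately?",
    {"humor": 7, "aggression": 1, "formality": 3},
)
print(prompt)  # in practice, this prompt would be sent to the attribute-tuned LLM
```

The point is that the sliders end up as structured values the model has learned to respect, rather than free-form natural-language instructions bolted onto every line of dialogue.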

I was a bit critical of NeMo Guardrails back when it was originally unveiled, since it raises the question of exactly who programs acceptable behaviors into AI models. In publicly accessible real-world chatbot tools, programmer bias could lead to AI-generated responses that offend some while appearing innocuous to others. But for fictional characters, I’m willing to believe that NeMo has huge potential. Imagine a gameworld where every character can truly react dynamically and organically to the player’s words and actions – the possibilities are endless!

The problems with LLMs in games

Of course, it’s not quite as simple as that. While SteerLM does promise to make the process of implementing AI-powered NPCs a lot more straightforward, there are still issues surrounding the use of LLMs in games in general. Early access title Vaudeville shows that AI-driven narrative games have a long way to go, and that’s not even the whole picture.

LLM chatbots such as ChatGPT and Bing AI have proven in the past that they’re not infallible when it comes to remaining on-topic and appropriate. Indeed, when I embarked on a quest to break ChatGPT, I was able to make it say things my editor sadly informed me were not fit for publication. While tools such as Nvidia’s Guardrails can help, they’re not perfect – and as AI models continue to evolve and advance, it may become harder than ever to keep them playing nice.

Even beyond the potential dangers of introducing actual AI models into games – let alone ones with SteerLM’s ‘toxicity’ slider, which on paper sounds like a lawsuit waiting to happen – a major stumbling block to implementing tools like this could actually be hardware-related.

Nvidia’s Computex demo of ‘Jin the ramen shop owner’ was technologically impressive but raises a lot of questions about AI in games. (Image credit: Nvidia)

If a game uses local hardware acceleration to power its SteerLM-enhanced NPCs, performance will depend on how capable the player’s PC is at running AI workloads. This introduces an entirely new headache for both game devs and gamers: inconsistency in game quality that hinges not on anything the developers can control, but on the hardware used by the player.

According to the Steam Hardware Survey, the majority of PC gamers are still using RTX 2000-series or older GPUs. Hell, the current top spot is occupied by the budget GTX 1650, a graphics card that lacks the Tensor cores RTX GPUs use to carry out high-end machine-learning processes. The 1650 isn’t incapable of running AI-related tasks, but it’s never going to keep up with the likes of the mighty RTX 4090.
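In practice, that probably means games will ship with some kind of hardware gate before switching on locally accelerated LLM NPCs. Here’s a rough sketch of what such a check might look like using PyTorch’s device queries; the compute-capability threshold, the 8GB VRAM floor, and the fallback behavior are illustrative assumptions rather than anything Nvidia or any studio has announced.

```python
# Rough sketch of a hardware gate a game might apply before enabling
# locally accelerated LLM NPCs; thresholds and fallback are illustrative.
import torch

def local_llm_npcs_enabled(min_compute=(7, 0), min_vram_gb=8.0) -> bool:
    """Return True only if the player's GPU looks capable of running an LLM locally."""
    if not torch.cuda.is_available():
        return False
    props = torch.cuda.get_device_properties(0)
    capable = (props.major, props.minor) >= min_compute
    enough_vram = props.total_memory / 1e9 >= min_vram_gb
    # Note: compute capability alone doesn't prove Tensor cores exist -- the
    # GTX 16-series reports 7.5 but has none -- so a shipping game would need
    # a finer-grained device query or a quick benchmark on top of this.
    return capable and enough_vram

if local_llm_npcs_enabled():
    print("Enabling dynamic, LLM-driven NPC dialogue")
else:
    print("Falling back to scripted dialogue trees")
```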

I’m picturing a horrible future for PC gaming, where your graphics card determines not just the visual fidelity of the games you play, but the quality of the game itself. For those lucky enough to own, say, an RTX 5000 GPU, incredibly lifelike NPC dialogue and behavior could be at your fingertips. Smarter enemies, more helpful companions, dynamic and compelling villains. For the rest of us, get used to dumb and dumber character AI as game devs begin to rely more heavily on LLM-managed NPCs.

Perhaps this will never happen. I certainly hope so, anyway. There’s also the possibility of tools like SteerLM being implemented in a way that doesn’t require local hardware acceleration; that would be great! Gamers should never have to shell out for the very best graphics cards just to get the full experience from a game – but I’ll be honest, my trust in the industry has been sufficiently battered over the last few years that I’m braced for the worst.


Huge visual enhancements could be coming to your favorite Oculus Quest 2 software

Your favorite VR games and apps’ visuals could soon be sharper than ever as Meta is unlocking a new resolution-boosting tool for developers.

Developed in collaboration with Qualcomm – the manufacturer of the Snapdragon chips used by Meta’s headsets – the Quest Super Resolution upscaling tool promises to boost image quality and deliver a smoother experience. So expect the best VR games and apps to have sharper images and run at higher framerates on your Oculus Quest 2 and Meta Quest Pro than they did before the upgrade.

The Quest Super Resolution upgrade follows a major boost to the CPU and GPU performance of Meta’s headsets that arrived in June 2023. Both the Quest 2 and Quest Pro’s CPUs saw a 26% speed boost, while their GPUs got performance boosts of 19% and 11% respectively.

Meta was able to achieve these upgrades via a software patch rather than new hardware because it allowed the existing components to run at higher clock speeds. To avoid the systems getting too hot while you’re wearing them, the Quest headsets’ components were underclocked – that is, their maximum performance was held back compared to what the chips could do running at full speed. June’s update removed some of these limitations, with Meta likely deciding it had been a little too conservative with its underclocking.

Thanks to Quest Super Resolution, developers have a new way to utilize the Quest system’s improved GPU capabilities. But we’ll have to wait for them to implement Super Resolution into their software before we see any improvements in the VR software we love.

How does Meta Quest Super Resolution work? 

Meta’s blog post gets a little jargon-heavy in its “What is Meta Quest Super Resolution?” section – calling it a “single-pass spatial upscaling and sharpening technique.” What you need to know is that upscaling is a way to get better visual quality out of your hardware without sacrificing performance.

Quest Super Resolution in action (Image credit: Meta)

In general, upscaling works by having a GPU render an image at a lower resolution (say, 1080p, or full HD) and then using tricks to scale it up to a higher one (like 4K, or even 8K). While an upscaled image typically won’t look as crisp as one rendered natively at the target resolution, it’s a lot less taxing for a GPU to create, so upscaled software can usually run at a higher framerate.
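To illustrate the basic idea – just the idea, not Meta’s actual algorithm – here’s a toy example of spatial upscaling followed by sharpening using the Pillow imaging library (version 9.1 or later for the Resampling enum). The filenames, target resolution, and sharpening settings are arbitrary placeholders.

```python
# Toy illustration of spatial upscaling plus sharpening (not Meta's algorithm):
# render at a lower resolution, interpolate up, then sharpen the result to
# recover some of the edge definition lost in the resize.
from PIL import Image, ImageFilter

low_res = Image.open("render_1080p.png")        # hypothetical 1920x1080 render

# Step 1: spatial upscale to the display resolution (here, 4K)
upscaled = low_res.resize((3840, 2160), resample=Image.Resampling.BILINEAR)

# Step 2: sharpen to counter the softness introduced by interpolation
sharpened = upscaled.filter(
    ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3)
)

sharpened.save("render_4k_upscaled.png")
```

Quest Super Resolution folds both steps into a single GPU pass, per Meta’s own description, which is what keeps the approach practical on a standalone headset.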

Higher, smoother framerates are a must-have for VR apps: if the visuals are choppy, or run below a minimum of 90fps, wearing a headset can make you feel motion sick.

Meta Quest Super Resolution’s upscaling algorithm has a few special tricks up its sleeve, too. The highest setting can apparently greatly reduce artifacts caused by upscaled objects blurring into one another at their edges. You can see this in the image above: the Super Resolution image looks the crispest, with well-defined edges on the objects in the complex scene.

Want to learn more about upscaling? Check out our Nvidia DLSS vs AMD FSR piece to see how these two technologies stack up against one another.


Our favorite free video editing software gets unexpected performance boost from new macOS Sonoma

One of the big announcements at Apple’s WWDC 2023 was macOS Sonoma (we looked it up; it means “Valley of the Moon”). 

Apple claims the new operating system has a sharp focus on productivity and creativity. It says “the Mac experience is better than ever.” To prove it, the company revealed screensavers, iPhone widgets running on Macs, a gaming mode, and fresh video conferencing features. 

But the new macOS has another surprising feature for users of our pick for best free video editing software.  

The final cut 

Beyond WWDC’s bombshell reveal – yes, Snoopy is an Apple fan now – the event served up more than enough meat to keep users happy. There’s a new 15-inch MacBook Air on the way, said to be the “world’s thinnest.” The watchOS 10 beta countdown has started. And the Vision Pro is dividing opinion. Is the VR headset the future, or will it lose you friends?

The reveal of the new Mac operating system, meanwhile, feels quieter somehow. Muted. Perhaps new PDF editor functionalities and a host of “significant” updates to the Safari browser aren’t as eye-catching as a pair of futuristic AR/VR ski goggles.  

However, Craig Federighi, Apple’s senior vice president of Software Engineering, said, “macOS is the heart of the Mac, and with Sonoma, we’re making it even more delightful and productive to use.” 

What he didn’t say, but the company later revealed, is that Sonoma adds an extra bonus for video editors. 

Designed for remote and hybrid in-studio workflows, the operating system brings a high-performance mode to the Screen Sharing app. Taking advantage of the media engine in Apple silicon, users are promised responsive remote access with low-latency audio, high frame rates, and support for up to two virtual displays. 

According to Apple, “This mode empowers pros to securely access their content creation workflows from anywhere – whether editing in Final Cut Pro or DaVinci Resolve, or animating complex 3D assets in Maya.” It also enables remote color workflows that previously demanded the best video editing Macs and video editing PCs.

It seems Final Cut Pro is getting a lot of attention lately. May saw the launch of Final Cut Pro for iPad – how did it take so long? – and now comes better support in the operating system. What’s next? Perhaps that open letter from film and TV professionals pleading for improved support really did focus minds at Apple Park.


What is xrOS? The Apple VR headset’s rumored software explained

The Apple VR headset is getting close to its rumored arrival at WWDC 2023 on June 5 – and the mixed-reality wearable is expected to launch alongside an exciting new operating system, likely called xrOS.

What is xrOS? We may now be approaching iOS 17, iPadOS 16 and macOS 13 Ventura on Apple's other tech, but the Apple VR headset – rumored to be called the Apple Reality One – is expected to debut the first version of a new operating system that'll likely get regular updates just like its equivalents on iPhone, iPad and Mac.

The latest leaks suggest that Apple has settled on the xrOS name for its AR/VR headset, but a lot of questions remain. For example, what new things might xrOS allow developers (and us) to do in mixed reality compared to the likes of iOS? And will xrOS run ports of existing Apple apps like Freeform?

Here's everything we know so far about xrOS and the kinds of things it could allow Apple's mixed-reality headset to do in both augmented and virtual reality.

xrOS release date

It looks likely that Apple will launch its new xrOS operating system, alongside its new AR/VR headset, at WWDC 2023 on June 5. If you're looking to tune in, the event's keynote is scheduled to kick off at 10am PT / 1pm ET / 6pm BST (or 3am AEST on June 6).

This doesn't necessarily mean that a final version of xrOS will be released on that day. A likely scenario is that Apple will launch an xrOS developer kit to allow software makers to develop apps and experiences for the new headset. 


While not a typical Apple approach, this is something it has done previously for the Apple TV and other products. A full version of xrOS 1.0 could then follow when the headset hits shelves in late 2023.

The software's name now at least looks set in stone. As spotted by Parker Ortolani on Twitter on May 16, Apple trademarked the 'xrOS' name in its traditional 'SF Pro' typeface in New Zealand, via a shell company. 

We'd previously seen reports from Bloomberg  that 'xrOS' would be the name for Apple's mixed-reality operating system, but the timing of this discovery (and the font used) bolster the rumors that it'll be revealed at WWDC 2023.

Apple Glasses (Image credit: Future)

A report from Apple leaker Mark Gurman on December 1, 2022, suggested that Apple had “recently changed the name of the operating system to ‘xrOS’ from ‘realityOS’,” and that the name stands for “extended reality”. This term covers both augmented reality (which overlays information on the real world) and virtual reality, a more sealed experience that we're familiar with on the likes of the Meta Quest 2.

While xrOS is expected to have an iOS-like familiarity – with apps, widgets and a homescreen – the fact that the Apple AR/VR headset will apparently run both AR and VR experiences, and also use gesture inputs, explains why a new operating system has been created and will likely be previewed for developers at WWDC.

What is xrOS?

Apple's xrOS platform could take advantage of the AR/VR headset's unique hardware, which includes an array of chips, cameras and sensors. It's different from ARKit, the software that lets your iPhone or iPad run AR apps. Apple's xrOS is also expected to lean heavily on the design language seen on the iPhone, in order to help fans feel at home.

According to Bloomberg's Gurman, xrOS “will have many of the same features as an iPhone and iPad but in a 3D environment”. This means we can expect an iOS-like interface, complete with re-arrangeable apps, customizable widgets and a homescreen. Apple is apparently also creating an App Store for the headset.


Stock apps on the AR/VR headset will apparently include Apple's Safari, Photos, Mail, Messages and Calendar apps, plus Apple TV Plus, Apple Music and Podcasts. App developers will also be able to take advantage of its health-tracking potential.

Gurman says that the headset experience will feel familiar to Apple fans – when you put it on, he claims that “the main interface will be nearly identical to that of the iPhone and iPad, featuring a home screen with a grid of icons that can be reorganized”. 

But how will you type when wearing the Apple Reality Pro (as it's rumored to be called)? After all, there probably won't be any controllers.

The Sightful Spacetop (above) gives us a glimpse of how the Apple AR/VR headset could work as a virtual Mac display. (Image credit: Sightful)

Instead, you'll apparently be able to type using a keyboard on an iPhone, Mac or iPad. There's also the slightly less appealing prospect of using the Siri voice assistant. Apple is rumored to be creating a system that lets you type in mid-air, but Gurman claims that this feature “is unlikely to be ready for the initial launch”.

It's possible that you'll be able to connect the headset to a Mac, with the headset serving as the Mac's display. We've recently seen a glimpse of how this might work with the Spacetop (above), a laptop that connects to some NReal AR glasses to give you a massive 100-inch virtual display.

What apps will run on xrOS?

We've already mentioned that Apple's AR/VR headset will likely run some optimized versions of existing stock apps, including Safari, Photos, Mail, Messages, Contacts, Reminders, Maps and Calendar. 

But given that those apps aren't exactly crying out for a reinvention in AR or VR, they're likely to be sideshows to some of the more exciting offerings from both Apple and third-party developers. 

So what might those be? Here are some of the most interesting possibilities, based on the latest rumors and what we've seen on the likes of the Meta Quest Pro.

1. Apple Fitness Plus

Apps like Litesport (above) give us a glimpse of the AR fitness experiences that could arrive on Apple’s headset. (Image credit: Litesport)

Assuming the Apple AR/VR headset is light and practical enough for workouts – which is something we can't say for the Apple AirPods Max headphones – it definitely has some AR fitness potential.

According to a report from Bloomberg's Mark Gurman on April 18, Apple is planning to tap that potential with “a version of its Fitness+ service for the headset, which will let users exercise while watching an instructor in VR”.

Of course, VR fitness experiences are nothing new, and we've certainly enjoyed some of the best Oculus Quest fitness games. An added AR component could make them even more powerful and motivating, with targets added to your real-world view.

2. Apple Freeform

The Freeform app on an iPad (Image credit: Apple)

We called Apple's Freeform, which gives you a blank canvas to brainstorm ideas with others, “one of its best software releases in years”. And it could be taken to the next level with a version of AR or VR.

Sure enough, Bloomberg's aforementioned report claims that “Apple is developing a version of its Freeform collaboration app for the headset”, which it apparently “sees as a major selling point for the product”.

Okay, work-themed AR/VR experiences might not sound thrilling, and we certainly had misgivings after working for a whole week in VR with the Meta Quest Pro. But mixed-reality whiteboards also sound potentially fun, particularly if we get to play around with them in work time.

3. Apple TV Plus

A basketball team scoring in a NextVR stream (Image credit: NextVR)

Because Apple's headset will have a VR flipside to its AR mode, it has huge potential for letting us watch TV and video on giant virtual screens, or in entirely new ways. This means that Apple TV Plus will also likely be pre-installed in xrOS.  

Another claim from that Bloomberg report on April 18 was that “one selling point for the headset will be viewing sports in an immersive way”. This makes sense, given Apple already has deals for Major League Baseball and Major League Soccer on Apple TV Plus.

And while they're only rumors, Apple has also considered bidding for Premier League soccer rights in the UK. Well, it'd be cheaper than a season ticket for Manchester United.

4. FaceTime

Joining a call through FaceTime links in macOS 12 Monterey (Image credit: Apple)

While we haven't been blown away by our experiences with VR meetings in Horizon Workrooms on the Meta Quest, the Apple mixed-reality headset will apparently deliver a next-gen version of FaceTime – and the Reality Pro's hardware could take the whole experience up a notch,

With an earlier report from The Information suggesting that Apple's headset will have at least 12 cameras (possibly 14) to track your eyes, face, hands and body, it should do a decent job of creating a 3D version of you in virtual meeting rooms.

We still haven't really seen a major real-world benefit to VR video meets, even if you can do them from a virtual beach. But we're looking forward to trying it out, while crossing our virtual fingers that it works more consistently than today's non-VR FaceTime.

5. Adobe Substance 3D Modeler 

Adobe has already released some compelling demos, plus some beta software called Substance 3D Modeler (above), showing the potential of its creative apps in VR headsets. Will that software's list of compatible headsets soon include the Apple Reality Pro? It certainly seems possible.

The software effectively lets you design 3D objects using virtual clay in a VR playground. Quite how this would work with Apple's headset on xrOS isn't clear, given it's rumored to lack any kind of physical controllers. 

These kinds of design tools feel like a shoo-in for Apple's headset, given many of its users are already happy to shell out thousands on high-end Macs and MacBooks to use that kind of software in a 2D environment.


Our favorite free 3D modeling software gets free AI add-on

Free 3D modeling software Blender now supports AI art generator Stable Diffusion, courtesy of an equally free Stability AI add-on. 

Dubbed Stability for Blender, the text-to-image generator lets users “add AI post-processing effects to renders,” Stability AI has revealed. Working inside the 3D software, they can experiment on scenes without endless remodeling, generating textures, animations, and images through text prompts and the tool’s style presets. 

By integrating Stable Diffusion within Blender, the firm hopes to streamline the design process and make the tools more accessible without users having to invest in dedicated hardware like high-end GPUs and graphic design laptops.

But the AI tool may also offer an unexpected bonus for creatives. Artists, animators, and modelers who download the add-on will be able to use the platform’s image editor to create textures and images.

According to the company, users can go further, keyframing all properties and creating animations that “use Blender's built-in animation system to automate properties in Stable Diffusion.” When the artist enters a text prompt, the tool creates an image using their existing render as its starting point.
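Under the hood, that workflow amounts to an image-to-image request with the render as the init image. Here’s a rough, standalone sketch using Python’s requests library rather than the add-on itself; the endpoint path, engine ID, and form-field names are assumptions based on Stability AI’s v1 REST API at the time of writing, so check the current API reference before relying on them.

```python
# Hedged sketch: send a rendered frame to Stability's image-to-image endpoint
# as the init image. Endpoint and field names are assumptions -- verify them
# against the current Stability AI API reference.
import requests

API_KEY = "YOUR_STABILITY_API_KEY"              # placeholder
ENGINE = "stable-diffusion-xl-1024-v1-0"        # illustrative engine ID
URL = f"https://api.stability.ai/v1/generation/{ENGINE}/image-to-image"

with open("blender_render.png", "rb") as render:    # hypothetical Blender render
    response = requests.post(
        URL,
        headers={"Authorization": f"Bearer {API_KEY}", "Accept": "image/png"},
        files={"init_image": render},
        data={
            "text_prompts[0][text]": "weathered bronze statue, overcast lighting",
            "image_strength": 0.35,   # how much of the original render to keep
            "cfg_scale": 7,
            "samples": 1,
        },
    )

response.raise_for_status()
with open("render_post_processed.png", "wb") as out:
    out.write(response.content)
```

Inside Blender, the add-on presumably wires the same kind of call into the image editor and the animation system, so keyframed properties simply change the values sent for each frame.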

A 3D model made in Blender (Image credit: Stability AI / Blender)

In doing so, Stability for Blender could potentially swerve one of the biggest challenges facing content creators, content distributors, and AI developers: copyright infringement.

With everyone from artists to Getty Images suing AI platforms for illegal use of copyrighted materials, employing rendered frames as a starting point for the diffusion process may yet prove a (somewhat) definitive answer to the question “Who owns AI art?” 

The launch comes just months after rival AI firm OpenAI revealed its own 3D model builder. Its Point-E solution “produces 3D models in only 1-2 minutes on a single GPU”, the developer said, while admitting it currently “falls short of the state-of-the-art in terms of sample quality”.

Users with a Stability AI API key can grab the Stability for Blender add-on now.


The love for open source software is showing no signs of slowing down

The love for open source software is spread across the whole technology spectrum, according to a new report looking at the state of software development and the tools needed to do it.

The 2022 State of Open Source Report, conducted by OpenLogic, surveyed 2,660 professionals at organizations that use open source tools.

If you're a software developer or work in an adjacent industry, this is probably no surprise: open source tools are the glue that holds so many things together, built by a community of selfless individuals working towards a bigger goal.

Open source love

The report asked respondents a series of questions to gauge their interest in (and love for) open source, covering a wide variety of roles and companies (see below for more on the specific methodology).

Most respondents use an open source programming language or framework, closely followed by databases, OSes, Git repos, frameworks for AI/ML/DL, and the cloud. 

2022 open source report (Image credit: OpenLogic)

When it comes to reservations, respondents highlighted a lack of skills. But, perhaps most interestingly, a full 27% said they had no reservations at all.

2022 open source report (Image credit: OpenLogic)

When it came to reasons for using open source, the answers were clear: access to innovations and the latest technologies; no license costs, meaning an overall cost reduction; modernizing the technology stack; plenty of options for similar technologies; and constant releases and patches.

2022 open source report (Image credit: OpenLogic)

Methodology 

The largest share of respondents (38%) worked at technology companies, but lots of other sectors were represented: consulting, banking and finance, transport, telecoms, education, healthcare, the public sector, and so on. 39% of companies had between 100 and 1,000 employees, 32% had under 100 employees, and 28% had over 1,000 employees.

In terms of regions, North America dominated, representing 52.6% of respondents, followed by Asia Pacific (12.4%), UK and Europe (10.9%), Asia (7.7%), the Middle East (6.6%), Latin America (5.2%), Africa (4.2%), and Oceania (0.32%).

Full Stack Developers made up the largest group of respondents at 21.8%, followed by Back End (18.5%), Front End (16%), Engineering (15.7%), Project Management (14.4%), Architect (14.4%), DevOps (12.6%), and so on.


Google wants secure open-source software to be the future

After attending the recent White House Open Source Software Security Summit, Google is now calling for a public-private partnership to not only fund but also staff essential open-source projects.

In a new blog post, Kent Walker, president of global affairs and chief legal officer at both Google and Alphabet, laid out the search giant's plans to better secure the open source software ecosystem.

For too long, businesses and governments have taken comfort in the assumption that open source software is generally secure due to its transparent nature. While many believe that more eyes on the code help detect and resolve problems, some open source projects have only a handful of eyes on them, while others have none at all.

To its credit, Google has been working to raise awareness of the state of open source security and the company has invested millions in developing frameworks and new protective tools. However, the Log4j vulnerability and others before it have shown that more work is needed across the ecosystem to develop new models to maintain and secure open source software.

Public-private partnership 

In his blog post, Walker proposes creating a new public-private partnership to identify a list of critical open source projects, helping to prioritize and allocate resources to ensure their security.

In the long term, though, new ways of identifying the open source software and components that may pose systemic risk need to be implemented, so that the level of security required can be anticipated and the appropriate resources provided.

At the same time, security, maintenance and testing baselines need to be established across both the public and private sectors. This will help ensure that national infrastructure and other important systems can continue to rely on open source projects. These standards should also be developed through a collaborative process, according to Walker, with an “emphasis on frequent updates, continuous testing and verified integrity”. Fortunately, the software community has already started this work, with organizations like the OpenSSF working across the industry to create these standards.

Now that Google has weighed in on the issue of open source security, expect other tech giants like Microsoft and Apple to propose their own ideas regarding the matter.

We've also rounded up the best open source software and the best business laptops
