Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Move over, eye-tracking and handset controls for VR headsets and AR glasses – according to Meta CEO Mark Zuckerberg, the company is “close” to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast, Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started by discussing the Ray-Ban Meta Smart Glasses, which use AI and their camera to answer questions about what you see (though, annoyingly, this is still only available to some lucky users in beta form).

He then went on to discuss “one of the wilder things we’re working on,” a neural interface in the form of a wristband – Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature, unlike the first human subject to be implanted with the tech.

Meta’s EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would allow you to merely think about how you want to move your hand, and that movement would happen in the virtual world without requiring big real-world motions.

Zuckerberg has shown off Meta’s prototype EMG wristband before in a video – though not the headset it works with – but what’s interesting about his podcast statement is that he goes on to say he feels Meta is close to having a “product in the next few years” that people can buy and use.

Understandably he gives a rather vague release date and, unfortunately, there’s no mention of how much something like this would cost – though we’re ready for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility and accessibility in Meta’s AR and VR tech.

The next next-gen XR advancement?

Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via their Look and Ask feature, or respond to a text message without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you’d rather keep private.

The EMG wristband would allow you to type out these messages using subtle hand gestures, so you can maintain a higher level of privacy – though, as the podcast hosts note, this has issues of its own, not least of which is schools having a harder time trying to stop students from cheating in tests. Gone are the days of sneaking in notes; it’s all about secretly bringing AI into your exam.

Then there are utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success, at least for the Ray-Ban Meta Smart Glasses, is that they’re sleek and lightweight – if you glance at them they’re not noticeably different to a regular pair of Ray-Bans.

Adding cameras, sensors, and a chipset for managing hand gestures could compromise this slim design – that is, unless you put some of this functionality and processing power into a separate device, like the wristband.

The Xreal Air 2 Pro’s displays (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they’ll need built-in displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll just have to wait to see what the next Meta smart glasses have in store for us.

Lastly, there’s accessibility. By their very nature, AR and VR are very physical things – you have to physically move your arms around, make hand gestures, and push buttons – which can make them very inaccessible for folks with disabilities that affect mobility and dexterity.

These kinds of brain signal sensors start to address this issue. Rather than having to physically act, someone could simply think about doing so, and the virtual interface would interpret these thoughts accordingly.

Based on demos shown so far, some movement is still required to use Meta’s neural interface, so it’s far from a perfect solution – but it’s a first step toward making this tech more accessible, and we're excited to see where it goes next.

TechRadar – All the latest technology news

Mark Zuckerberg thinks the Meta Quest 3 is better than Vision Pro – and he’s got a point

Mark Zuckerberg has tried the Apple Vision Pro, and he wants you to know that the Meta Quest 3 is “the better product, period”. This is unsurprising given that his company makes the Quest 3, but having gone through all of his arguments he does have a point – in many respects, the Quest 3 is better than Apple’s high-end model.

In his video posted to Instagram, Zuckerberg starts by highlighting the fact that the Quest 3 offers a more impressive interactive software library than the Vision Pro, and right now that is definitely the case. Yes, the Vision Pro has Fruit Ninja, some other spatial apps (as Apple calls them), and plenty of ported-over iPad apps, but nothing on the Vision Pro comes close to matching the quality or immersion levels of Asgard’s Wrath 2, Walkabout Mini Golf, Resident Evil 4 VR, The Light Brigade, or any of the many amazing Quest 3 VR games.

It also lacks fitness apps. I’m currently testing some for a VR fitness experiment (look out for the results in March) and I’ve fallen in love with working out with my Quest 3 in apps like Supernatural. The Vision Pro not only doesn't offer these kinds of experiences, but its design isn’t suited to them either – the hanging cable could get in the way, and the fabric facial interface would get drenched in sweat; a silicone facial interface is a must-have based on my experience.

The only software area where the Vision Pro takes the lead is video. The Quest platform is badly lacking when it comes to offering the best streaming services in VR – only having YouTube and Xbox Cloud Gaming – and it’s unclear if or when this will change. I asked Meta if it has plans to bring more streaming services to Quest, and I was told by a representative that it has “no additional information to share at this time.” 

Zuckerberg also highlights some design issues. The Vision Pro is heavier than the Quest 3, and if you use the cool-looking Solo Knit Band you won’t get the best comfort or support – instead, most Vision Pro testers recommend the Dual-Loop band, which more closely matches the design of the Quest 3’s default band as it has over-the-head support.

You also can’t wear glasses with the Vision Pro – instead, you need to buy expensive inserts. On the Quest 3 you can just extend the headset away from your face using a slider on the facial interface and make room for your specs with no problem.

The Vision Pro being worn with the Dual-Loop band (Image credit: Future)

Then there’s the lack of controllers. On the Vision Pro, unless you’re playing a game that supports a controller, you have to rely solely on hand tracking. I haven’t used the Vision Pro, but every account I’ve read or heard – including Zuckerberg’s – has made it clear that hand tracking isn’t any more reliable on the Vision Pro than it is on the Quest, with the general sentiment being that it works seamlessly 95% of the time – which is exactly my experience on the Quest 3.

Controllers are less immersive, but they do improve precision – making activities like VR typing a lot more reliable without needing a real keyboard. What’s more, considering most VR and MR software out there right now is designed for controllers, software developers have told us it would be a lot easier to port their creations to the Vision Pro if it had them.

Lastly, there’s the value. Every Meta Quest 3 and Apple Vision Pro comparison will bring up price, so we won’t labor the point, but there’s a lot to be said for the fact that the Meta headset is only $499.99 / £479.99 / AU$799.99 rather than $3,499 (the Vision Pro isn’t yet available outside the US). Without a doubt, the Quest 3 gives you way better bang for your buck.

The Vision Pro could be improved if it came with controllers (Image credit: Future)

Vision Pro: not down and out

That said, while Zuckerberg makes some solid arguments, he does gloss over where the Vision Pro takes the lead, and even exaggerates how much better the Quest 3 is in some areas – and these aren’t small details either.

The first is mixed reality. Compared to the Meta Quest Pro, the Vision Pro is leaps and bounds ahead, though reports from people who have tried the Quest 3 suggest the Vision Pro doesn’t offer as much of an improvement over it – and in some ways it’s worse, as Zuckerberg mentions.

To illustrate the Quest 3’s passthrough quality, Zuckerberg reveals that the video of him comparing the two headsets is being recorded using a Quest 3, and it looks pretty good – though, having used the headset, I can tell you this isn’t representative of what passthrough actually looks like. Probably due to how the video is processed, recordings of mixed reality on Quest always look more vibrant and less grainy than experiencing it live.

Based on less biased accounts from people who have used both the Quest 3 and Vision Pro it sounds like the live passthrough feed on Apple’s headset is generally a bit less grainy – though still not perfect – but it does have way worse motion blur when you move your head.

Mixed reality has its pros and cons on both headsets (Image credit: Apple)

Zuckerberg additionally takes aim at the Vision Pro’s displays, pointing out that they seem less bright than the Quest 3’s LCDs and offer a narrower field of view. Both of these points are right, but I feel he hasn’t given enough credit to two important details.

While he does admit the Vision Pro offers a higher resolution, he does so very briefly. The Vision Pro’s dual 3,680 x 3,140-pixel displays will offer a much crisper experience than the Quest 3’s dual 2,064 x 2,208-pixel screens. Considering you use these screens for everything, the advantage of better visuals can’t be overstated – and a higher pixel density should also mean the Vision Pro is more immersive, as you’ll experience less of a screen door effect (where you see the lines between pixels because the display is so close to your eyes).
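
To put those resolution figures in perspective, here’s a quick back-of-the-envelope calculation using only the per-eye numbers quoted above:

```python
# Per-eye pixel counts, from the resolutions quoted above.
vision_pro_px = 3680 * 3140  # Apple Vision Pro, per eye
quest_3_px = 2064 * 2208     # Meta Quest 3, per eye

print(f"Vision Pro: {vision_pro_px / 1e6:.1f} MP per eye")  # ~11.6 MP
print(f"Quest 3:    {quest_3_px / 1e6:.1f} MP per eye")     # ~4.6 MP
print(f"Ratio:      {vision_pro_px / quest_3_px:.1f}x")     # ~2.5x
```

That’s roughly two and a half times as many pixels per eye, which is why the higher pixel density translates into a noticeably reduced screen door effect.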

Zuckerberg also ignores the fact that the Vision Pro’s screens are OLEDs. Yes, this means they’re less vibrant, but the upshot is they offer much better contrast for blacks and dark colors. Better contrast has been shown to improve a user’s immersion in VR, based on experiments from Meta and others, so I wouldn’t be surprised if the next Quest headset also incorporated OLEDs – rumors suggest it will, and I seriously hope it does.

Lastly, there’s eye-tracking, which is something the Quest 3 lacks completely. I don’t think the unavailability of eye-tracking is actually a problem, but that deserves its own article.

This prototype headset showed me how important great contrast is (Image credit: Future)

Regardless of whether you agree with Mark Zuckerberg’s arguments or not, one thing that’s clear from the video is that the Vision Pro has got the Meta CEO fired up.

He ends his video by stating his desire for the Quest 3 and Meta’s open model (as opposed to Apple’s closed-off walled garden, where you can only use the headset how Apple intends) to “win out again” like Windows did in the computing space.

But we’ll have to wait and see how it pans out. As Zuckerberg himself admits, “The future is not yet written,” and only time will tell if Apple, Meta, or some new player in the game (like Samsung with its Samsung XR headset) will come out on top in the long run.

Google Docs will now really let you stamp your mark on your work

Making sure your work gets the respect it deserves will soon be a lot easier in Google Docs thanks to a new privacy tool coming to the service.

Google has announced that users of the word processor, which is part of Google Workspace, can now add background text identifiers such as watermarks to their documents.

This means that Google Docs users can now mark their work in order to protect copyright, show that the information within is confidential, or simply notify readers that it is a draft or work in progress.

Google Docs watermark

In a blog post outlining the new feature, Google notes that text watermarks will repeat on every page of your document, making them useful for indicating file status.

Users can also include an image watermark, such as a company logo or sign, or place other images above or behind text. To find the new feature, which has no admin control, users simply need to go to Insert > Watermark > Text.

The feature works across other platforms too: when working with Microsoft Word documents, text watermarks will be preserved when importing or exporting your files.

(Image credit: Google Workspace)

The tool will be available to all Google Workspace customers, as well as G Suite Basic and Business customers, with the rollout starting in January 2022 and due to take a few weeks.

The news should be a boost to legal and high-end businesses dealing in confidential documents, and comes shortly after another new feature added greater depth to Docs: a formal approval process for high-priority files (such as contracts and legal documents), building upon the existing comment and suggested-edit features.

Google Docs has also recently boosted its citations feature, making the software a more viable choice for students and academics. When adding a citation to an essay or research paper, users will soon be able to search for sources via an in-built database, and then automatically populate the necessary fields (title, publisher, date of publication etc.).

Olympus E-M1 Mark III is a Micro Four Thirds powerhouse with a price tag to match

The Olympus E-M1 Mark III has followed up its recent leak with an official announcement – and it confirms that one of the most powerful Micro Four Thirds cameras we've seen will come with a premium price tag to match.

The E-M1 Mark III, which sits below the flagship E-M1X and inherits many of its features, is aimed at pros and keen amateurs who prize speed, handheld shooting and portability in a system with a wide range of native lenses.

While Four Thirds sensors are smaller than their APS-C and full-frame equivalents, they do allow cameras like the E-M1 Mark III to pack in features that would otherwise be tricky to squeeze into a 500g body. One of those is an in-body image stabilization (IBIS) system that claims to provide 7.5 stops of compensation, allowing handheld shooters to use slower shutter speeds to help preserve image quality.  
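
As a rough guide to what that 7.5-stop claim means in practice: each stop of stabilization doubles the longest shutter speed you can hand-hold, so n stops multiply it by 2^n. A minimal sketch (the 1/200s baseline is a hypothetical, unstabilized starting point, not a figure from Olympus):

```python
# Each stop of stabilization doubles the longest hand-holdable
# exposure, so n stops multiply it by 2**n.
def slowest_handheld_shutter(baseline_s: float, stops: float) -> float:
    """Longest exposure (in seconds) you can plausibly hand-hold,
    given an unstabilized baseline shutter speed and n stops of IBIS."""
    return baseline_s * (2 ** stops)

# Hypothetical 1/200s baseline with the claimed 7.5 stops:
print(f"{slowest_handheld_shutter(1 / 200, 7.5):.2f}s")  # ~0.91s
```

In other words, exposures that would normally demand a tripod become plausible handheld – which is the whole appeal of a system like this for landscape and low-light shooters.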

The 20.4MP Live MOS sensor is sadly the same as its predecessor’s, but it's paired with a new TruePic IX processor that powers some impressive AF and software skills, which we first saw in the E-M1X. These include the 50MP Handheld High Res Shot, which helps landscape shooters get around the 20MP limitation of the sensor, and Face and Eye Priority autofocus, which stems from the 121-point Phase Detection AF.

Other improvements on the Olympus E-M1 Mark II include a new 'multi selector' (otherwise known as a joystick) for quickly selecting AF points, and 'Live ND' for seeing the effects of the in-camera neutral density filter on your snaps in the viewfinder.

Pro price tags

Aside from these new features, the E-M1 Mark III shares a lot of similarities with its predecessor, including a weather-sealed magnesium alloy body and its SSWF (Super Sonic Wave Filter) tech, which should keep the sensor free of pesky dust particles.

The camera's 4K video has also been given a minor bump with the inclusion of the flat OM-Log400 profile, which lets more advanced shooters grade footage in post-production. 

So does the Olympus E-M1 Mark III have any downsides? While it's shaping up to be a fine all-rounder, the main one is likely to be price. It'll be available to buy body-only from late February for $1,799.99 / £1,599.99 (around AU$3,086), or in various kit lens combinations.

These kit bundles include one with the M.Zuiko Digital ED 12-40mm f/2.8 Pro lens for $2,499.99 / £2,199.99 (around AU$4,240), or another with the M.Zuiko Digital ED 12-100mm f/4.0 IS Pro lens for $2,899.99 / £2,499.99 (around AU$4,820).

These prices are quite hefty when you can pick up a full-frame Sony A7 III or Nikon Z6 for around the same asking price. On the other hand, the E-M1 Mark III is targeted at different photographers, and Micro Four Thirds lenses are considerably smaller and more affordable than their full-frame equivalents.

If you don't need all of the E-M1 Mark III's new features, the E-M1 Mark II will remain on sale for a body-only price of £1,299.99 (around $ 1,680 / AU$ 2,510) or £1,999.99 (around $ 2580 / AU$ 3860).
