Meta Quest 3 Lite leak suggests it’ll pack the Quest 3’s brain into the Quest 2’s body

Meta may still be keeping schtum about the Meta Quest 3 Lite (or the Quest 3S, as some rumors call it), but that hasn’t stopped leaks from seeping out into the public sphere. The latest info dump seemingly tells us everything about the budget-friendly hardware’s technical specifications.

These latest details come via @Lunayian on Twitter, who claims to “have seen multiple devkits and spoken to several people familiar with the device.” They also include an infographic outlining the details they “feel comfortable sharing.”


In many ways this Meta Quest 3 alternative shares a lot with the Quest 3 itself. Chiefly, it boasts the same Snapdragon XR2 Gen 2 chipset from Qualcomm, the same tracking-ring-free controllers, and the same two 4MP RGB passthrough cameras for full-color mixed reality.

But you would notice some downgrades borrowed from the Quest 2. These include the screen resolution, which is just 1,832 x 1,920 pixels per eye rather than the Quest 3’s 2,064 x 2,208 pixels; a bulky Fresnel lens system instead of the Quest 3’s slimmer pancake lenses; and, rather than the Quest 3’s gradual IPD (interpupillary distance) adjustment, a return to the Quest 2’s three set positions.

Basically, this leak suggests the Quest 3 Lite has the Quest 3’s brain, and the Quest 2’s body.


The Quest 2’s bulk could make a comeback (Image credit: Shutterstock / agencies)

One key detail we’re still missing is the price. 

According to previous leaks, the Quest 3 Lite will be cheaper than the Meta Quest 3 – something supported by the specs leaked here – but it’s unclear exactly how much it will cost.

Adopting the Oculus Quest 2’s launch price of $299 / £299 / AU$479 seems most likely, but given the Quest 3 Lite offers most of the Quest 3’s upgrades, we wouldn’t be shocked if it landed somewhere around $399 / £399 / AU$639 – between the Quest 2 and Quest 3 launch prices (the Quest 3 costs $499 / £479 / AU$799).

One thing we can say with some confidence is that the Quest 2’s current $199.99 / £199.99 / AU$359.99 price is almost certainly far too low for this rumored model – so if you’re after a super-cheap VR headset, the Quest 2 might be your best bet while it’s available. Although, given we’re starting to see more and more Quest 3 exclusives, it might not be the best long-term buy.

Wait before you buy a Quest 2

As we always recommend, you should take this rumor with a pinch of salt. Until Meta announces the Quest 3 Lite, Quest 3S, or whatever it chooses to call it, we don’t know when or if this budget-friendly VR headset will launch.

But it seems very likely that something is on the way – and I have a feeling we might see it soon, as Meta usually hosts a June gaming showcase, which could be the perfect place to announce this new device.

If you’re looking to buy one of the best VR headsets, I'd recommend waiting – unless you’re dead set on getting a Meta Quest 3. That’s doubly true if the headset you have your sights set on is the Quest 2, as this Lite model looks set to beat it in the most important ways and hopefully won’t break the bank either.


Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Move over, eye tracking and handheld controllers for VR headsets and AR glasses: according to Meta CEO Mark Zuckerberg, the company is “close” to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast, Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started by discussing the Ray-Ban Meta Smart Glasses, which use AI and their camera to answer questions about what you see (though, annoyingly, this is still only available to some lucky users in beta form).

He then went on to discuss “one of the wilder things we’re working on”: a neural interface in the form of a wristband. Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature – unlike the first human subject to be implanted with that tech.

Meta’s EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would allow you to merely think about how you want to move your hand, and that movement would happen in the virtual world without requiring big real-world motions.
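Meta hasn’t detailed how the wristband’s software decodes those signals, but the general recipe in surface-EMG research is to slice the electrode readings into short windows, extract simple features, and classify each window as a gesture. Here’s a minimal, hypothetical sketch of that idea in Python – the sample rate, window size, features and classifier are all illustrative assumptions, not Meta’s implementation:

```python
# Hypothetical sketch of windowed EMG gesture decoding - not Meta's code.
import numpy as np

SAMPLE_RATE = 1000   # assumed samples/sec per electrode channel
WINDOW = 200         # assumed 200 ms analysis window

def features(window: np.ndarray) -> np.ndarray:
    """Classic surface-EMG features per channel: mean absolute value
    and zero-crossing count, computed on a (samples, channels) array."""
    mav = np.abs(window).mean(axis=0)
    zero_crossings = (np.diff(np.sign(window), axis=0) != 0).sum(axis=0)
    return np.concatenate([mav, zero_crossings])

class GestureDecoder:
    """Toy nearest-centroid classifier over feature vectors."""
    def __init__(self):
        self.centroids = {}  # gesture name -> mean feature vector

    def train(self, gesture: str, windows: list[np.ndarray]):
        feats = np.stack([features(w) for w in windows])
        self.centroids[gesture] = feats.mean(axis=0)

    def predict(self, window: np.ndarray) -> str:
        feats = features(window)
        return min(self.centroids,
                   key=lambda g: np.linalg.norm(feats - self.centroids[g]))
```

A shipping product would use a far more sophisticated model trained on data from many users, but the signal-window-to-gesture pipeline is the core idea.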

Zuckerberg has shown off Meta’s prototype EMG wristband before in a video – though not the headset it works with – but what’s interesting about his podcast statement is that he goes on to say he feels Meta is close to having a “product in the next few years” that people can buy and use.

Understandably, he gives a rather vague release window and, unfortunately, there’s no mention of how much something like this would cost – though we’re ready for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility and accessibility in Meta’s AR and VR tech.

The next next-gen XR advancement?

Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via the Look and Ask feature, or to respond to a text message you’ve been sent without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you’d rather keep private.

The EMG wristband would let you type out these messages using subtle hand gestures, so you could maintain a higher level of privacy – though, as the podcast hosts note, this has issues of its own, not least of which is schools having a harder time stopping students from cheating in tests. Gone are the days of sneaking in notes; it’s all about secretly bringing AI into your exam.

Then there are the utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success of the Ray-Ban Meta Smart Glasses, at least, is that they’re sleek and lightweight – if you glance at them, they’re not noticeably different to a regular pair of Ray-Bans.

Adding cameras, sensors, and a chipset for managing hand gestures could compromise that slim design. That is, unless you put some of this functionality and processing power into a separate device like the wristband.


The Xreal Air 2 Pro’s displays (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they’ll need built-in displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll just have to wait and see what the next Meta smart glasses have in store for us.

Lastly, there’s accessibility. By their very nature, AR and VR are physical things – you have to move your arms around, make hand gestures, and push buttons – which can make them inaccessible for folks with disabilities that affect mobility and dexterity.

These kinds of brain-signal sensors start to address this issue. Rather than having to physically act, someone could think about the action and the virtual interface would interpret those thoughts accordingly.

Based on the demos shown so far, some movement is still required to use Meta’s neural interface, so it’s far from a perfect solution – but it’s a first step toward making this tech more accessible, and we're excited to see where it goes next.


Elon Musk’s Neuralink has performed its first human brain implant, and we’re a step closer to having phones inside our heads

Neuralink, Elon Musk's brain interface company, achieved a significant milestone this week, with Musk declaring on X (formerly Twitter), “The first human received an implant from Neuralink yesterday and is recovering well.”

Driven by concerns that AI might soon outpace (or outthink) humans, Musk first proposed the idea of a brain-to-computer interface, then called Neural Lace, back in 2016, envisioning an implant that could overcome limitations inherent in human-to-computer interactions. Musk claimed that an interface that could read brain signals and deliver them directly to digital systems would massively outpace our typical keyboard-and-mouse interactions.

Four years later, Musk demonstrated early clinical trials with an uncooperative pig, and in 2021 the company installed the device in a monkey that used the interface to control a game of Pong.

It was, in a sense, all fun and games – until this week, and Musk's claim of a human trial and the introduction of some new branding.

Neuralink's first product is now called 'Telepathy', which, according to another Musk tweet, “Enables control of your phone or computer, and through them almost any device, just by thinking.”

As expected, these brain implants are not, at least for now, intended for everyone. Back in 2020, Musk explained that the intention is “to solve important spine and brain problems with a seamlessly implanted device.” Musk noted this week that “Initial users will be those who have lost the use of their limbs. Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That is the goal.”

Neuralink devices like Telepathy are bio-safe implants comprising small disk-like devices (roughly the thickness of four coins stacked together) with ultra-fine wires trailing out of them that connect to various parts of the brain. The filaments read neural spikes, and a computer interface interprets them to understand the subject's intentions and translate them into action on, say, a phone or a desktop computer. In this first trial, Musk noted that “Initial results show promising neuron spike detection,” but he didn't elaborate on whether the patient was able to control anything with his mind.
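Neuralink hasn't published how its software identifies those spikes, but threshold crossing on a noise-scaled signal is the textbook first step in brain-computer interface research. Here's a minimal, hypothetical sketch of that general technique in Python – an illustration of the research-standard approach, not Neuralink's actual code:

```python
# Hypothetical sketch of threshold-based neural spike detection -
# the standard research technique, not Neuralink's implementation.
import numpy as np

def detect_spikes(voltage: np.ndarray, sample_rate: float,
                  threshold_sigmas: float = 4.5) -> np.ndarray:
    """Return sample indices where the electrode signal crosses a
    noise-scaled threshold - a common proxy for neuron firing events."""
    # Robust noise estimate via the median absolute deviation
    noise_sigma = np.median(np.abs(voltage)) / 0.6745
    threshold = threshold_sigmas * noise_sigma
    # Extracellular spikes typically appear as negative deflections
    crossings = np.where(voltage < -threshold)[0]
    if crossings.size == 0:
        return crossings
    # Collapse consecutive samples into single events (~1 ms apart)
    refractory = max(1, int(0.001 * sample_rate))
    keep = np.insert(np.diff(crossings) > refractory, 0, True)
    return crossings[keep]

# Example: one second of noise with two injected "spikes"
rng = np.random.default_rng(0)
signal = rng.normal(0, 1, 20_000)
signal[5_000] = signal[12_000] = -12.0
# Prints the two injected spike indices (plus any rare noise crossings)
print(detect_spikes(signal, sample_rate=20_000))
```

Real systems layer filtering, spike sorting, and decoding models on top, but detection along these lines is what a phrase like “promising neuron spike detection” most plausibly refers to.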

Musk didn't describe the surgical implantation process. Back in 2020, though, Neuralink introduced its Link surgery robot, which it promised would implant the Neuralink devices with minimal pain, blood, and, we're guessing, trauma. Considering that the implant is under the skin and skull, and sits on the brain, we're not sure how that's possible. It's also unclear if Neuralink used Link to install 'Telepathy.'

The new branding is not that far-fetched. While most people think of telepathy as people transmitting thoughts to one another, the definition is “the communication of thoughts or ideas by means other than the known senses.”

A phone in your head

Still, Musk has a habit of using hyperbole when describing Neuralink. During one early demonstration, he only half-jokingly said “It’s sort of like if your phone went in your brain.” He later added that, “In the future, you will be able to save and replay memories.”

With the first Neuralink Telepathy device successfully installed, however, Musk appears to be somewhat more circumspect. There was no press conference, no parading of the patient before reporters. All we have are these few tweets, and scant details about a brain implant that Musk hopes will help humans stay ahead of rapidly advancing AIs.

It's worth noting that, for all of Musk's bluster and sometimes objectionable rhetoric, he was more right than he knew about where AI would be by 2024. Back in 2016, there was no ChatGPT, Google Bard, or Microsoft Copilot. We didn't have AI built into Windows, Photoshop's Firefly, realistic AI images and videos, or convincing AI deepfakes. Concerns about AI taking jobs are now real, and the idea of humans falling behind artificial intelligence sounds less like sci-fi fantasy and more like our future.

Do those fears mean we're now more likely to sign up for our brain implants? Musk is betting on it.


Google Gemini is its most powerful AI brain so far – and it’ll change the way you use Google

Google has announced the new Gemini artificial intelligence (AI) model, an AI system that will power a host of the company’s products, from the Google Bard chatbot to its Pixel phones. The company calls Gemini “the most capable and general model we’ve ever built,” claiming it would make AI “more helpful for everyone.”

Gemini will come in three 'sizes': Ultra, Pro and Nano, with each one designed for different uses. All of them will be multimodal, meaning they’ll be able to handle a wide range of inputs; Google says Gemini can take text, code, audio, images and video as prompts.
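To make 'multimodal' concrete, here's a brief, hedged sketch using Google's google-generativeai Python SDK. The model names ('gemini-pro', 'gemini-pro-vision') reflect the launch-era API, and the API key and image path are placeholders:

```python
# Sketch: text-only and text+image prompting via Google's SDK.
# pip install google-generativeai pillow
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# Text-only prompt against the Pro-sized model
text_model = genai.GenerativeModel("gemini-pro")
print(text_model.generate_content("Summarize how transformers work.").text)

# Mixed text + image prompt against the vision-capable variant
vision_model = genai.GenerativeModel("gemini-pro-vision")
response = vision_model.generate_content(
    ["What is shown in this photo?", Image.open("photo.jpg")]  # placeholder image
)
print(response.text)
```

Gemini Nano, by contrast, is designed to run on-device rather than through a cloud SDK like this.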

While Gemini Ultra is designed for extremely demanding use cases such as in data centers, Gemini Nano will fit in your smartphone, raising the prospect of the best Android smartphones gaining a significant AI advantage.

With all of this new power, Google insists that it conducted “rigorous testing” to identify and prevent harmful results arising from people’s use of Gemini. That was challenging, the company said, because the multimodal nature of Gemini means two seemingly innocuous inputs (such as text and an image) can be combined to create something offensive or dangerous.

Coming to all your services and devices

Google has been under pressure to catch up with OpenAI’s ChatGPT and its advanced AI capabilities. Just a few days ago, in fact, news was circulating that Google had delayed its Gemini announcement until next year due to its apparent poor performance in a variety of languages. 

Now, it turns out that news was either wrong or Google is pressing ahead despite Gemini’s rumored imperfections. On this point, it’s notable that Gemini will only work in English at first.

What does Gemini mean for you? Well, if you use a Pixel 8 Pro phone, Google says it can now run Gemini Nano, bringing all of its AI capabilities to your pocket. According to a Google blog post, Gemini is found in two new Pixel 8 Pro features: Smart Reply in Gboard, which suggests message replies to you, and Summarize in Recorder, which can sum up your recorded conversations and presentations.

The Google Bard chatbot has also been updated to run Gemini, which the company says is “the biggest upgrade to Bard since it launched.” As well as that, Google says that “Gemini will be available in more of our products and services like Search, Ads, Chrome and Duet AI” in the coming months.

As part of the announcement, Google revealed a slate of Gemini demonstrations. These show the AI guessing what a user was drawing, playing music to match a drawing, and more.

Gemini vs ChatGPT

Google Gemini revealed at Google I/O 2023

(Image credit: Google)

It’s no secret that OpenAI’s ChatGPT has been the most dominant AI tool for months now, and Google wants to end that with Gemini. The company has made some pretty bold claims about its abilities, too.

For instance, Google says that Gemini Ultra’s performance exceeds current state-of-the-art results in “30 of the 32 widely-used academic benchmarks” used in large language model (LLM) research and development. In other words, Google thinks it eclipses GPT-4 in nearly every way.

Compared to the GPT-4 LLM that powers ChatGPT, Gemini came out on top in seven out of eight text-based benchmarks, Google claims. As for multimodal tests, Gemini won in all 10 benchmarks, as per Google’s comparison.

Does this mean there’s a new AI champion? That remains to be seen, and we’ll have to wait for more real-world testing from independent users. Still, what is clear is that Google is taking the AI fight very seriously. The ball is very much in OpenAI’s (and Microsoft's) court now.


Microsoft’s Copilot chatbot will get 6 big upgrades soon – including ChatGPT’s new brain

Microsoft has announced that Copilot – the AI chatbot formerly known as Bing Chat – is soon to get half a dozen impressive upgrades.

This batch of improvements should make the Copilot chatbot considerably more powerful in numerous respects (both inside and outside of Windows 11).

So, first off, let’s break down the upgrades themselves (listed in a Microsoft blog post) before getting into a discussion of what difference they’re likely to make.

Firstly, and most importantly, Copilot is getting a new brain – or, we should say, an upgraded brain – in the form of GPT-4 Turbo. That’s the latest GPT model from OpenAI, bringing improvements in overall capability and accuracy.

Another beefy upgrade is an updated engine for Dall-E 3, the chatbot’s image-creation feature, which produces higher-quality results that align more closely with what the user requested. This one is actually in Copilot right now.

Thirdly, Microsoft promises that Copilot will do better with image searches, returning better results when you sling a picture at the AI in order to learn more about it.

Another addition is Deep Search, which uses GPT-4 to “deliver optimized search results for complex topics,” as Microsoft puts it. What this means is that if you have a query for Copilot, it can expand it into a more in-depth search request to produce better results. Furthermore, if the terms of your query are vague and could relate to multiple topics, Deep Search will follow up on what those topics might be and offer suggestions to let you refine the query.

The fifth upgrade Microsoft has planned is Code Interpreter, which, as the name suggests, will help perform complex tasks such as coding and data analysis. That’s not something the average user will benefit from, but there are those who will, of course.

Finally, Copilot in Microsoft’s Edge browser has a rewrite feature (for inline text composition) coming soon. This allows you to select text on a website and get the AI to rewrite it for you.


Analysis: Something for Google to worry about

Dall-E 3

(Image credit: Future)

There are some really useful changes inbound here. Getting GPT-4 Turbo is an upgrade (from GPT-4) that a lot of Copilot users have been clamoring for, and Microsoft tells us it’s now being tested with select users. (We previously heard it still had a few kinks to iron out, so presumably that’s what’s currently going on.)

GPT-4 Turbo will be rolling out in the “coming weeks,” so it should arrive soon enough, with any luck, and you should notice the difference in the form of more accurate responses to your queries.
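Microsoft hasn’t published how Copilot calls the model, but for a concrete sense of what GPT-4 Turbo is, here’s a hedged sketch of querying it directly through OpenAI’s own Python SDK – the model identifier below is the launch-era preview name, and none of this reflects Copilot’s internal plumbing:

```python
# Sketch: querying GPT-4 Turbo via OpenAI's v1 Python SDK.
# pip install openai ; reads OPENAI_API_KEY from the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo's launch-era identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What does GPT-4 Turbo improve over GPT-4?"},
    ],
)
print(response.choices[0].message.content)
```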

It’s great to see Dall-E 3 getting an upgrade, too, as it’s already an excellent image creation engine, frankly. (Recall the rush to use the feature when it was first launched, due to the impressive results being shared online).

The search query improvements – both the Deep Search capabilities and refined image searching – should combine with the above upgrades to make Copilot a whole lot better across multiple fronts. (Although we do worry somewhat about the potential for abuse of that inline rewrite feature in Edge.)

All of this forward momentum for Copilot comes just as we heard news of Google delaying its advances on the AI front, pushing some major launches back to the start of 2024. Microsoft isn’t hanging around when it comes to Copilot, that’s for sure, and Google has to balance keeping up without pushing so hard that mistakes are made.
