Meta AR glasses: everything we know about the AI-powered AR smart glasses

After a handful of rumors and speculation suggested Meta was working on a pair of AR glasses, it unceremoniously confirmed that Meta AR glasses are on the way – doing so via a short section at the end of a blog post celebrating the 10th anniversary of Reality Labs (the division behind its AR/VR tech).

While not much is known about them, the glasses were described as a product merging Meta’s XR hardware with its developing Meta AI software to “deliver the best of both worlds” in a sleek wearable package.

We’ve collected all the leaks, rumors, and some of our informed speculation in this one place so you can get up to speed on everything you need to know about the teased Meta AR glasses. Let’s get into it.

Meta AR glasses: Price

We’ll keep this section brief as right now it’s hard to predict how much a pair of Meta AR glasses might cost because we know so little about them – and no leakers have given a ballpark estimate either.

Current smart glasses like the Ray-Ban Meta Smart Glasses or the Xreal Air 2 AR smart glasses will set you back between $300 and $500 / £300 and £500 / AU$450 and AU$800; Meta’s teased specs, however, sound more advanced than what’s currently available.

Lance Ulanoff showing off Google Glass

Meta’s glasses could cost as much as Google Glass (Image credit: Future)

As such, the Meta AR glasses might cost nearer $1,500 (around £1,200 / AU$2,300) – which is what the Google Glass smart glasses launched at.

A higher price seems more likely given the AR glasses’ novelty, and the fact Meta would need to create small yet powerful hardware to cram into them – a combo that typically leads to higher prices.

We’ll have to wait and see what gets leaked and officially revealed in the future.

Meta AR glasses: Release date

Unlike price, several leaks have pointed to when we might get our hands – or I suppose eyeballs – on Meta’s AR glasses. Unfortunately, we might be waiting until 2027.

That’s according to a leaked Meta internal roadmap shared by The Verge back in March 2023. The document explained that a precursor pair of specs with a display will apparently arrive in 2025, with ‘proper’ AR smart glasses due in 2027.

RayBan Meta Smart Glasses close up with the camera flashing

(Image credit: Meta)

In February 2024 Business Insider cited unnamed sources who said a pair of true AR glasses could be shown off at this year’s Meta Connect conference. However, that doesn’t mean they’ll launch sooner than 2027. While Connect does highlight soon-to-release Meta tech, the company takes the opportunity to show off stuff coming further down the pipeline too. So, its demo of Project Orion (as those who claim to be in the know call it) could be one of those ‘you’ll get this when it’s ready’ kinds of teasers.

Obviously, leaks should be taken with a pinch of salt. Meta could have brought the release of its specs forward, or pushed it back, depending on a multitude of technological factors – we won’t know until Meta officially announces more details. That said, the fact it has teased the specs suggests their release is at least a matter of when, not if.

Meta AR glasses: Specs and features

We haven't heard anything about the hardware you’ll find in Meta’s AR glasses, but we have a few ideas of what we’ll probably see from them based on Meta’s existing tech and partnerships.

Meta and LG recently confirmed that they’ll be partnering to bring OLED panels to Meta’s headsets, and we expect OLED screens will make their way into its AR glasses too. OLED displays already appear in other AR smart glasses, so it would make sense if Meta followed suit.

Additionally, we anticipate that Meta’s AR glasses will use a Qualcomm Snapdragon chipset just like Meta’s Ray-Ban smart glasses. Currently, that’s the AR1 Gen 1, though considering Meta’s AR specs aren’t due until 2027 it seems more likely they’d be powered by a next-gen chipset – either an AR2 Gen 1 or an AR1 Gen 2.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The AR glasses could let you bust ghosts wherever you go (Image credit: Meta)

As for features, Meta’s already teased the two standouts: AR and AI abilities.

What this means in actual terms is yet to be seen but imagine virtual activities like being able to set up an AR Beat Saber jam wherever you go, an interactive HUD when you’re navigating from one place to another, or interactive elements that you and other users can see and manipulate together – either for work or play.

AI-wise, Meta is giving us a sneak peek of what's coming via its current smart glasses. That is, you can speak to Meta AI to ask it a variety of questions and for advice, just as you can with other generative AI tools, but in a more conversational way because you use your voice.

It also has a unique ability, Look and Ask, which is like a combination of ChatGPT and Google Lens. This lets the specs snap a picture of what’s in front of you to inform your question, so you can ask it to translate a sign you can see, suggest a recipe using the ingredients in your fridge, or identify a plant so you can find out how best to care for it.

The AI features are currently in beta but are set to launch properly soon. And while they seem a little imperfect right now, we’ll likely only see them get better in the coming years – meaning we could see something very impressive by 2027 when the AR specs are expected to arrive.

Meta AR glasses: What we want to see

A slick Ray-Ban-like design 

RayBan Meta Smart Glasses

The design of the Ray-Ban Meta Smart Glasses is great (Image credit: Meta)

While Meta’s smart specs aren't amazing in every way – more on that down below – they are practically perfect in the design department. The classic Ray-Ban shape is sleek, they’re lightweight, super comfy to wear all day, and the charging case is not only practical, it's gorgeous.

While it’s likely Ray-Ban and Meta will continue their partnership to develop future smart glasses – and by extension the teased AR glasses – there’s no guarantee. But if Meta’s reading this, we really hope that you keep working with Ray-Ban so that your future glasses have the same high-quality look and feel that we’ve come to adore.

If the partnership does end, we'd like Meta to at least take cues from what Ray-Ban has taught it to keep the design game on point.

Swappable lenses 

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

We want to change our lenses Meta! (Image credit: Meta)

While we will rave about Meta’s smart glasses’ design, we’ll admit there’s one flaw that we hope future models (like the AR glasses) improve on: they need easily swappable lenses.

While a handsome pair of shades will be faultless for your summer vacations, they won’t serve you well in dark and dreary winters. If we could easily change our Meta glasses from sunglasses to clear lenses as needed then we’d wear them a lot more frequently – as it stands, they’re left gathering dust most months because it just isn’t the right weather.

As the glasses get smarter, more useful, and pricier (as we expect will be the case with the AR glasses) they need to be a gadget we can wear all year round, not just when the sun's out.

Speakers you can (quietly) rave to

JBL Soundgear Sense

These open ear headphones are amazing, Meta take notes (Image credit: Future)

Hardware-wise, the main upgrade we want to see in Meta’s AR glasses is better speakers. Currently, the speakers housed in each arm of the Ray-Ban Meta Smart Glasses are pretty darn disappointing – they can leak a fair amount of noise, the bass is practically nonexistent, and the overall sonic performance is put to shame by even basic over-ear headphones.

We know it can be a struggle to get the balance right with open-ear designs. But we’ve been spoiled by open-ear options like the JBL Soundgear Sense – which have an astounding ability to deliver great sound while letting you hear the real world clearly (we often forget we’re wearing them) – so we’ve come to expect a lot and are disappointed when gadgets don’t deliver.

The camera could also get some improvements, but we expect the AR glasses won’t be as content creation-focused as Meta’s existing smart glasses – so we’re less concerned about this aspect getting an upgrade compared to their audio capabilities.


Confused about Google’s Find My Device? Here are 7 things you need to know

It took a while, but Google has released the long-awaited upgrade to its Find My Device network. This may come as a surprise. The update was originally announced back in May 2023, but was soon delayed with no apparent launch date. Then, out of nowhere, Google decided to release the software on April 8 without major fanfare. As a result, you may feel lost, but we can help you find your way.

Here's a list of the seven most important things you need to know about the Find My Device update. We cover what’s new in the update as well as the devices that are compatible with the network, because not everything works and there’s still work to be done.

1. It’s a big upgrade for Google’s old Find My Device network 

Google's Find My Device feature

(Image credit: Google)

The previous network was very limited in what it could do. It was only able to detect the odd Android smartphone or Wear OS smartwatch. However, that limitation is now gone, as Find My Device can sniff out other devices – most notably Bluetooth location trackers.

Gadgets also don’t need to be connected to the internet or have location services turned on, since the software can detect them so long as they’re within Bluetooth range. However, Find My Device won’t tell you exactly where the devices are. You’ll instead be given an approximate location on your on-screen map. You'll ultimately have to do the legwork yourself.

Find My Device functions similarly to Apple’s Find My network, so “location data is end-to-end encrypted,” meaning no one, not even Google, can take a peek.

2. Google was waiting for Apple to add support to iPhones 

iPhone 15 from the front

(Image credit: Future)

The update was supposed to launch in July 2023, but it had to be delayed because of Apple. Google was worried about unwanted location trackers, and wanted Apple to introduce “similar protections for iOS.” Unfortunately, the iPhone manufacturer decided to drag its feet when it came to adding unknown tracker alerts to its own iPhone devices.

The wait may soon be over as the iOS 17.5 beta contains lines of code suggesting that the iPhone will soon get these anti-stalking measures. Soon, iOS devices might encourage users to disable unwanted Bluetooth trackers uncertified for Apple’s Find My network. It’s unknown when this feature will roll out as the features in the Beta don’t actually do anything when enabled. 

Given that this anti-stalking code is already present in the iOS 17.5 beta, Apple's release may be imminent. Apple may have given Google the green light to roll out the Find My Device upgrade ahead of time to prepare for its own software launch.

3. It will roll out globally

Android

(Image credit: Future)

Google states the new Find My Device will roll out to all Android devices around the world, starting in the US and Canada. A company representative told us other countries will receive the same update within the coming months, although they couldn’t give us an exact date.

Android devices do need to meet a couple of requirements to support the network. Luckily, they’re not super strict. All you need is a smartphone running Android 9 or later with Bluetooth capabilities.

If you own either a Pixel 8 or Pixel 8 Pro, you’ll be given an exclusive feature: the ability to find a phone through the network even if it's powered down. Google reps said these models have special hardware that allows them to keep power flowing to the Bluetooth chip when they're off. Google is working with other manufacturers to bring this feature to other premium Android devices.

4. You’ll receive unwanted tracker alerts

Apple AirTags

(Image credit: Apple)

Apple AirTags are meant to be attached to frequently lost items like house keys or luggage so you can find them easily. Unfortunately, several bad eggs have utilized them as an inexpensive way to stalk targets. Google eventually updated Android to give users a way to detect unwanted AirTags.

For nearly a year, the OS could only seek out AirTags, but now with the upgrade, Android phones can locate Bluetooth trackers from third-party brands such as Tile, Chipolo, and Pebblebee. It is, by far, the single most important feature in the update, as it'll help ensure your privacy and safety.

You won’t be able to find out who placed a tracker on you. According to a post on the company’s Security blog, only the owner can view that information. 

5. Chipolo and Pebblebee are launching new trackers for it soon

Chipolo's new trackers

(Image credit: Chipolo)

Speaking of Chipolo and Pebblebee, the two brands have announced new products that will take full advantage of the revamped network. Google reps confirmed to us they’ll be “compatible with unknown tracker alerts across Android and iOS”.

On May 27, we’ll see the introduction of the Chipolo ONE Point item tracker as well as the Chipolo CARD Point wallet finder. You’ll be able to find the location of whatever item they’re attached to via the Find My Device app. The pair will also sport speakers that ring out a loud noise to let you know where they are. What’s more, Chipolo’s products have a long battery life: Chipolo says the CARD finder lasts as long as two years on a single charge.

Pebblebee is achieving something similar with its Tag, Card, and Clip trackers. They’re small, lightweight, and attachable to larger items. Plus, the trio all have a loud buzzer for easy locating. These three are available for pre-order right now, although no shipping date was given.

6. It’ll work nicely with your Nest products

Google Nest Wifi

(Image credit: Google )

For smart home users, you’ll be able to connect the Find My Device app to a Google Nest device to find lost items. An on-screen animation will show a sequence of images displaying all of the Nest hardware in your home as the network attempts to find said missing item. Be aware the tech won’t give you an exact location.

A short video in the official announcement shows there'll be a message stating where the item was last seen, at what time, and whether there was another smart home device next to it. Next to the text will be a refresh option in case the lost item doesn’t show up.

Below the message will be a set of tools to help you locate it. You can play a sound from the tracker’s speakers, share the device, or mark it as lost.

7. Headphones are invited to the tracking party too

Someone wearing the Sony WH-1000XM5 headphones against a green backdrop

(Image credit: Gerald Lynch/TechRadar/Future)

Believe it or not, some insidious individuals have used earbuds and headphones to stalk people. To help combat this, Google has equipped Find My Device with a way to detect a select number of earbuds and headphones. The list of supported hardware is not large, as it’ll only be able to locate three specific models: the JBL Tour Pro 2, the JBL Tour One M2, and the high-end Sony WH-1000XM5. Apple AirPods are not on the list, although support for these could come at a later time.

That's quite an extensive list, as you can see, but it's all important information to know – everything works together to keep you safe.

Be sure to check out TechRadar's list of the best Android phones for 2024.


Meta Quest Pro 2: everything we know about the Apple Vision Pro competitor

Meta’s Quest 3 may be less than a year old, but Meta appears to be working on a few follow-ups. Leaks and rumors point to the existence of a Meta Quest 3 Lite – a cheaper version of the Meta Quest 3 – and a Meta Quest Pro 2 – a follow-up to the high-end Meta Quest Pro.

The original Meta Quest Pro doesn’t seem to have been all that popular – evidenced by the fact its price was permanently cut by a third less than six months after its launch – but the Apple Vision Pro seems to have fueled a renaissance of high-end standalone VR hardware. This means we’re getting a Samsung XR headset (developed in partnership with Google), and most likely a Meta Quest Pro 2 of some kind.

While one leak suggested the Meta Quest Pro 2 had been delayed – after Meta cancelled a project that the leak suggested was set to be the next Quest Pro – there’s more than a little evidence that the device is on the way. Here’s all of the evidence, as well as everything you need to know about the Meta Quest Pro 2 – including some of our insight, and the features we’d most like to see it get.

Meta Quest Pro 2: Price

Because the Meta Quest Pro 2 hasn’t been announced, we don’t know exactly how much it’ll cost, but we expect it’ll be at least as pricey as the original, which launched at $1,499.99 / £1,499.99 / AU$2,449.99.

The Meta Quest Pro being worn by a person in an active stance

(Image credit: Meta)

The Meta Quest Pro was permanently discounted to $999.99 / £999.99 / AU$1,729.99 five months after it launched, but we expect this was Meta attempting to give the Quest Pro a much-needed sales boost rather than an indication of the headset’s actual cost. So we expect the Quest Pro 2 will cost a fair bit more than this.

What’s more, given that the device is expected to be more of an Apple Vision Pro competitor – a headset which costs $3,500 (around £2,800 / AU$5,350) – with powerful specs, LG-made OLED panels, and potentially next-gen mixed reality capabilities, there’s a good chance it could cost more than its predecessor.

As such we’re expecting it to come in at nearer $2,000 / £2,000 / AU$3,000. Over time, and as more leaks about the hardware come out, we should start to get a better idea of its price – though as always we won’t know for certain how much it’ll cost until Meta says something officially.

Meta Quest Pro 2: Release date

The Meta Quest 3 on a notebook surrounded by pens and school supplies on a desk

The Meta Quest 3 (Image credit: Meta)

Meta hasn’t announced the Quest Pro 2 yet – or even teased it. Given its usual release schedule, this means the earliest we’re likely to see a Pro model is October 2025; that’s because Meta would tease the device at this year’s Meta Connect in September/October 2024, and then launch it at the following year’s event, as it did with the original Quest Pro and the Quest 3.

But there are a few reasons we could see it launch sooner or later. On the later release date side of things we have the rumored Meta Quest 3 Lite – a cheaper version of the Meta Quest 3. Meta may want to push this affordable model out the gate sooner rather than later, meaning that it might need to take a release slot that could have been used by the Quest Pro 2.

Alternatively, Meta may want to push a high-end model out ASAP so as to not let the Apple Vision Pro and others like the Samsung XR headset corner the high-end VR market. If this is the case it could forgo its usual tease then release strategy and just release the headset later this year – or tease it at Connect 2024 then launch it in early 2025 rather than a year later in late 2025 as it usually would.

This speculation all assumes a Meta Quest Pro 2 is even on the way – though Meta has strongly suggested that another Pro model would come in the future; we’ll just have to wait and see what’s up its sleeve.

Meta Quest Pro 2: Specs

Based on LG and Meta’s announcement of their official partnership to bring OLED displays to Meta VR headsets in the future, it’s likely that the Meta Quest Pro 2 will feature OLED screens. While these kinds of displays are typically pricey, the Quest Pro 2 is expected to be a high-end model (with a high price tag), and boasting OLED panels would put it on par with other high-end XR products like the Apple Vision Pro.

Key Snapdragon XR2 Plus Gen 2 specs, including support for 4.3K displays, 8x better AI performance, and 2.5x better GPU performance

(Image credit: Qualcomm)

It also seems likely the Meta Quest Pro 2 will boast a Snapdragon XR2 Plus Gen 2 chipset – the successor to the Gen 1 used by the Quest Pro. If it launches further in the future than we expect, it might instead boast a currently unannounced Gen 3 model.

While rumors haven’t teased any other specs, we also assume the device would feature full-color mixed reality like Meta’s Quest 3 and Quest Pro – though ideally the passthrough would be higher quality than either of these devices (or at least, better than the Quest Pro’s rather poor mixed reality).

Beyond this, we predict the device would have specs at least as good as its predecessor. By that we mean we expect the base Quest Pro 2 would come with 12GB of RAM, 256GB of storage and a two-hour minimum battery life.

Meta Quest Pro 2: What we want to see

We’ve already highlighted in depth what we want to see from the Meta Quest Pro 2 – namely it should ditch eye-tracking and replace it with four different features. But we’ll recap some of those points here, and make a few new ones of things we want to see from the Quest Pro 2.

Vastly better mixed-reality passthrough, more entertainment apps, and 4K OLED displays would go a long way to making the Meta Quest Pro 2 feel a lot more like a Vision Pro competitor – so we hope to see them all included.

Eye-tracking could also help, but Meta really needs to prove it’s worthwhile. So far every instance of the tech feels like an expensive tech demo for a feature that’s neat, but not all that useful.

The Meta Quest Pro being worn by Hamish Hector, his cheeks are puffed up

What we want from the next Quest Pro (Image credit: Meta)

Ignoring specs and design for a second, our most important hope is that the Quest Pro 2 isn’t as prohibitively expensive as the Apple Vision Pro. While the Vision Pro is great, $3,500 is too much even for a high-end VR headset when you consider the realities of how and how often the device will be used. Ideally the Quest Pro 2 would be at most $2,000 / £2,000 / AU$3,000, though until we know more about its specs we won’t know how realistic our request is.

Lastly, we hope the device is light, perhaps with a removable battery pack like the one seen in the HTC Vive XR Elite. This would allow someone who wants to work at their desk or sit back and watch a film in VR to wear a much lighter device for an extended period of time (provided they’re near a power source). Alternatively, they could plug the battery in and enjoy a typical standalone VR experience – to us this would be a win-win.


Meta Quest 3 Lite: everything we know about the rumored cheap VR headset

Based on the leaks and rumors it seems increasingly likely that Meta is working on a cheaper version of the Meta Quest 3 – expected to be called the Meta Quest 3 Lite or Meta Quest 3s. 

It’s not yet been confirmed, but the gadget is expected to be a more affordable version of the Quest 3 – at a price closer to the Quest 2 – that would see Meta fully phase out its last-gen VR hardware. The trade-off would be that the device wouldn’t have all the capabilities of the Quest 3 – likely sporting lower-resolution displays, less RAM, a worse chipset, or dropping mixed reality support (though that last point seems unlikely).

While we’re not convinced the gadget will look exactly like what’s been rumored so far, as the saying goes: where there's smoke there’s fire. The fact that several independent leaks have come out suggests Meta is definitely working on something.

We’ve collected the latest news and rumors here so this page can serve as your one-stop shop for all things Meta Quest 3 Lite. As we learn more about the device we’ll be sure to update the page and keep you in the loop with all the latest information.

Meta Quest 3 Lite: Latest news

We’ve seen not one, but two distinct Meta Quest 3 Lite leaks – one render called the Meta Quest 3 Lite and one with more details that the leaker called the Quest 3s.

The Oculus Quest 2 was also at a record low price ($200 / £200) as part of this year's Amazon Spring Sale, following a permanent price cut to $249.99 / £249.99 / AU$439.99 earlier this year. This could be a sign Meta and retailers are trying to shift stock ahead of the last-gen device being phased out before a Quest 3 Lite release.

Oculus Quest 2 on a white background

Is the Quest 3 Lite the true Quest 2 replacement? (Image credit: Shutterstock / Boumen Japet)

Meta Quest 3 Lite: Price

As the Meta Quest 3 Lite isn’t yet official – meaning Meta itself hasn’t confirmed (or denied) its existence – we can’t say for certain how much it’ll cost or when it will be released.

But based on rumors and previous Meta hardware releases, we can make some reasoned predictions on what the gadget might cost and when we could see it in action.

Price-wise, we can reasonably expect it’ll cost around the same as Meta’s last-gen headset, given the Lite is billed as a super-affordable model meant to fully replace the Oculus Quest 2. It’ll certainly cost less than the Meta Quest 3.

This would likely see it released at around $299 / £299 / AU$479, which is where the Quest 2 started life. Honestly, we’d be more than a little disappointed if it was more expensive.

A man using his Zenni customized Meta Quest 3 headset

The Meta Quest 3 could soon have a sibling (Image credit: Zenni)

Meta Quest 3 Lite: Release date

As for the Quest 3 Lite’s release date, Meta usually likes to release new hardware in October. However, it might decide to mix things up with this budget-friendly gadget to avoid confusing it with its mainline Quest and Quest Pro launches.

We predict the Quest 3 Lite will be announced and released as part of this year’s Meta Quest Gaming Showcase, which should be around June based on previous years. 

If Meta sticks to its usual hardware release schedule, though, then a launch after this year’s Meta Connect – which we expect will land in September or October – could be on the cards.

Of course, this assumes the Meta Quest 3 Lite even launches at all.

The Meta Quest 3 in action

The Meta Quest 3 Lite will likely look a little different to the Quest 3 (Image credit: Meta)

Meta Quest 3 Lite: Specs and design

So far we haven’t heard many specs for the Meta Quest 3 Lite. The main leaks so far have been renders showing off its possible design.


These leaks suggest it’ll be bulkier than the Quest 3, likely because the Lite would adopt the fresnel lens system used by the Quest 2. This makes some sense as fresnel lenses are cheaper, partly because the alternative pancake lenses require brighter displays. However, considering pancake lenses lead not only to a slimmer headset design but also better image quality (and we’ve seen cheap headsets like the Pico 4 use pancake lenses) we’d be surprised if Meta didn't use them in the Lite.

One of the leaks went into more detailed specs, suggesting it’ll have 128GB or 256GB of storage (instead of the 128GB or 512GB in the Quest 3) and 1,832 x 1,920 pixel displays (one per eye). Something seems off about the leak, though, in terms of the assets shared and the included info that could help identify the leaker (which seems like a bad idea for anyone trying to avoid the wrath of Meta’s well-funded legal team).  


As such, color us skeptical when it comes to the details highlighted in the post.

Meta Quest 3 Lite: Software

Assuming the Meta Quest 3 Lite has the same or similar mixed-reality capabilities as the Meta Quest 3, we expect it’ll have access to all of the same software – which is to say, everything available on the Quest platform’s Store (and many other games and apps available through sideloading via third-party digital storefronts).

If it has significantly worse specs – such as the Quest 2’s Snapdragon XR2 Gen 1 chipset – there may be some software that launches in the future that would be exclusive to the full Quest 3. But we expect the Quest 3 Lite would use a Snapdragon XR2 Gen 2 so, hopefully, this won’t be an issue.

We’ll have to wait and see what Meta announces.

Girl wearing Meta Quest 3 headset interacting with a jungle playset

The Meta Quest 3 Lite needs to have mixed reality (Image credit: Meta)

Meta Quest 3 Lite: What we want to see

As for what we want to see from the Quest 3 Lite VR headset – acknowledging that its lower price will necessitate lower specs than the Meta Quest 3 proper – our ideal setup would boast the same Snapdragon XR2 Gen 2 chipset and 8GB of RAM as the Quest 3, though 6GB of RAM like the Quest 2 is, admittedly, a lot more likely. 

Storage options would start at 64GB – as frankly, you don’t need a lot of storage space for VR apps, especially if you’re willing to download and delete them as necessary – and the displays would be a lower resolution than the Quest 3. A leak suggested the 1,832 x 1,920 pixels per eye option, and considering this is what’s used by the Quest 2 it does make some sense.

Pancake lenses seem like an easy win from a design and image-quality perspective (especially if Meta opts for poorer displays), and mixed-reality passthrough that’s at least as high-quality as the Quest 3 is also a must.

Beyond this, one rogue cost-cutting measure could see Meta scrap or change its Quest 3 controllers. However, given how much developers have emphasized to us the importance of VR handsets having a standard design, and the fact that many Quest titles don’t support hand-tracking, this might be a step too far.


WhatsApp’s new security label will let you know if future third-party chats are safe

WhatsApp is currently testing a new in-app label letting you know whether or not a chat room has end-to-end encryption (E2EE).

WABetaInfo discovered the caption in the latest Android beta. According to the publication, it’ll appear underneath the contact or group name, but only if the conversation is encrypted by the company’s “Signal Protocol” (not to be confused with the Signal messaging app; the two are different). The line is meant to serve as a “visual confirmation” informing everyone that outside forces cannot read what they’re talking about or listen in on phone calls. WABetaInfo adds that the text will disappear after a few seconds, allowing the Last Seen indicator to take its place. At the moment, it’s unknown if the two lines will alternate or if Last Seen will permanently take the E2EE label’s place.

This may not seem like a big deal since it’s just four words with a lock icon. However, this small change is important because it indicates Meta is willing to embrace third-party interoperability.


Third-party compatibility

On March 6, the tech giant published a report on its Engineering at Meta blog detailing how interoperability will work in Europe. The EU passed the Digital Markets Act in 2022 which, among other things, implemented new rules forcing major messaging platforms to let users communicate with third-party services. 

Meta’s post gets into the weeds explaining how interoperability will work. The main takeaway is that the company wants partners to use its Signal Protocol. The standard serves as the basis for E2EE on WhatsApp and Messenger, so Meta wants everyone to be on a level playing field.

Other services don’t have to use Signal. They can use their own compatible protocols, although they must demonstrate they offer “the same security guarantees”.

The wording here is pretty cut and dried: if a service doesn’t have the same level of protection, then WhatsApp won’t communicate with it. However, the beta suggests Meta is willing to be flexible. It may not completely shut out non-Signal-compliant platforms. At the very least, the company will inform its users that certain chat rooms may not be as well protected as the ones with E2EE enabled.

Interested Android owners can install the update from the Google Play Beta Program, although there is a chance you may not receive the feature. WABetaInfo states it’s only available to a handful of testers. There’s no word on whether WhatsApp on iOS will see the same patch.

While we have you, be sure to join TechRadar's official WhatsApp channel to get all the latest reviews on your phone.


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily-digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website on desktop.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, that means the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.
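For anyone who wants to check that sum, it's simply the Gemini Advanced price minus the cost of a standalone 2TB Google One plan:

$19.99 − $9.99 = $10.00 per month
£18.99 − £7.99 = £11.00 per month
AU$32.99 − AU$12.49 = AU$20.50 per month (which rounds to the AU$20 quoted above)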

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


Don’t know what’s good about Copilot Pro? Windows 11 users might soon find out, as Microsoft is testing Copilot ads for the OS

Windows 11 might be getting ads for Copilot Pro – or at least, this possibility is being explored in testing right now, it seems.

Copilot Pro, for those who missed it, was recently revealed as Microsoft’s powered-up version of the AI assistant that you have to pay for (via a monthly subscription). And if you haven’t heard about it, well, you might do soon via the Settings panel in Windows 11.

PhantomOfEarth on X (formerly Twitter) spotted the new move from Microsoft, with the introduction of a card for Copilot Pro on the Home page of the Settings app. It provides a brief explanation of what the service is alongside links to find out more (or to get a subscription there and then).


Note that the leaker had to dig around to uncover the Copilot Pro advert, and it was only displayed after messing about with a configuration tool (in Dev and Beta builds). However, two other Windows 11 testers in the Beta channel have responded to say that they have this Copilot Pro card present without doing anything.

In other words, taking those reports at face value, it seems this Copilot Pro ad is on some kind of limited rollout to some testers. At any rate, it’s certainly present in the background of Windows 11 (Beta and Dev) and can be enabled.


Analysis: Adding more ads

The theory, then, is that this will be appearing more broadly to testers, before following with a rollout to everyone using Windows 11. Of course, ideas in testing can be abandoned, particularly if they get criticized a lot, so we’ll just have to watch this space (or rather, the space on the Home page of Settings).

Does it seem likely Microsoft will try to push ahead with a Copilot Pro advert? Yes, it does, frankly. Microsoft isn’t shy about promoting its own services within its products, that’s for sure. Furthermore, AI is set to become a huge part of the Windows 11 experience, and other Microsoft products for that matter, so monetizing it is going to be a priority in all likelihood.

So, a nudge to raise the profile of the paid version of Copilot seems likely, if not inevitable. Better that it’s tucked away in Settings, we guess, than somewhere more in-your-face like the Start menu.

If you’re wondering what benefits Copilot Pro confers, they include faster performance and responses, along with more customization and options – but this shouldn’t take anything away from the free version of Copilot (or it doesn’t yet, anyway). What it does mean is that the very latest upgrades will likely be reserved for the Pro AI, as we’ve seen initially with GPT-4 Turbo coming to Copilot Pro and not the basic free Copilot.

Via Neowin


Samsung XR/VR headset – everything we know so far and what we want to see

We know for certain that a new Samsung XR/VR headset is in the works, with the device being made in partnership with Google. But much of the XR product’s details (XR, or extended reality, is a catchall for virtual, augmented, and mixed reality) are still shrouded in mystery. 

This so-called Apple Vision Pro rival (an XR headset from Apple) will likely have impressive specs – Qualcomm has confirmed its new Snapdragon XR2 Plus Gen 2 chip will be in the headset, and Samsung Display-made screens will probably be featured. It'll also likely have an equally premium price tag. Unfortunately, until Samsung says anything officially, we won’t know exactly how much it will cost, or when it will be released.

But using the few tidbits of official info, as well as our industry knowledge and the rumors out there, we can make some educated guesses that can clue you into the Samsung XR/VR headset’s potential price, release date, and specs – and we’ve got them down below. We’ve also highlighted a few of the features we’d like to see when it’s eventually unveiled to the public.

Samsung XR/VR headset: Price

The Samsung Gear VR headset on a red desk

The Samsung Gear VR, you needed a phone to operate it (Image credit: samsung)

We won’t know how much Samsung and Google’s new VR headset will cost until the device is officially announced, but most rumors point to it boasting premium specs – so expect a premium price.

Some early reports suggested Samsung was looking at something in the $1,000 / £1,000 / AU$1,500 range (just like the Meta Quest Pro), though it may have changed its plans. After the Apple Vision Pro reveal, it’s believed Samsung delayed the device, most likely to make it a better Vision Pro rival in Samsung’s eyes – the Vision Pro is impressive, as you can find out from our hands-on Apple Vision Pro review.

If that’s the case, the VR gadget might not only more closely match the Vision Pro’s specs, it might also adopt the Vision Pro’s $3,499 (about £2,725 / AU$5,230) starting price, or something close to it.

Samsung XR/VR headset: Release date

Much like its price, we don’t know anything concrete about the incoming Samsung VR headset's release date yet. But a few signs point to a 2024 announcement – if not a 2024 release.

Firstly, there was the teaser Samsung revealed in February 2023 when it said it was partnering with Google to develop an XR headset. It didn’t set a date for when we’d hear more, but Samsung likely wouldn’t make this teasing announcement if the project was still a long way from finishing. Usually, a fuller reveal happens a year or so after the teaser – so around February 2024.

There was a rumor that Samsung’s VR headset project was delayed after the Vision Pro announcement, though the source maintained that the headset would still arrive in 2024 – just mid-to-late 2024, rather than February.

Three people on stage at Samsung Unpacked 2023 teasing Samsung's future of XR

The Samsung Unpacked 2023 XR headset teaser (Image credit: Samsung)

Then there’s the Snapdragon XR2 Plus Gen 2 chipset announcement. Qualcomm was keen to highlight Samsung and Google as partners that would be putting the chipset to use. 

It would be odd to highlight these partners if their headset was still a year or so from launching. Those partners may also have preferred to work with a later next-gen chip if the XR/VR headset was due in 2025 or later. So this would again point to a 2024 reveal, if not a precise date this year.

Lastly, there have also been suggestions that the Samsung VR headset might arrive alongside the Galaxy Z Flip 6 – Samsung's folding phone that's also due to arrive in 2024.

Samsung XR/VR headset: Specs

A lot of the new Samsung VR headset’s specs are still a mystery. We can assume it’ll use Samsung-made displays (it would be wild if Samsung used screens from one of its competitors) but the type of display tech (for example, QLED, OLED or LCD), resolution, and size are still unknown.

We also don’t know what size battery it’ll have, or its storage space, or its RAM. Nor what design it will adopt – will it look like the Vision Pro with an external display, like the Meta Quest 3 or Quest Pro, or something all-new?

Key Snapdragon XR2 Plus Gen 2 specs, including support for 4.3K displays, 8x better AI performance, and 2.5x better GPU performance

(Image credit: Qualcomm)

But we do know one thing. It’ll run (as we predicted) on a brand-new Snapdragon XR2 Plus Gen 2 chip from Qualcomm – an updated version of the chipset used by the Meta Quest Pro, and slightly more powerful than the XR2 Gen 2 found in the Meta Quest 3.

The upshot is that this platform can now support two displays at 4.3K resolution running at up to 90fps. It can also manage over 12 separate camera inputs that VR headsets will rely on for tracking – including controllers, objects in the space, and face movements – and it has more advanced AI capabilities, 2.5x better GPU performance, and Wi-Fi 7 (as well as 6 and 6E).

What we want to see from the new Samsung XR/VR headset

1. Samsung’s XR/VR headset to run on the Quest OS 

Girl wearing Meta Quest 3 headset interacting with a jungle playset

We’d love to see the best Quest apps on Samsung’s VR headset (Image credit: Meta)

This is very much a pipe dream. With Google and Samsung already collaborating on the project it’s unlikely they’d want to bring in a third party – especially if this headset is intended to compete with Apple and Meta hardware.

But the Quest platform is just so good; by far the best we’ve seen on standalone VR headsets. It’s clean, feature-packed, and home to the best library of VR games and apps out there. The only platform that maybe beats it is Steam, but that’s only for people who want to be tethered to a PC rig.

By partnering with Meta, Samsung’s headset would get all of these benefits, and Meta would have the opportunity to establish its OS as the Windows or Android of the spatial computing space – which might help its Reality Labs division to generate some much-needed revenue by licensing the platform to other headset manufacturers.

2. A (relatively) affordable price tag

Oculus Quest 2 on a white background

The Quest 2 is the king of VR headsets, because it’s affordable  (Image credit: Shutterstock / Boumen Japet)

There’s only been one successful mainstream VR headset so far: the Oculus Quest 2. The Meta-made device has accounted for the vast, vast majority of VR headset sales over the past few years (eclipsing the total lifetime sales of all previous Oculus VR headsets combined in just five months) and that’s down to one thing: it’s so darn cheap.

Other factors (like a pandemic that forced everyone inside) probably helped a bit. But fundamentally, getting a solid VR headset for $299 / £299 / AU$479 is a very attractive offer. It could be better specs-wise but it’s more than good enough and offers significantly more bang for your buck than the PC-VR rigs and alternative standalone headsets that set you back over $1,000 when you factor in everything you need.

Meta’s Quest Pro – the first headset it launched after the Quest 2, and one with a much more premium $999 / £999 / AU$1,729 price (it launched at $1,500 / £1,500 / AU$2,450) – has seemingly sold significantly worse. We don’t have exact figures, but using the Steam Hardware Survey figures for December 2023 we can see that while 37.87% of Steam VR players use a Quest 2 (making it the most popular option, and more than double the next headset), only 0.44% use a Quest Pro – that’s about 86 times fewer.
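For the curious, that ‘about 86 times’ figure is simply the ratio of the two Steam Hardware Survey shares:

37.87% ÷ 0.44% ≈ 86

In other words, for every Quest Pro showing up in the December 2023 survey there were roughly 86 Quest 2 headsets.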

The Apple Vision Pro headset on a grey background

The Apple Vision Pro is too pricey (Image credit: Apple)

So by making its headset affordable, Samsung would likely be in a win-win situation. We win because its headset isn’t ridiculously pricey like the $3,499 (around £2,800 / AU$5,300) Apple Vision Pro. Samsung wins because its headset has the best chance of selling super well.

We’ll have to wait and see what’s announced by Samsung, but we suspect we’ll be disappointed on the price front – a factor that could keep this device from becoming one of the best VR headsets out there.

3. Controllers and space for glasses 

We’ve combined two smaller points into one for this last ‘what we want to see’.

Hand tracking is neat, but ideally it’ll just be an optional feature on the upcoming Samsung VR headset rather than the only way to operate it – which is the case with the Vision Pro. 

Most VR apps are designed with controllers in mind, and because most headsets now come with handsets that have similar button layouts it’s a lot easier to port software to different systems. 

Meta Quest 3 controllers floating in a turquoise colored void.

The Meta Quest 3’s controllers are excellent, copy these Samsung (Image credit: Meta )

There are still challenges, but if your control scheme doesn’t need to be reinvented, developers have told us that’s a massive time-saver. So having controllers with this standard layout could help Samsung get a solid library of games and apps on its system by making it easier for developers to bring their software to it.

We’d also like it to be easy for glasses wearers to use the new Samsung VR headset. The Vision Pro’s prescription lenses solution is needlessly pricey when headsets like the Quest 2 and Quest 3 have a free in-built solution for the problem – an optional spacer or way to slightly extend the headset so it’s further from your face leaving room for specs.

Ideally, Samsung’s VR headset would have a free and easy solution for glasses wearers, too.


We may finally know when Apple’s Vision Pro will launch – but big questions remain

If you’re an Apple fan and your new year’s resolution is to save your money this January, we’ve got some bad news: a new rumor says Apple’s Vision Pro headset will go on sale in just a few weeks’ time. However – and perhaps fortunately for your finances – there are some serious questions floating around the rumor.

The mooted January launch date comes from Wall Street Insights, a news outlet for Chinese investors (via MacRumors). According to a machine-translated version of the report, “Apple Vision Pro is expected to be launched in the United States on January 27, 2024.”

The report adds that “Supply chain information shows that Sony is currently the first supplier of silicon-based OLEDs for the first-generation Vision Pro, and the second supplier is from a Chinese company, which will be the key to whether Vision Pro can expand its production capacity.”

With the supposed launch date just 25 days away, it might not be long before we see Apple’s most significant new product in years. Yet, despite the apparent certainty in the report, there are reasons to be skeptical about its accuracy.

Date uncertainty

For one thing, January 27 is a Saturday, an unlikely day for an Apple product launch. It could be that Wall Street Insights is referring to January 27 in China which, thanks to time zone differences, aligns with Friday January 26 in the United States. That’s a much more probable release date, as it doesn't coincide with the weekend, when many of the media outlets that would cover the Vision Pro will be providing reduced news coverage. Yet the report specifically mentions the date in the US, meaning that questions remain.

Moving past the specific date, an early 2024 launch date has been put forward by a number of reputable Apple analysts. Ming-Chi Kuo, for example, has suggested a late January or early February timeframe, while Bloomberg reporter Mark Gurman has zeroed in on February as the release month.

Either way, it’s clear that the Vision Pro is almost upon us. Apple has reportedly been training retail staff how to use the device, which implies that the company is almost ready to pull the trigger.

We’ll see how accurate the Wall Street Insights report is in a few weeks’ time. Regardless of whether or not it has the correct date, we’re undoubtedly on the brink of seeing Apple’s most anticipated new product in recent memory.


What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race earlier this year, launching a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like ChatGPT, it’s built on a type of machine learning model called a ‘large language model’ that’s been trained on a vast dataset and is capable of understanding human language as it’s written.
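If you want a feel for the underlying model family, Google also exposes PaLM 2 to developers through its PaLM API. As a rough, hedged sketch only – the package, model name and function below follow Google’s 2023 quickstart and may well have changed since – a basic text-generation call looks something like this:

```python
# Illustrative sketch of querying a PaLM 2 text model via Google's PaLM API.
# Names follow the 2023 quickstart (google.generativeai) and may differ today.
import google.generativeai as palm

palm.configure(api_key="YOUR_API_KEY")  # placeholder key for this example

response = palm.generate_text(
    model="models/text-bison-001",  # a PaLM 2-based text model
    prompt="Explain what a large language model is in one sentence.",
)

print(response.result)  # the generated reply as plain text
```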

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious to release Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models' – initially one called LaMDA.

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn”?

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds.

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.
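To give a flavour of what that looks like in practice, a prompt along the lines of “plot monthly sales of 120, 95 and 140 units as a bar chart in Python” might come back with a short matplotlib snippet similar to the illustrative example below (our own sketch, not Bard’s actual output):

```python
# Illustrative example of the kind of charting code a chatbot can generate
# from a plain-English prompt; the data here is made up for the example.
import matplotlib.pyplot as plt

months = ["January", "February", "March"]
sales = [120, 95, 140]

plt.bar(months, sales)
plt.title("Monthly sales")
plt.ylabel("Units sold")
plt.show()
```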

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, letting you summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will use the Content Authenticity Initiative’s open-source Content Credentials technology to bring transparency to images generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, the chatbot is based on similar technology to ChatGPT, and even more tools and features are on the way that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.
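At their core, these models generate text by repeatedly predicting the most likely next token given everything written so far. The toy Python sketch below illustrates the idea only – predict_next_token is a made-up stand-in for the trained model, which in reality scores an enormous vocabulary using billions of learned parameters:

```python
# Highly simplified sketch of how a large language model generates text:
# it repeatedly predicts the next token given everything written so far.
import random

def predict_next_token(tokens):
    # Made-up stand-in for the trained model; a real LLM scores its whole
    # vocabulary here using its learned parameters.
    vocabulary = ["the", "piano", "is", "easier", "to", "learn", "<end>"]
    return random.choice(vocabulary)

def generate_reply(prompt, max_tokens=20):
    tokens = prompt.split()              # real models use subword tokens, not words
    for _ in range(max_tokens):
        next_token = predict_next_token(tokens)
        if next_token == "<end>":        # the model signals it has finished
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate_reply("Is the piano or guitar easier to learn?"))
```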

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has a very limited knowledge of facts or events after January 2022. 

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $ 20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google's Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest changes to Bard since then is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

Users can do the same in the other direction: Bard works with Google Lens, so you can upload images into Bard and it will respond in text. Multimodal functionality was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there's also Google Bard's upcoming AI image generator, powered by Adobe Firefly.
