These new AI smart glasses are like getting a second pair of ChatGPT-powered eyes

The Ray-Ban Meta glasses have a new rival for the title of best smart glasses: the new Solos AirGo Vision lets you quiz ChatGPT about the objects and people you're looking at.

Unlike previous Solos glasses, the AirGo Vision boasts a built-in camera and support for OpenAI's latest GPT-4o model, which together let the glasses identify what you're looking at and respond to voice prompts. For example, you could simply ask, “What am I looking at?” or give the AirGo Vision a more specific request like “Give me directions to the Eiffel Tower.”
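
Solos hasn't published how its ChatGPT integration works under the hood, but a camera-plus-voice query of this kind maps naturally onto OpenAI's public chat completions API. Here's a minimal, hypothetical Swift sketch of that sort of request – the file name camera_frame.jpg stands in for a captured frame, and an OPENAI_API_KEY environment variable is assumed:

```swift
// Hypothetical sketch: send a camera frame plus a transcribed question to
// OpenAI's GPT-4o chat completions endpoint. Not Solos' actual code.
import Foundation

let apiKey = ProcessInfo.processInfo.environment["OPENAI_API_KEY"] ?? ""
// Stand-in for a frame captured by the glasses' camera.
let frame = try! Data(contentsOf: URL(fileURLWithPath: "camera_frame.jpg"))

let body: [String: Any] = [
    "model": "gpt-4o",
    "messages": [[
        "role": "user",
        "content": [
            ["type": "text", "text": "What am I looking at?"],
            ["type": "image_url", "image_url":
                ["url": "data:image/jpeg;base64,\(frame.base64EncodedString())"]],
        ],
    ]],
]

var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
request.httpMethod = "POST"
request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.httpBody = try! JSONSerialization.data(withJSONObject: body)

// Block until the response arrives (fine for a command-line sketch).
let done = DispatchSemaphore(value: 0)
URLSession.shared.dataTask(with: request) { data, _, error in
    if let data = data {
        print(String(data: data, encoding: .utf8) ?? "")
    } else if let error = error {
        print("Request failed: \(error)")
    }
    done.signal()
}.resume()
done.wait()
```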

Another neat feature of the new Solos glasses is their modular frame design, which means you can swap out some parts – for example, the camera or lenses – to suit different situations. These additional frames start from $89 (around £70 / AU$135).

If talking to a pair of camera-equipped smart glasses sounds a little too creepy, you can also use the camera to simply take holiday snaps. The AirGo Vision also features built-in speakers to answer your questions or play music.

While there's no official price or release date for the full version of the AirGo Vision, Solos will release a version without the camera for $249 (around £200 / AU$375) in July. That means we can expect a camera-equipped pair to cost at least as much as the Ray-Ban Meta glasses, which will set you back $299 / £299 / AU$449.

How good are AI-powered smart glasses?

While we haven't yet tried the Solos AirGo Visions, it's fair to say that smart glasses with AI assistants are a work in progress. 

TechRadar's Senior Staff Writer Hamish Hector recently tried Meta AI's 'Look and Ask' feature on his Ray-Ban smart glasses and found the experience to be mixed. He noted that “the AI is – when it works – fairly handy,” but that “it wasn’t 100% perfect, struggling at times due to its camera limitations and an overload of information.”

The smart glasses failed in some tests, like identifying trees, but their ability to quickly summarize a confusing, information-packed sign about the area’s parking restrictions showed how useful they can be in some situations.

As with any AI-powered responses, you'll want to corroborate the answers to filter out errors and so-called hallucinations. But there's undoubtedly potential in the concept, particularly for travelers or anyone who is visually impaired.

The Solos AirGo Vision's support for OpenAI's latest GPT-4o model should make for an interesting comparison with the Ray-Ban Meta smart glasses when the camera-equipped version lands. Until then, you can check out our guide to the best smart glasses you can buy right now.


Apple’s next accessibility features let you control your iPhone and iPad with just your eyes

Ahead of Global Accessibility Awareness Day on May 16, 2024, Apple unveiled a number of new accessibility features for the iPhone, iPad, Mac, and Vision Pro. Eye Tracking leads a long list of new functionality, letting you control your iPhone and iPad by moving your eyes.

Eye Tracking, Music Haptics, Vocal Shortcuts, and Vehicle Motion Cues will arrive on eligible Apple gadgets later this year. These new accessibility features will most likely be released with iOS 18, iPadOS 18, visionOS 2, and the next version of macOS.

Accessibility announcements like these have become a yearly drop for Apple, with the curtain normally lifted a few weeks before WWDC, aka the Worldwide Developers Conference, which kicks off on June 10, 2024. That should be the event where Apple shows off the next generation of its main operating systems and its AI chops.

Eye Tracking looks seriously impressive

Eye Tracking demoed on an iPad.

(Image credit: Apple)

Eye Tracking is a key way to make the iPhone and iPad even more accessible. As noted in the release and captured in a video, you can navigate iPadOS – as well as iOS – open apps, and even control on-screen elements, all with just your eyes. The feature uses the front-facing camera, artificial intelligence, and local machine learning throughout the experience.

You can look around the interface and use “Dwell Control” to engage with a button or element, and gestures will also be handled through eye movement alone. This means you can look at Safari, Phone, or another app, hold your gaze, and it will open.

Most critically, all setup and usage data is kept local on the device, so you’ll be set with just your iPhone. You won’t need an accessory to use eye tracking. It’s designed for people with physical disabilities and builds upon other accessible ways to control an iPhone or iPad.

Vocal Shortcuts, Music Haptics, and Live Captions on Vision Pro

Apple's new Vocal Shortcuts for iPhone and iPad.

(Image credit: Apple)

Another new accessibility feature is Vocal Shortcuts, designed for iPad and iPhone users with ALS (amyotrophic lateral sclerosis), cerebral palsy, stroke, or “acquired or progressive conditions that affect speech.” It will let you set up a custom sound that Siri can learn and identify to launch a specific shortcut or run through a task. It lives alongside Listen for Atypical Speech, designed for the same users, which opens up speech recognition to a wider range of speech patterns.

These two features build upon ones introduced in iOS 17, so it’s great to see Apple continue to innovate. With Atypical Speech, specifically, Apple is using artificial intelligence to learn and recognize different types of speech.

Music Haptics on the iPhone is designed to let users who are hard of hearing or deaf experience music. The built-in Taptic Engine, which powers the iPhone’s haptics, will play different vibrations, like taps and textures, that mirror a song's audio. At launch, it will work across “millions of songs” within Apple Music, and there will be an open API for developers to make music from other sources accessible.
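
Apple hasn't published the Music Haptics developer API yet, but its existing Core Haptics framework already exposes the Taptic Engine primitives involved. As a rough, hypothetical illustration (not Apple's Music Haptics API), this Swift sketch plays the kind of sharp transient “tap” a drum hit might be mapped to:

```swift
// Hypothetical stand-in using Core Haptics (iOS): play one transient tap
// of the sort Music Haptics presumably maps onto beats in a song.
import CoreHaptics

func playBeatTap() throws {
    let engine = try CHHapticEngine()
    try engine.start()
    // One sharp transient event at t=0 – roughly a single drum hit.
    let tap = CHHapticEvent(
        eventType: .hapticTransient,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.6),
        ],
        relativeTime: 0
    )
    let pattern = try CHHapticPattern(events: [tap], parameters: [])
    let player = try engine.makePlayer(with: pattern)
    try player.start(atTime: CHHapticTimeImmediate)
}
```

A full Music Haptics-style implementation would presumably chain many such events, varying intensity and sharpness over time to track the song's audio.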

Additionally, Apple previewed a few other features and updates. Vehicle Motion Cues will be available on iPhone and iPad, and aims to reduce motion sickness with animated dots on the screen that change as vehicle motion is detected, all without blocking whatever you're viewing.

A look at Live Captions in visionOS on Apple Vision Pro

(Image credit: Apple)

One major addition arriving for visionOS – aka the software that powers Apple Vision Pro – will be Live Captions across the entire system. This will let you see captions for spoken dialogue in FaceTime conversations and audio from apps right in front of you. Apple’s release notes that it was designed for users who are deaf or hard of hearing, but like all accessibility features, it can be found in Settings.

Since this is Live Captions on an Apple Vision Pro, you can move the window containing the captions around and adjust its size like any other window. Vision accessibility within visionOS will also gain reduced transparency, smart inverting, and the ability to dim flashing lights.

Regarding when these will ship, Apple notes in the release that the “new accessibility features [are] coming later this year.” We’ll keep a close eye on this and expect them to ship with the next generation of OSes, like iOS 18 and iPadOS 18, meaning folks with a developer account may be able to test these features in forthcoming beta releases.

Considering that a few of these features are powered by on-device machine learning and artificial intelligence, accessibility is just one area where Apple believes AI can make an impact. We’ll likely hear the technology giant share more of its thoughts on AI and consumer-ready features at WWDC 2024.


OpenAI’s CEO to get $100 million to scan everyone’s eyes for new crypto project

After going mainstream with ChatGPT, OpenAI's CEO is now embarking on a new challenge online.

Sam Altman co-founded Worldcoin in 2019 with the mission of “building the world’s largest identity and financial network.” Now, he seems close to securing $100 million in funding to kickstart the next step of the project: scanning people's eyeballs to grant them free access to the new global cryptocurrency.

Some commentators have already expressed concerns about the ethical and privacy issues that could arise from it. So, will this end up being another privacy nightmare, much like his AI-powered bot?

Iris-scanning ID verification system

According to the official website, Worldcoin is a new global cryptocurrency that aims to “create universal access to the global economy regardless of country or background, accelerating the transition to an economic future that welcomes and benefits every person on the planet.”

Quite an ambitious mission, but how do its founders plan to do that?

The key to the whole project seems to be what they refer to as the Orb, a biometric imaging device that “uses iris biometrics to establish an individual’s unique personhood.” Once users have been verified, they can create their digital World ID and start receiving the crypto tokens.

The company says that the World ID, which was released last week in beta together with the World App, “can be used pseudonymously in a wide variety of everyday applications without revealing the user’s identity.”

This technology, the so-called proof-of-personhood protocol, is also believed to tackle some of the biggest issues raised by the rapid development of AI-powered tools. It could distinguish between a real person and a bot, for example. Developers even believe it could help deliver a universal basic income to those affected by job cuts caused by AI.

Not everyone seems thrilled by the idea, though. Famous US whistleblower Edward Snowden raised concerns about the practice back in 2021. At the time, he pointed out that Worldcoin would de facto build a global database of people's iris scans, keeping them in the form of hashes able to “match with future scans.”


The company says that it will not store eye scans, and that the device is safe to use and will not hurt people's irises.

Three people with knowledge of the deal told the Financial Times that Worldcoin is now in “advanced talks to raise fresh cash as it prepares to launch in the next few weeks.”

The startup seems to be attracting new investors, too, alongside previous names like FTX founder Sam Bankman-Fried and internet entrepreneur Reid Hoffman.

Despite still operating in beta, Worldcoin counts over 1.7 million sign-ups across the world so far, and the numbers are very likely to climb soon.
