My jaw hit the floor when I watched an AI master one of the world’s toughest physical games in just six hours

An AI just mastered Labyrinth in six hours, and I am questioning my own existence.

I started playing Labyrinth in the 1970s. Though it looks deceptively simple and is fully analog, Labyrinth is an incredibly difficult, nearly 60-year-old physical board game that challenges you to navigate a metal ball through a hole-riddled maze by tilting the game platform using only the twistable knobs on two adjacent sides of the game's box frame.

I still remember my father bringing Labyrinth home to our Queens apartment, and my near-total obsession with mastering it. If you've never played, then you have no idea how hard it is to keep a metal ball on a narrow path between two holes just waiting to devour it.

It's not like you get past a few holes and you're home free; there are 60 along the whole meandering path. One false move and the ball is swallowed, and you have to start again. It takes fine motor control, dexterity, and a lot of real-time problem-solving to make it through unscathed. I may have successfully navigated the treacherous route a few times.


In the intervening years, I played sporadically (once memorably with a giant labyrinth at Google I/O), but mostly I forgot about the game, though I guess I never really forgot the challenge.

Perhaps that's why my mouth dropped open as I watched CyberRunner learn and beat the game in just six hours.

In a recently released video, researchers from the public research university ETH Zurich showed off their bare-bones AI robot, which uses a pair of actuators as 'hands' to twist the Labyrinth knobs, an overhead camera to watch the action, and a computer running an AI algorithm to learn and, eventually, beat the game.

In the video, developers explain that “CyberRunner exploits recent advances in model-based reinforcement learning and its ability to make informed decisions about potentially successful behaviors by planning into the future.”
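
That description is easier to picture with a toy example. The sketch below is not CyberRunner's code; it's a minimal, hypothetical illustration of the model-based idea on a made-up one-dimensional "ball on a line" task, with every name and number invented for the example: the agent fits a crude model of the physics from its own experience, then plans by simulating candidate action sequences with that model and executing only the most promising first move before replanning.

```python
import random

def true_dynamics(state, action):
    # Hidden physics the agent does not know: the tilt action accelerates the ball.
    pos, vel = state
    vel += 0.1 * action
    pos += vel
    return (pos, vel)

def learned_dynamics(state, action, gain):
    # The agent's internal model of the physics; 'gain' is the parameter it learns.
    pos, vel = state
    vel += gain * action
    pos += vel
    return (pos, vel)

def plan(state, gain, horizon=5, candidates=50):
    # "Planning into the future": simulate random action sequences with the
    # learned model and keep the one that ends closest to the goal at pos = 10.
    best_first, best_err = 0.0, float("inf")
    for _ in range(candidates):
        seq = [random.uniform(-1, 1) for _ in range(horizon)]
        s = state
        for a in seq:
            s = learned_dynamics(s, a, gain)
        err = abs(s[0] - 10.0)
        if err < best_err:
            best_first, best_err = seq[0], err
    return best_first  # execute only the first action, then replan next step

gain = 0.01                # initial (poor) guess at how tilting moves the ball
state = (0.0, 0.0)         # (position, velocity)
for step in range(200):
    action = plan(state, gain)
    next_state = true_dynamics(state, action)
    # Model learning: nudge 'gain' toward the value that explains the observed change.
    observed_dv = next_state[1] - state[1]
    if abs(action) > 1e-6:
        gain += 0.1 * (observed_dv / action - gain)
    state = next_state

print("final position:", round(state[0], 2), "learned gain:", round(gain, 3))
```
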

Initially, CyberRunner was no better than me or any other average human player. It dumped the metal ball into holes less than a tenth of the way along the path, and then less than a fifth of the way through. But with each attempt, CyberRunner got better, and not just a little better, but dramatically so.

In just six hours, according to the video, “CyberRunner's able to complete the maze faster than any previously recorded time.” 

The video is stunning. The two motors wiggle the board at a superhuman rate, and manage to keep the ball so perfectly on track that it's never in danger of falling into any of the holes. CyberRunner's eventual fastest time was a jaw-dropping 14.8 seconds. I think my best time was… well, it could often take many minutes.

I vividly recall playing, and how I would sometimes park the ball in the maze, taking a break mid-challenge to prepare myself for the remainder of the still-arduous journey ahead. Not so with CyberRunner. Its confidence is the kind that's only possible with an AI. It has no worries about dropping its metal ball into a hole; no fear of failure.

It also, initially, had no fear of getting caught cheating.

As CyberRunner was learning, it did what computers do and looked for the best and fastest path through the maze, which meant it sometimes ignored the intended route and took shortcuts. That's called cheating. Thankfully, the researchers caught CyberRunner and reprogrammed it so it was forced to follow the full maze.
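
The video doesn't spell out exactly how that fix works, but a common way to rule out shortcuts in reinforcement learning is to grant reward only for progress along an ordered list of waypoints on the intended path, so that hopping over a wall toward the goal earns nothing. Here's a hypothetical sketch of that idea (not CyberRunner's actual code; the waypoints and tolerances are invented):

```python
# Hypothetical reward shaping: only reaching the *next* waypoint in order pays off,
# so a shortcut that skips part of the path earns no reward.

def progress_reward(ball_xy, waypoints, reached):
    """Return (reward, updated_count) given the ball position, the ordered
    waypoint list, and how many waypoints have been reached so far."""
    def close(a, b, tol=0.5):
        return abs(a[0] - b[0]) < tol and abs(a[1] - b[1]) < tol

    if reached < len(waypoints) and close(ball_xy, waypoints[reached]):
        return 1.0, reached + 1   # reward only the next waypoint in sequence
    return 0.0, reached           # skipping ahead earns nothing

waypoints = [(1, 0), (2, 0), (2, 1), (2, 2)]          # a toy path through the maze
reward, reached = progress_reward((2, 2), waypoints, reached=0)
print(reward)  # 0.0: landing near the goal without following the path pays nothing
```
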

Of course, CyberRunner's accomplishment is not just about beating humans at a really difficult game. This is a demonstration of how an AI can solve physical-world problems based on vision, physical interaction, and machine learning. The only question is, what real-world problems will this open-source project solve next?

As for me, I need to go dig my Labyrinth out of my parents' closet.

New Apple Vision Pro video gives us a taste of escaping to its virtual worlds

The promise of Apple’s Vision Pro headset – or any of the best virtual reality headsets, for that matter – is that it can transport you to another world, at least for a while. Now, we’ve just gained a preview of how Apple’s device will do this in a whole new way.

That’s because the M1Astra account on X (formerly known as Twitter) has begun posting videos showing the Vision Pro’s Yosemite Environment in action, complete with sparkling snow drifts, imposing mountains and beautiful clear blue skies.

It looks like a gorgeous way to relax and shut out the world around you. You’ll be able to focus on the calm and tranquillity of one of the world’s most famous national parks, taking in the majestic surroundings as you move and tilt your head.

This is far from the only location that comes as part of the Vision Pro’s Environments feature – users will be able to experience environs from a sun-dappled beach and a crisp autumnal scene to the dusty plains of the Moon in outer space.

Immersive environments

The Environments feature is designed not only to help you tune out the real world, but to add a level of calm and focus to your workstation. That's because the scenes it depicts can be used as backgrounds for a large virtual movie screen, or as a backdrop to your apps, video calls and more.

But as shown in one video posted by M1Astra, you'll also be able to walk around in the environment. As the poster strolled through the area, sun glistened off the snow and clouds trailed across the sky, adding life and movement to the virtual world.

To activate an environment, you’ll just need to turn the Vision Pro’s Digital Crown. This toggles what you see between passthrough augmented reality and immersive virtual reality. That sounds like it should be quick and easy, but we’ll know more when we get to test out the device after it launches.

Speaking of which, Apple’s Vision Pro is still months away from hitting store shelves (the latest estimates are for a March 2024 release date), which means there’s plenty of time for more information about the Environments feature to leak out. What’s clear already, though, is that it could be a great thing to try once the headset is out in the wild.

Apple wants the Vision Pro to be the world’s most expensive in-flight accessory

The first beta for the Apple Vision Pro headset’s operating system – visionOS – has launched and we’re finding out a bunch of interesting details about the Apple VR headset, including that Apple wants it to be the ultimate travel companion.

Apple’s Vision Pro isn’t expected to launch until next year, but that hasn’t stopped Apple from releasing the OS early so app creators can start bringing their software to the system. This way, by the time the headset is publicly available it should have a solid library of content that’ll help justify its exceptionally high price of $3,499 (around £2,800 / AU$5,300). But the beta isn’t just giving us an idea of what third-party developers are working on for the Apple headset; it’s giving us a clear picture of the direction Apple wants to take the Vision Pro.

Previously (in our round-up of six Vision Pro details the visionOS beta has revealed) it was discovered that Apple isn’t keen for people to use its headset for VR fitness, with its guidance for app makers being that they should “avoid encouraging people to move too much.” Now we’ve learned that the Vision Pro will have a dedicated Travel Mode designed for using the headset on an airplane (discovered by MacRumors).

The Apple Vision Pro headset on a stand at the Apple headquarters

(Image credit: Future)

Travel Mode is more than just the typical airplane mode you’d find on your smartphone. Instead, it apparently adapts how the Vision Pro operates so that the experience is better suited to being crammed like a sardine next to people in Economy. According to code found in the visionOS beta, the headset will do this by switching off some of its awareness features and asking you to stay stationary while in Travel Mode.

Both of these make sense. The Vision Pro’s awareness features alert the wearer if a person or an object gets close to them while they’re wearing the headset. On a plane, where people are around you all of the time, this could make the sensors go haywire and become a major distraction from your in-flight VR movie. As for moving around, if you have people sitting on either side of you, they likely won’t appreciate it if you start flailing your arms around.

So you won’t be getting the full Vision Pro experience during your flight, but the idea of making your travel better with VR certainly sounds appealing. The beta code doesn’t go into much more detail, but we can turn to the Apple Vision Pro introduction video shown at WWDC 2023 to get an idea of how Travel Mode functions. TL;DR, you can use your headset as a private movie theatre and enjoy a 4K film of your choice (that you likely had to download before you boarded) on a massive virtual display – a much larger and higher-quality image than a plane’s built-in video screens.

A model wearing the Nreal Air glasses, looking cool

The Nreal Air AR Glasses (Image credit: Nreal)

That said, if you don’t want to splash out $3,500 on a piece of travel tech, there are much more budget-friendly AR glasses that can achieve a similar effect to the Vision Pro’s private movie theatre. The Xreal Air AR glasses (formerly Nreal Air) won’t offer you 4K visuals and have a fair few faults – namely, we feel they’re pricey for what you get and the battery life leaves something to be desired – but if you’re a frequent flier these could be just what you need, and they cost only $379 / £400 (around AU$570). And when the Xreal Beam launches, it looks like many of the AR glasses’ faults could be solved.

Meta Builder Bot concept happily builds virtual worlds based on voice description

The Metaverse, that immersive virtual world where Meta (née Facebook) imagines we'll work, play, and interact with friends and family, is also where we may someday build entire worlds with nothing but our voice.

During an online AI development update delivered, in part, by Meta/Facebook Founder and CEO Mark Zuckerberg on Wednesday (February 23), the company offered a glimpse of Builder Bot, an AI concept that allows the user to build entire virtual experiences using their voice.

Standing in what looked like a stripped-down version of Facebook's Horizon Worlds' Metaverse, Zuckerberg's and a co-worker's avatars asked a virtual bot to add to the environment an island, some furniture, clouds, a catamaran, and even a boombox that could play real music. In the demonstration, the command phrasing was natural and the 3D virtual imagery appeared instantly, though it did look a bit like the graphics you'd find in Nintendo's Animal Crossing: New Horizons.

The development of Builder Bot is part of a larger AI initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants.

Meta's Builder Bot concept

Mark Zuckerberg’s legless avatar and Builder Bot. (Image credit: Future)

Zuckerberg explained that current technology is not yet equipped to help us explore an immersive version of the internet that will ultimately live in the Metaverse. While that will require updates across a whole range of hardware and software, Meta believes AI is the key to unlocking advances that will lead to, as Zuckerberg put it, “a new generation of assistants that will help us explore new worlds”.

“When we’re wearing [smart] glasses, it will be the first time an AI system will be able to see the world from our perspective,” he added. A key goal here is for the AI Meta is developing to see as we do and, more importantly, to learn about the world as we do.

It's unclear if Builder Bot will ever become a true part of the burgeoning Metaverse, but its skill with real-time language processing and understanding how parts of the environment should go together is clearly informed by the work Meta is doing.

Mark Zuckerberg talks AI translation

Mark Zuckerberg talks AI translation (Image credit: Future)

Zuckerberg outlined a handful of other related AI projects, all of which will eventually feed into a Metaverse that can be accessed and used by anyone in the world.

These include “No Language Left Behind,” which, unlike traditional systems that often use English as an intermediate step, can translate directly from the source language to the target language. There's also the very Star Trek-like “Universal Speech Translator”, which would provide instantaneous speech-to-speech translation across all languages, including those that are primarily spoken and lack a standard written form.

“AI is going to deliver that in our lifetimes,” said Zuckerberg.

Mark Zuckerberg talks image abstraction

Mark Zuckerberg talks image abstraction (Image credit: Future)

Meta is also investing heavily in self-supervised learning (SSL) to build human-like cognition into AI systems. Instead of training with tons of images to help the AI identify patterns, the system is fed raw data and then asked to predict the missing parts. Eventually, the AI learns how to build abstract representations.
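
As a rough, hypothetical illustration of that idea (a toy of my own, not Meta's systems, with an invented "hidden rule" standing in for the structure of real data): hide part of each raw sample and train a model to predict the hidden part from the visible context, so the label comes from the data itself rather than from human annotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Raw data": small 1-D patches. Each patch has four visible values plus one
# hidden value that follows an unknown rule based on its neighbours. There are
# no human-written labels anywhere.
hidden_rule = np.array([0.5, 0.3, 0.0, 0.2])

def make_patch():
    context = rng.normal(size=4)       # the visible part of the patch
    masked = context @ hidden_rule     # the part we hide and ask the model to predict
    return context, masked

X, y = zip(*[make_patch() for _ in range(1000)])
X, y = np.stack(X), np.array(y)

# The "label" is the masked value, supplied by the data itself; that is the
# essence of self-supervised learning. A linear least-squares fit suffices here.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print("recovered rule:", np.round(w, 2))   # approximately [0.5, 0.3, 0.0, 0.2]
```
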

An AI that can understand abstraction could complete an image just from a few pieces of visual information, or generate the next frame of a video it's never seen. It could also build a visually pleasing virtual world with only your words to guide it.

For those full-on freaked out by Meta's Metaverse ambitions, Zuckerberg said that the company is building the Metaverse for everyone and is “committed to build openly and responsibly” while protecting privacy and preventing harm.

It's unlikely anyone will take his word for it, but we look forward to watching the Metaverse's development.
