6 new things we’ve learned about the Apple Vision Pro as its first video ad lands

We've had quite the wait for the Apple Vision Pro, considering it was unveiled back in June at Apple's annual WWDC event. Yesterday we finally got the news that the Vision Pro will be going on sale on Friday, February 2, with preorders open on Friday, January 19 – and some other new bits of information have now emerged, alongside its first video ad (below).

As Apple goes into full sales mode for this pricey mixed reality headset, it's answering some of the remaining questions we had about the device, and giving us a better idea of what it's capable of. Considering one of these will cost you $3,499 (about £2,750 / AU$5,225) and up, you're no doubt going to want all of the details you can get.

Here at TechRadar we've already had some hands-on time with the Vision Pro, and checked out how 3D spatial videos will look on it (which got a firm thumbs up). Here's what else we've found out about the Vision Pro over the last 24 hours.

1. Apple thinks it deserves to be in a sci-fi movie

Take a look at this brand new advert for the Apple Vision Pro and see how many famous movies you can name. There's a definite sci-fi angle here, with films like Back to the Future and Star Wars included, and Apple clearly wants to emphasize the futuristic nature of the device (and make strapping something to your face seem cool rather than nerdy).

If you've got a good memory then you might remember that one of the first adverts for the iPhone also made use of short clips cut from a multitude of films, featuring stars such as Marilyn Monroe, Michael Douglas, and Steve McQueen. Some 16 years on, Apple is once again using the power of the movies to push a next-gen piece of hardware.

2. The battery won't last for the whole of Oppenheimer

Apple Vision Pro

(Image credit: Apple)

Speaking of movies, you're going to need a recharge if you want to watch all of Oppenheimer on the Apple Vision Pro. Christopher Nolan's epic film runs for three hours and one minute, whereas the Vision Pro product page (via MacRumors) puts battery life at 2.5 hours for watching 2D videos.

That's when you're watching a video in the Apple TV app, and in one of the virtual environments that the Vision Pro is able to conjure up. Interestingly, the product page text saying that the device could run indefinitely as long as it was plugged into a power source has now been quietly removed.

3. The software is still a work in progress

Apple Vision Pro on a person's head

Preorders for the Vision Pro open this month (Image credit: Apple)

Considering the high price of the Apple Vision Pro, and talk of limited availability, this doesn't really feel like a mainstream device that Apple is expecting everyone to go out and buy. It's certainly no iPhone or Apple Watch – though a cheaper Vision Pro, rumored to be in the pipeline, could certainly change that dynamic somewhat.

With that in mind, the software still seems to be a work in progress. As 9to5Mac spotted in the official Vision Pro press release, the Persona feature is going to have a beta label attached for the time being – that's where you're represented in video calls by a 3D digital avatar that doesn't have a bulky mixed reality headset strapped on.

4. Here's what you'll be getting in the box

Apple Vision Pro

(Image credit: Apple)

As per the official press release from Apple, if you put down the money for a Vision Pro you'll get two different bands to wrap around your head: the Solo Knit Band and the Dual Loop Band, though it's not immediately clear how the two differ.

Also in the box are a light seal, two light seal cushions, what's described as an “Apple Vision Pro Cover” for the front of the headset, an external battery pack, a USB-C charging cable, a USB-C power adapter, and the accessory that we've all been wanting to see included – an official Apple polishing cloth.

5. Apple could release an app to help you fit the headset

Two hands holding the Apple Vision Pro headset

(Image credit: Apple)

When it comes to fitting the Apple Vision Pro snugly to your head, we think that Apple might encourage buyers to head to a physical store so that they can be helped out by an expert. However, it would seem that Apple also has plans for making sure you get the best possible fit at home.

As spotted by Patently Apple, a new patent filed by Apple mentions a “fit guidance” system inside an iPhone app. It will apparently work with “head-mountable devices” – very much like the Vision Pro – and looks designed to ensure that the user experience isn't spoiled by having the headset badly fitted.

6. There'll be plenty of content to watch

A person views an image on a virtual screen while wearing an Apple Vision Pro headset.

(Image credit: Apple)

Another little nugget from the Apple Vision Pro press release is that users will be able to access “more than 150 3D titles with incredible depth”, all through the Apple TV app. Apple is also introducing a new Immersive Video format, which promises 180-degree, three-dimensional videos in 8K quality.

This 3D video could end up being one of the most compelling reasons to buy an Apple Vision Pro – we were certainly impressed when we got to try it out for ourselves, and you can even record your own spatial video for playing back on the headset if you've got an iPhone 15 Pro or an iPhone 15 Pro Max.

You might also like

TechRadar – All the latest technology news

Read More

Seeing your own spatial video on Vision Pro is an immersive trip – and I highly recommend it

Every experience I have with Apple's Vision Pro mixed reality headset is the same as the last and yet also quite different. I liken it to peeling an onion: I think I understand the feel and texture of it, but each time I notice new gradations and even flavors that remind me that I still don't fully know Apple's cutting-edge wearable technology.

For my third go around wearing the Vision Pro I had the somewhat unique experience of viewing my own content through the powerful and pricey ($3,499 when it ships next year) headset.

A few weeks ago, Apple dropped a beta for iOS 17.2, which added Spatial Video capture to the iPhone 15 Pro and iPhone 15 Pro Max (the full version landed this week). It's a landscape-only mode video format that uses the 48MP main and 12MP Ultrawide cameras to create a stereo video image. I started capturing videos in that format almost immediately, but with the caveat that not every video is worthy of this more immersive experience (you can't be too far away from your subject, and keeping the phone level and steady helps). Still, I had a solid nine clips that I brought with me for my second and by far more personal Vision Pro Spatial Video experience.

I tried, during this third Vision Pro trial, to pay more attention to some of the headset's setup and initialization details. As I've mentioned previously, the Vision Pro is one of Apple's more bespoke hardware experiences. If you wear glasses, you will need to pay extra for a pair of custom-made Zeiss lens inserts – I provided my prescription details in advance of this test run. It's not clear how long consumers might have to wait for their own inserts (could Apple have an express optician service in the back of each Apple Store? Doubtful).

Not everyone will need those lenses, or have to endure that extra cost and wait. If you don't wear glasses, you're ahead of people like me, and likewise if you're a contact lens wearer.

Man using Apple Vision Pro

Not me wearing the Vision Pro, because Apple still won’t allow me to photograph myself wearing them. That said, pressing the digital crown is part of the initial setup process (Image credit: Apple)

Getting the custom experience right

Still, there are other customizations that I didn't pay attention to until now. The face cushion, which rests against your face and connects magnetically to the main part of the Vision Pro, comes in a few different curve styles to accommodate the contours of a range of typical human faces. I don't know how many different options Apple will offer.

One thing that's critical for a comfortable AR and VR experience is matching your eyes' pupillary distance – the distance between the centers of your pupils. This was the first time I paid attention to one of the first steps in my Vision Pro setup. After I long-pressed the headset's Digital Crown, a pair of large green shapes appeared before my eyes. The headset measured the space between my eyes, and then the dual micro-OLED displays inside the Vision Pro, with their 23 million pixels of imagery, moved to match. If you listen carefully, you might be able to hear the mechanics doing their job.

I also noted how the Vision Pro runs me through three distinct sets of eye-tracking tests, where I looked at a ring of dots and, for each one, pinched my index finger and thumb together to select them. It might feel tedious to do this three times (okay, it did) but it's a critical step that ensures the Vision Pro's primary interaction paradigm works perfectly every time.

Now, at my third wearing, I've become quite an expert at the looking and pinching thing. A gold star for me.

The Apple Vision Pro headset on a grey background

This cushion is magnetic, and detaches so you can get one that better fits your face. The band also detaches when you pull on a small, bright orange tab (Image credit: Apple)

Spatial computing is kind of familiar

Las Vegas panorama

Can you find me in this photo? (Image credit: Lance Ulanoff)

We AirDropped my spatial video and panorama shots from a nearby phone. It was nice to see how smoothly AirDrop works on the Vision Pro – I saw that someone was trying to AirDrop the content and simply looked at 'Accept' and then pinched my thumb and finger. Within seconds, the content was in my Photos library (spatial video gets its own icon).

When Apple's panorama photography was new in iOS 6, I took a lot of panoramic photos. I was tickled by the torn humans who moved too fast in the shot, and the ability to have someone appear twice in one trick panoramic photo. Apple has mostly cleared up the first issue – I noticed that fewer of my recent panos feature people with two heads. These days, though, I take very few panos and only had four decent ones to try with the Vision Pro.

Even with just a few samples, though, I was startled by the quality and immersive nature of the images. My favorite by far was the photo I took earlier this year from my CES 2023 hotel room with an iPhone 14 Pro. Taking these shots is something of a ritual. I like to see what the view and weather are like in Las Vegas, and usually share something on social media to remind people that I'm back at CES.

It would not be an exaggeration to say that this one shot, taken from fairly high up at the Planet Hollywood Hotel, was a revelation. Not just because the vista, which virtually wrapped almost all the way around my head, was gorgeous, but because, for the first time, I noticed at the far-right side of the image a complete reflection of me taking the photo. It's a detail I never noticed when looking at the pano on my phone, and there's something incredibly weird about unexpectedly spotting yourself in an immersive environment like that.

A vista from Antigua was similarly engaging. The clarity and detail overall, which is a credit to iPhone 14 Pro and iPhone 15 Pro Max photography, is impressive. I viewed most of my panos in immersive mode, but could, by using a pinch-and-push gesture with both hands, put the panoramic image back in a windowed view.

Spatial view

Train spatial video

I promise you, this is much cooler when viewed on the Vision Pro (Image credit: Lance Ulanoff)

In preparation for my spatial video experience, I shot videos of Thanksgiving dinner, Dickensian carollers, walks in the park, model trains, and interactions with a friend's four-year-old.

Each of these videos hit me a little differently, and all of them in immersive mode shared a few key features. You can view spatial video on the Vision Pro in a window, but I preferred the immersive style, which erases the borders and delivers each video in almost a cloud. Instead of hard edges, each 3D video fades away at the borders, so there's no clear delineation between the real world and the one floating in front of your face. This does reduce the field of view a bit, especially the vertical height and depth – when I viewed the spatial videos on my iPhone (on which they look like regular, flat videos), I could see everything I captured from edge to edge, while in immersive mode on the Vision Pro, some of the details got lost to the top and bottom of the ether.

With my model train videos, the 3D spatial video effect reminded me of the possibly apocryphal tale of early cinema audiences who, upon seeing a film of an oncoming train, ran screaming from the theater. I wouldn't say my video was that intense, but my model train did look like it was about to ride right into my lap.

I enjoyed every video, and while I did not feel as if I was inside any of them, each one felt more real, and whatever emotions I had watching them were heightened. I suspect that when consumers start experiencing the Vision Pro and spatial videos for themselves they might be surprised at the level of emotion they experience from family videos – it can be quite intense.

It was yet another short and seated experience, and I'm sure I didn't press the endurance of the Vision Pro's external two-hour battery pack. I did notice that if I were about to, say, work a full day, watch multiple two-hour movies, or go through a vast library of spatial videos, I could plug a power-adapter-connected cable right into the battery pack's available USB-C port.

I still don't know if the Apple Vision Pro is for everyone, but the more I use it, and the more I learn about it, the more I'm convinced that Apple is set to trigger a seismic shift in our computing experience. Not everyone will end up buying Vision Pro, but most of us will feel its impact.


New Apple Vision Pro video gives us a taste of escaping to its virtual worlds

The promise of Apple’s Vision Pro headset – or any of the best virtual reality headsets, for that matter – is that it can transport you to another world, at least for a while. Now, we’ve just gained a preview of how Apple’s device will do this in a whole new way.

That’s because the M1Astra account on X (formerly known as Twitter) has begun posting videos showing the Vision Pro’s Yosemite Environment in action, complete with sparkling snow drifts, imposing mountains and beautiful clear blue skies.

It looks like a gorgeous way to relax and shut out the world around you. You’ll be able to focus on the calm and tranquillity of one of the world’s most famous national parks, taking in the majestic surroundings as you move and tilt your head.

This is far from the only location that comes as part of the Vision Pro’s Environments feature – users will be able to experience environs from a sun-dappled beach and a crisp autumnal scene to the dusty plains of the Moon in outer space.

Immersive environments


The Environments feature is designed to be a way for you to not only tune out the real world, but to add a level of calmness and focus to your workstation. That’s because the scenes they depict can be used as backgrounds for a large virtual movie screen, or as a backdrop to your apps, video calls and more.

But as shown in one video posted by M1Astra, you'll also be able to walk around in the environment. As the poster strolled through the area, sun glistened off the snow and clouds trailed across the sky, adding life and movement to the virtual world.

To activate an environment, you’ll just need to turn the Vision Pro’s Digital Crown. This toggles what you see between passthrough augmented reality and immersive virtual reality. That sounds like it should be quick and easy, but we’ll know more when we get to test out the device after it launches.

Speaking of which, Apple’s Vision Pro is still months away from hitting store shelves (the latest estimates are for a March 2024 release date), which means there’s plenty of time for more information about the Environments feature to leak out. What’s clear already, though, is that it could be a great thing to try once the headset is out in the wild.


I tried the iPhone 15 Pro’s new spatial video feature, and it will be the Vision Pro’s killer app

I’ve had exactly two Apple Vision Pro experiences: one six months ago, on the day Apple announced its mixed reality headset, and the other just a few hours ago. And where with the first experience I felt like I was swimming across the surface of the headset’s capabilities, today I feel like I’m qualified as a Vision Pro diver. I mean, how else am I expected to feel after not only experiencing spatial video on the Vision Pro, but also shooting this form of video for the headset with a standard iPhone 15 Pro?

By now, you probably know that iOS 17.2, which Apple released today as a public beta, will be the first time most of us will gain experience with spatial video. Granted, initially it will only be half the experience. Your iPhone 15 Pro and iPhone 15 Pro Max will, with the iOS 17.2 update, add a new videography option that you can toggle under Camera Formats in Settings. Once the Vision Pro ships, sometime next year, the format will turn on automatically for Vision Pro owners who have connected the mixed reality device to their iCloud accounts.

I got a sneak peek at not only the new iPhone 15 Pro capabilities, but at what the life-like content looks like viewed on a $3,499 Apple Vision Pro headset – and I now realize that spatial video could be the Vision Pro’s killer app.

A critical iPhone design tweak

Apple Vision Pro spatial video

(Image credit: Apple)

To understand how Apple has been playing the long game with its product development, you need only look at your iPhone 15 Pro or iPhone 15 Pro Max, where you’ll find a subtle design and functional change that you likely missed, but which is obviously all about the still-unreleased Vision Pro. Apple designed the iPhone 15 Pro and Pro Max with the Vision Pro's spatial needs in mind: the 13mm ultrawide camera has moved from its iPhone 14 Pro position, diagonally opposite the 48MP main camera, to the spot directly below the main camera that the telephoto camera used to occupy, while the telephoto camera takes the ultrawide's old slot.

By repositioning these two lenses, Apple makes it possible to shoot stereoscopic or spatial video, but only when you hold the iPhone 15 Pro and iPhone 15 Pro Max in landscape mode.

It is not, I learned, just a matter of recording video through both lenses at once and shooting slightly different angles of the same scene to create the virtual 3D effect. Since the 13mm ultrawide camera shoots a much larger frame, Apple’s computational photography must crop and scale the ultrawide video to match the frames coming from the main camera.

To simplify matters, Apple is only capturing two 1080p/30fps video streams in HEVC (high-efficiency video coding) format. Owing to the dual stream, the file size is a bit larger, creating a 130MB file for about one minute of video.
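As a rough sanity check on those numbers (our arithmetic, not Apple's), the quoted figure implies a fairly modest combined bitrate. This short sketch assumes the 130MB covers both streams and that they split evenly:

```python
# Back-of-the-envelope bitrate estimate from the quoted figures:
# ~130 MB of HEVC data per minute, covering two 1080p/30fps streams.
# The even per-stream split is an assumption for illustration.

MB_PER_MINUTE = 130          # quoted file size per minute of footage
SECONDS_PER_MINUTE = 60

combined_mbps = MB_PER_MINUTE * 8 / SECONDS_PER_MINUTE  # megabits/s, both streams
per_stream_mbps = combined_mbps / 2                     # assumed even split

print(f"combined bitrate: ~{combined_mbps:.1f} Mbps")
print(f"per stream:       ~{per_stream_mbps:.1f} Mbps")
```

That works out to roughly 17 Mbps in total, which is consistent with the dual stream making files only "a bit larger" than a standard recording rather than doubling them.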

Even though these spatial files are ostensibly a new media format, they will appear like any other 2D video file on your iPhone or Mac. However, there will be limits. You can trim one of these videos, but you can’t apply any other edits, lest you break the perfect synchronization between the two streams.

The shoot

Apple Vision Pro spatial video

Spatial video capture arrives on the iPhone 15 Pro and 15 Pro Max with the iOS 17.2 public beta update, which anyone can download today (you have to change your settings to accept beta updates). Note that you’ll only be shooting horizontal spatial video (Image credit: Apple)

For my test, I used a standard iPhone 15 Pro running the iOS 17.2 developer beta. We had already enabled Spatial Video for Apple Vision Pro in Settings, under Camera > Formats. In the camera app's video capture mode, I could select a tiny icon that, naturally, looks just like the Vision Pro to shoot in Spatial Video mode.

When I selected that, the phone guided me to rotate the phone 90 degrees so it was in landscape orientation (the Vision Pro icon rotates as soon as you tap it). I also noticed that the image level tool, which is optional for all formats, is on by default when you use spatial video. This is because spatial videos are best when shot level. In fact, shooting them in situations where you know you might not be able to keep the phone level, like an action shot, could be a bad idea. Mostly this is about what it will feel like to watch the finished product in the Vision Pro headset – lots of movement in a 3D video a few centimeters from your face might induce discomfort.

Similarly, I found that it’s best to keep between three and eight feet from your subject, so they don’t end up appearing like giants in the final spatial video.

I shot a couple of short spatial videos of a woman preparing sushi. I tried to put the sushi in the foreground and her in the background to give the scene some depth. Nothing about shooting the video felt different from any others I’ve shot, though I probably overthought it a bit as I was trying to create a pair of interesting spatial videos.

Even though the iPhone is jumping through a bunch of computational hoops to create Spatial Video out of what you shoot, you should be able to play the video back instantly. We handed over our phones and then, a few minutes later, we were ready to view our videos in the Vision Pro.

Hello, my old friend

Apple Vision Pro spatial video

(Image credit: Apple)

While I was worried that after all these months, I wouldn’t remember how to use the Vision Pro, it really only took me a moment or two to reorient myself to its collection of gaze, gesture, and Digital Crown-based controls. It remains a stunningly intuitive piece of bleeding-edge tech. I still needed to hand over my glasses for a prescription measurement so we could make sure Apple inserted the right Zeiss lenses (you don’t wear glasses when using the headset). It’s a reminder that, unlike an iPhone, the Vision Pro will be a somewhat bespoke experience.

For this second wear session, I did not have the optional over-the-head strap, which meant that, for the first time, I felt the full weight of the headgear. I did my best to adjust the headband using a control knob near the back of the headset while being careful not to over-tighten it, but I’m not sure I ever found that sweet spot (note to self: get the extra headband when you do finally get to review one of these headsets).

There were some new controls since I last tried the Vision Pro – for example, I could now resize windows by looking over at the edge of a window and then by virtually pinching and pulling the white curve that appears right below it. I got this on the second try, and then it became second nature.

I finally got a good look at the Vision Pro Photos app, which was easy to navigate using my gaze and finger taps – you pinch and pull with either hand to swipe through photos and galleries. I usually kept my hands in or near my lap when performing these gestures. I looked at photos shot with the iPhone 15 Pro at 24MP and 48 MP. It was fun to zoom into those photos, so they filled my field of view, and then pinch and drag to move around the images and see some of the exquisite detail in, for instance, the lace on a red dress.

I got a look at some incredible panorama shots, including one from Monument Valley in Arizona and another from Iceland, which featured a frozen waterfall, and which virtually wrapped all the way around me. As I noted in my original Vision Pro experience, there’s finally a reason to take panoramic photos with your iPhone.

Head spatial

Apple Vision Pro spatial video

This spatial video scene was one of the most effective. Those bubbles appeared to float right by my face (Image credit: Apple)

Inside the Vision Pro Photos app is a new media category called Spatial. This is where I viewed some canned spatial videos and, finally, the pair of spatial videos I shot on the iPhone 15 Pro. There was the campfire scene I saw during my WWDC 2023 experience, a birthday celebration, an intimate scene of a family camping, another of a family cooking in a kitchen, and, my favorite, a mother and child playing with bubbles.

You can view these spatial videos in a window or full-screen, where the edges blend with either your passthrough view or an immersive environment that replaces your real world with a 360-degree wraparound image (Joshua Tree is a new addition to the environment lineup). In the bubble video, the bubbles appeared to be floating both in the scene and closer to my face; I had the impulse to reach out and touch them.

In the kitchen scene, where the family is sitting around a kitchen island eating and the father is in the background cooking, the 3D effect initially makes the father look like a tiny man. When he turned and moved closer to his family, the odd effect disappeared.

It’s not clear how spatial video shot on the iPhone 15 Pro handles focal points, and whether it defaults to a long depth of field or uses something different for the 3D effect. You can set the focus point by tapping your iPhone's screen during a spatial video shoot, but you can't change this in editing.

My two short videos were impressive, if I do say so myself. During the shoot, I did my best to put one piece of sushi the chef held up to me in the foreground, and in the final result, I got exactly the effect I was hoping for. The depth is interesting, and not overbearing or jarring. Instead, the scene looks exactly as I remember it, complete with that lifelike depth. That’s not possible with traditional videography.

What I did not do was stand up and move closer to the spatial videos. Equally, these are not videos you can step into and move around. You're still only grabbing two slightly different videos to create the illusion of depth.

In case you’re wondering, the audio is captured too, and this sounded perfectly normal. I didn't notice any sort of spatial effect, but these videos were not shot with audio sources that spanned the distance of a room.

Apple Vision Pro example

In this sample provided by Apple, you can see how the candle smoke appears to float toward you – it’s a trippier effect when you’re wearing the Vision Pro headset (Image credit: Apple)

What’s next?

Because you’ll have spatial video shooting capabilities when you install the iOS 17.2 public beta you could be shooting a lot of spatial video between now and when Apple finally starts selling the Vision Pro to consumers. These videos will look perfectly normal – but imagine having a library of spatial video to swipe through when you do finally buy the Vision Pro. That, and the fact that your panoramas will look stunning on the device, may finally be the reason you buy Apple's headset.

Naturally, the big stumbling factor here is price. Apple plans on charging $3,499 (around £2,800 / AU$5,300) for the Vision Pro, not including the head strap accessory, which, as mentioned, you’ll probably need. That means that while millions may own iPhone 15 Pros and be able to shoot spatial video, precious few will be able to watch them on a Vision Pro.

Perhaps Apple will make the Vision Pro part of one of its financing plans, so that people can pay it off with a monthly fee. There might also be discounts if you buy an iPhone 15 Pro. Maybe not. Whatever Apple does, spatial video may make the most compelling case yet for, if not owning a Vision Pro, then at least wishing you did.


Your iPhone 15 Pro can now capture spatial video for the Vision Pro

When Apple unveiled the Vision Pro headset, it explained that you’d be able to capture so-called spatial videos for the device using an iPhone 15 Pro or iPhone 15 Pro Max. Yet you’ve not been able to do that with either of the best iPhones currently available – until now.

That’s because Apple has just launched iOS 17.2 beta 2, and with it comes the ability to record 3D spatial videos. That means you can start prepping videos for the headset using just your iPhone and its main and ultrawide cameras; no fancy equipment necessary.

Of course, you can’t actually see these videos in their intended 3D environment yet, because the Vision Pro hasn’t launched – it’s not expected until some time in early 2024.

But what you can do is start filming videos ready to be used in 3D apps built using Apple’s frameworks, like RealityKit. So, if you’ve got your heart set on building a Vision Pro app that integrates 3D video, you can get started more or less right away.

A taste of things to come

Apple Vision Pro spatial video

(Image credit: Apple)

To enable spatial video capture on an iPhone, you’ll obviously need to be running iOS 17.2 beta 2. Once you are, open the Settings app, select Camera > Formats, then enable the Spatial Video for Apple Vision Pro toggle.

Now, the next time you open the Camera app, there will be a Spatial option for video recording. Just start filming and your iPhone will do all the necessary legwork to make the video 3D-enabled.

As spotted by 9to5Mac, Apple says video captured in this way will be recorded at 1080p and 30fps, and that a minute of spatial footage filmed this way will take up around 130MB of space. Better make sure you have plenty of free storage before you start.
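To put that figure in perspective, here's a quick, unofficial storage estimate based solely on the ~130MB-per-minute number (actual sizes will vary with scene complexity, since HEVC uses variable bitrates):

```python
# Rough storage planner for spatial video, based only on the quoted
# ~130 MB-per-minute figure; real-world file sizes will vary.

MB_PER_MINUTE = 130

def minutes_of_footage(free_gb: float) -> float:
    """Approximate minutes of spatial video that fit in free_gb gigabytes."""
    return free_gb * 1000 / MB_PER_MINUTE  # treating 1 GB as 1,000 MB

for free_gb in (10, 50, 128):
    print(f"{free_gb:>3} GB free ≈ {minutes_of_footage(free_gb):.0f} minutes")
```

By that estimate, even a healthy 10GB of free space buys you only around an hour and a quarter of footage.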

When the Vision Pro eventually makes it onto shelves, you’ll also be able to capture videos using the headset itself, too. For now, though, you’re limited to a recent high-end iPhone, but it seems to be a taste of something greater.


Meta Quest 3 teardown video shows lower price doesn’t mean low-quality

We just got a good look at the guts inside a Quest 3 headset. iFixit tore down the VR gear into its individual parts to find out if the device offers good performance for its price point. Short answer: yes, it does, although there are some design flaws that make it difficult to repair.

What’s notable about the Quest 3 is that it has better “mixed-reality capabilities” than the Quest Pro. It's able to automatically map out a room as well as accurately keep track of the distance between objects without needing a “safe space”. The former is made possible by a depth sensor while the latter is thanks to the “time of flight sensor”. iFixit makes the interesting observation that the time of flight components could fit perfectly in the Quest Pro. 

It’s worth mentioning that Andrew Bosworth, Meta’s Chief Technology Officer, once stated the sensors were removed from the pro model because they added extra “cost and weight” without providing enough benefits. The Quest 3 is also much slimmer, clocking in at 512g.

Meta Quest 3 breakdown

(Image credit: iFixit)

Hardware improvements

Digging deeper into the headset, iFixit offered a zoomed-in look at the LCD panels through a powerful microscope. The screens output a resolution of 2,064 x 2,208 pixels per eye with a refresh rate of 120Hz. This is greater than the Quest Pro’s peak resolution of 1,920 x 1,800 pixels. The video explains that the Quest 3 can manipulate the intensity of color clusters, mixing everything into the high-quality visuals we see. Combining the LCD panels with the time of flight sensor results in a “much better [full-color] passthrough experience” than before.
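To quantify that "greater than" claim, here's a quick bit of arithmetic on the quoted per-eye resolutions (the figures are from the article; the percentage is our calculation):

```python
# Per-eye pixel counts implied by the quoted resolutions.

quest3_pixels = 2064 * 2208      # Meta Quest 3, per eye
quest_pro_pixels = 1920 * 1800   # Meta Quest Pro, per eye

increase = quest3_pixels / quest_pro_pixels - 1
print(f"Quest 3:   {quest3_pixels:,} pixels per eye")
print(f"Quest Pro: {quest_pro_pixels:,} pixels per eye")
print(f"≈ {increase:.0%} more pixels per eye")
```

In other words, the cheaper headset pushes roughly a third more pixels to each eye than its pro-badged sibling.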

Additionally, the headset has greater power behind it, since it houses the Qualcomm Snapdragon XR2 Gen 2 chipset.

Of course, iFixit took the time to judge the Quest 3 on its repairability, and Meta did a good job on that front – for the most part. The controllers are easy to repair as their construction is relatively simple. They’re held together by a few screws, a magnet, and a series of ribbon cables at the top. Replacing the batteries is also pretty easy, as each controller takes a single AA battery.

Awkward repairs

On the headset, it's a slightly different story. The battery on the main unit is replaceable, too. However, it’s located at the center of the device behind 50 screws, multiple coax cables, various connectors, a heatsink, and the mainboard. If you like to do your own repairs on your electronics, it may take you a while to fix the Quest 3.

Funnily enough, iFixit makes a good case for why the Quest 3 is a better headset than the Quest Pro. Granted, it lacks face and eye tracking, but with mixed reality this much more immersive, are people really going to miss them? Plus, it's half the price. If the Quest 3 is the new standard moving forward, it makes you wonder how Meta is going to improve on it with a Quest Pro 2 (assuming one is in the works).

While we have you, check out TechRadar’s list of the best VR headsets for 2023.


Forget ChatGPT – NExT-GPT can read and generate audio and video prompts, taking generative AI to the next level

2023 has felt like a year dedicated to artificial intelligence and its ever-expanding capabilities, but the era of pure text output is already losing steam. The AI scene might be dominated by giants like ChatGPT and Google Bard, but a new large language model (LLM), NExT-GPT, is here to shake things up – offering the full bounty of text, image, audio, and video output. 

NExT-GPT is the brainchild of researchers from the National University of Singapore and Tsinghua University. Pitched as an ‘any-to-any’ system, NExT-GPT can accept inputs in different formats and deliver responses in the desired output format – video, audio, image, or text. This means you can put in a text prompt and have NExT-GPT turn it into a video, or give it an image and have that converted to an audio output.

OpenAI has only just announced ChatGPT’s ability to ‘see, hear and speak’, which is similar to what NExT-GPT is offering – but ChatGPT is going for a more mobile-friendly version of this kind of feature, and is yet to introduce video capabilities.

We’ve seen a lot of ChatGPT alternatives and rivals pop up over the past year, but NExT-GPT is one of the few LLMs we’ve seen so far that can match the text-based output of ChatGPT while also providing outputs beyond what OpenAI’s popular chatbot can currently do. You can head over to the GitHub page or the demo page to try it out for yourself.

So, what is it like?

I’ve fiddled around with NExT-GPT on the demo site and I have to say I’m impressed, but not blown away. Of course, this is not a polished product with the advantages of public feedback, multiple updates, and so on – but it is still very good.

I asked it to turn a photo of my cat Miso into an image of him as a librarian, and I was pretty happy with the result. It may not be at the same level of quality as established image generators like Midjourney or Stable Diffusion, but it was still an undeniably very cute picture.

Cat in a library wearing glasses

This is probably one of the least cursed images I’ve personally generated using AI. (Image credit: Future via NExT-GPT)

I also tested out the video and audio features, but those didn't go quite as well as the image generation. The generated videos, again, weren’t awful, but they had the very obvious ‘made by AI’ look that comes with a lot of generated images and videos, with everything looking a little distorted and wonky. It was uncanny.

Overall, there’s a lot of potential for this LLM to fill the audio and video gaps left by big AI names like OpenAI and Google. I do hope that as NExT-GPT gets better and better, we’ll see higher-quality outputs and be able to make some excellent home movies starring our cats in no time.


Microsoft axes Video Editor in latest Windows 10 Photos app update, and users aren’t happy

Hot on the heels of a freshly updated Photos app in Windows 10, which has sparked discussion among users about its merits, Microsoft seems intent on stoking the fire.

The new Photos app is missing some of its predecessor's editing tools, has gained some new ones, and no longer has a built-in Video Editor. Instead, the editor is being replaced with a web-based app called Clipchamp.

According to Windows Latest, you may be able to open the old Video Editor, but if it’s been updated (probably through the most recent Windows 10 update), you’ll be met with a pop-up saying the following: 

“Microsoft Video Editor is no longer available in the Photos app. Your previous video projects can be accessed by downloading the Photos Legacy app in Settings. For new videos, unleash your creativity with Clipchamp.”

So, what can you do now?

You can still download the Photos Legacy app in the Microsoft Store, like the pop-up says, and restore the original Video Editor. Yet Windows Latest speculates that this might signal the beginning of the end for this generation of the Photos app and its editing capabilities. Eventually, we may not even have a Photos Legacy app at all (along with its Video Editor feature).  

The Photos Legacy app is similar to the Windows 11 version of the app, and it differs from the previous Windows 10 Photos app. The changes that angered users include the removal of the Clarity slider and the Spot fix feature. At least Windows 10 users were notified shortly before the changes landed.

The move is presumably because Microsoft wants to usher users away from the Video Editor feature and over to the web-based Clipchamp, which Microsoft acquired back in 2021. Windows 11’s Photos and Windows 10’s Photos will still include video editing for now, as a Microsoft engineer confirmed to Windows Latest.

Microsoft Store in Windows 10

(Image credit: Microsoft)

The new video editor in town: Clipchamp

So what’s Clipchamp? It’s a free video editor that lets users make as many videos as they like in high definition (1080p). It’s a browser-based app: all you need is a Microsoft account to log in at clipchamp.com. You can find our review of Clipchamp here.

This app might remind you of a relic of the recent past – Windows Movie Maker. Movie Maker is no more – officially decommissioned back in 2017 – and Microsoft is positioning Clipchamp as its replacement.

Clipchamp is a more capable video-editing app, and allows any user to make a video that looks pretty professional. It also has a user-friendly interface and quick setup process. However, many still liked the old Video Editor, perhaps for its even more straightforward simplicity. 

Clipchamp

(Image credit: Sofia Wyciślik-Wilson)

What's the actual problem?

Windows 10’s Video Editor wasn’t just known for its simple approach: it could also encode videos at much smaller file sizes than Clipchamp. In Microsoft’s Feedback Hub, where users give feedback directly to Microsoft, one user asked (as outlined by Windows Latest): “Why is the Clipchamp exported video 5 times the size of the photo “legacy” video editor?”

Yikes. 

The user details their complaint, comparing Clipchamp with Photos Legacy’s Video Editor, and they aren’t happy. I understand why; there's a big difference, especially if you’re making a video for personal rather than commercial purposes. File storage isn’t free, after all!

It makes you think – does Microsoft have plans to present a repackaged Video Editor elsewhere? Maybe it could enjoy a new lease on life as a paid download if it still maintains such popularity.

If you have similar thoughts or an opinion of your own you’d like to share, Microsoft often repeats that it would like to hear users’ thoughts on the matter. The uproar was so loud when it tried to do something similar with Paint that the beloved app was brought back as an optional download via the Microsoft Store, so maybe the tech giant will listen to users this time around too.


Meta Quest 3 video leak shows off thinner design and new controllers

The Meta Quest 3 (aka the Oculus Quest 3) is now official, but isn't due to go on sale until September or October time. If you're keen for an earlier look at the virtual reality headset before then, an unboxing video has made its way online.

This comes from @ZGFTECH on X/Twitter (via Android Authority), and we get a full look at the new device and the controllers that come with it. Meta has already published promo images of the headset, but it's interesting to see it in someone's hands.

As revealed by Meta chief Mark Zuckerberg, the Meta Quest 3 is some 40% thinner than the Oculus Quest 2 that came before it. From this video it looks like the Quest 2's silicone face pad and cloth strap have been carried over to the new piece of hardware.

You may recall that the Quest 2 originally shipped with foam padding, before Meta responded to complaints of skin irritation by replacing the foam with silicone. That lesson now appears to have been learned with this new device and the Meta Quest Pro.

Take control

The controllers that come with the Meta Quest 3 look a lot like the ones supplied with the Meta Quest Pro, though these don't have built-in cameras. The ring design of the Oculus Quest 2 has been ditched, with integrated sensors and predictive AI taking over tracking duties, according to Meta.

As for the outer packaging, it's not particularly inspiring, featuring just the name of the device on the top. Presumably something a bit more eye-catching will be put together before the headset actually goes on sale.

It's not clear where the headset has been sourced from, but the device has clearly been in testing for a while. This is becoming something of a running theme too, because the Meta Quest Pro was leaked in similar fashion after being left behind in a hotel room.

We should get all the details about the Meta Quest 3, including the date when we'll actually be able to buy it, on September 27 at the Meta Connect event. TechRadar will of course bring you all the news from the show, and any further leaks that may emerge between now and then.


YouTube video translation is getting an AI-powered dubbing tool upgrade

YouTube is going to help its creators reach an international audience, as the platform plans to introduce a new AI-powered dubbing tool for translating videos into other languages.

Announced at VidCon 2023, this latest endeavor aims to give creators a quick and easy way to translate their content “at no cost” into languages they don’t speak. This could help out smaller channels, which may not have the resources to hire a human translator. To make this all possible, Amjad Hanif, vice president of Creator Products at YouTube, revealed that the tool will use the Google-created Aloud, and that the platform will be bringing over the team behind the AI from Area 120, a division of the parent company that frequently works on experimental tech.

Easy translation

The way the translation system works, according to the official Aloud website, is that the AI first transcribes a video into a script. You then edit the transcription to get rid of any errors, make clarifications, or highlight text “where timing is critical.” From there, you give the edited script back to Aloud, which will automatically translate your video into the language of your choice. Once done, you can publish the newly dubbed content by uploading the new audio tracks onto the original video.

A Google representative told us “creators do not have to [actually] understand any of the languages that they are dubbing into.” Aloud will handle all of the heavy lifting surrounding complex tasks like “translation, timing, and speech synthesis.” Again, all you have to do is double-check the transcription. 

Future changes

It’s unknown when the Aloud update will launch. However, YouTube is already working on expanding the AI beyond what’s currently possible. Right now, Aloud can only translate English content into either Spanish or Portuguese, but there are plans to expand into other languages, from Hindi to Indonesian, plus support for different dialects.

Further down the line, the platform will introduce a variety of features such as “voice preservation, better emotion transfer, and even lip reanimation” to improve enunciation. Additionally, YouTube is going to build in safeguards ensuring only creators can “dub their own content”.

The same Google representative from earlier also told us the platform is testing the Aloud AI with “hundreds of [YouTube] creators” with plans to add more over time. As of June 2023, over 10,000 videos have been dubbed in over 70 languages. 

You can join the early access program by filling out the official Google Docs form. If you want to know what an Aloud dub sounds like, watch the trailer for the Amoeba Sisters channel on YouTube. Click the gear icon, go to Audio Track, then select Spanish. The robotic voice you’ll hear is an example of what the AI creates.
