I tried the iPhone 15 Pro’s new spatial video feature, and it will be the Vision Pro’s killer app

I’ve had exactly two Apple Vision Pro experiences: one six months ago, on the day Apple announced its mixed reality headset, and the other just a few hours ago. Where the first experience felt like swimming across the surface of the headset’s capabilities, today I feel qualified as a Vision Pro diver. I mean, how else am I expected to feel after not only experiencing spatial video on the Vision Pro, but also shooting this form of video for the headset with a standard iPhone 15 Pro?

By now, you probably know that iOS 17.2, which Apple released today as a public beta, will be the first time most of us will gain experience with spatial video. Granted, initially it will only be half the experience. Your iPhone 15 Pro and iPhone 15 Pro Max will, with the iOS 17.2 update, add a new videography option that you can toggle under Camera Formats in Settings. Once the Vision Pro ships, sometime next year, the format will turn on automatically for Vision Pro owners who have connected the mixed reality device to their iCloud accounts.

I got a sneak peek at not only the new iPhone 15 Pro capabilities, but also at what the lifelike content looks like when viewed on a $3,499 Apple Vision Pro headset – and I now realize that spatial video could be the Vision Pro’s killer app.

A critical iPhone design tweak

Apple Vision Pro spatial video

(Image credit: Apple)

To understand how Apple has been playing the long game with its product development, you need only look at your iPhone 15 Pro or iPhone 15 Pro Max, where you’ll find a subtle design and functional change that you likely missed, but which is obviously all about the still-unreleased Vision Pro. It turns out Apple designed the iPhone 15 Pro and Pro Max with the Vision Pro’s spatial needs in mind: the 13mm ultrawide camera moved from its iPhone 14 Pro position, diagonally opposite the 48MP main camera, to the spot vertically below the main camera, which on the 14 Pro was occupied by the telephoto camera. The telephoto camera, in turn, took over the ultrawide’s old slot.

By repositioning these two lenses, Apple makes it possible to shoot stereoscopic or spatial video, but only when you hold the iPhone 15 Pro and iPhone 15 Pro Max in landscape mode.

It is not, I learned, just a matter of recording video through both lenses at once and shooting slightly different angles of the same scene to create the virtual 3D effect. Since the 13mm ultrawide camera shoots a much larger frame, Apple’s computational photography must crop and scale the ultrawide video to match the frames coming from the main camera.
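The crop-and-scale step described above can be sketched with some simple arithmetic. This is purely an illustration, not Apple’s actual pipeline, and the 24mm figure for the main camera is an assumption based on its commonly cited full-frame-equivalent focal length:

```python
# Illustrative sketch (not Apple's actual pipeline): to pair the 13mm
# ultrawide with the main camera, the ultrawide frame must be cropped to
# the main camera's narrower field of view, then scaled back up to 1080p.
# The 24mm main-camera figure is an assumption (full-frame equivalent).
main_focal, ultrawide_focal = 24, 13
crop_factor = main_focal / ultrawide_focal   # ~1.85

src_w, src_h = 1920, 1080                    # ultrawide frame at 1080p
crop_w = round(src_w / crop_factor)          # region matching the main FoV
crop_h = round(src_h / crop_factor)
print(crop_w, crop_h)  # 1040 585 -> this region is then upscaled to 1920x1080
```

In other words, only roughly the central third of the ultrawide frame (by area) survives into the final stereo pair, which is why the computational matching matters.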

To simplify matters, Apple is only capturing two 1080p/30fps video streams in HEVC (High Efficiency Video Coding) format. Owing to the dual streams, the files are a bit larger: about 130MB for one minute of video.
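Those numbers imply a fairly modest combined bitrate for the two streams; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures above: two 1080p/30fps HEVC
# streams totalling roughly 130MB per minute of footage.
file_size_mb = 130                        # megabytes per minute, per Apple
seconds = 60

bits = file_size_mb * 1_000_000 * 8       # MB -> bits
combined_bitrate_mbps = bits / seconds / 1_000_000
print(round(combined_bitrate_mbps, 1))    # ~17.3 Mbps across both streams
```

That works out to around 17 Mbps in total – roughly in line with a single high-quality 1080p HEVC stream, which helps explain why the files are only "a bit" larger rather than double the size.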

Even though these spatial files are ostensibly a new media format, they will appear like any other 2D video file on your iPhone or Mac. However, there will be limits. You can trim one of these videos, but you can’t apply any other edits, lest you break the perfect synchronization between the two streams.

The shoot

Apple Vision Pro spatial video

Spatial video capture arrives on the iPhone 15 Pro and 15 Pro Max with the iOS 17.2 public beta update, which anyone can download today (you have to change your settings to accept beta updates). Note that you’ll only be shooting horizontal spatial video (Image credit: Apple)

For my test, I used a standard iPhone 15 Pro running the iOS 17.2 developer beta. We had already enabled Spatial Video for Apple Vision Pro under Settings > Camera > Formats. In the Camera app’s video capture mode, I could select a tiny icon that, naturally, looks just like the Vision Pro to shoot in Spatial Video mode.

When I selected that, the phone guided me to rotate the phone 90 degrees so it was in landscape orientation (the Vision Pro icon rotates as soon as you tap it). I also noticed that the image level tool, which is optional for all formats, is on by default when you use spatial video. This is because spatial videos are best when shot level. In fact, shooting them in situations where you know you might not be able to keep the phone level, like an action shot, could be a bad idea. Mostly this is about what it will feel like to watch the finished product in the Vision Pro headset – lots of movement in a 3D video a few centimeters from your face might induce discomfort.

Similarly, I found that it’s best to keep between three and eight feet from your subject, so they don’t end up appearing like giants in the final spatial video.

I shot a couple of short spatial videos of a woman preparing sushi. I tried to put the sushi in the foreground and her in the background to give the scene some depth. Nothing about shooting the video felt different from any others I’ve shot, though I probably overthought it a bit as I was trying to create a pair of interesting spatial videos.

Even though the iPhone is jumping through a bunch of computational hoops to create Spatial Video out of what you shoot, you should be able to play the video back instantly. We handed over our phones and then, a few minutes later, we were ready to view our videos in the Vision Pro.

Hello, my old friend

Apple Vision Pro spatial video

(Image credit: Apple)

While I was worried that after all these months, I wouldn’t remember how to use the Vision Pro, it really only took me a moment or two to reorient myself to its collection of gaze, gesture, and Digital Crown-based controls. It remains a stunningly intuitive piece of bleeding-edge tech. I still needed to hand over my glasses for a prescription measurement so we could make sure Apple inserted the right Zeiss lenses (you don’t wear glasses when using the headset). It’s a reminder that, unlike an iPhone, the Vision Pro will be a somewhat bespoke experience.

For this second wear session, I did not have the optional over-the-head strap, which meant that, for the first time, I felt the full weight of the headgear. I did my best to adjust the headband using a control knob near the back of the headset while being careful not to over-tighten it, but I’m not sure I ever found that sweet spot (note to self: get the extra headband when you do finally get to review one of these headsets).

There were some new controls since I last tried the Vision Pro – for example, I could now resize windows by looking over at the edge of a window and then by virtually pinching and pulling the white curve that appears right below it. I got this on the second try, and then it became second nature.

I finally got a good look at the Vision Pro Photos app, which was easy to navigate using my gaze and finger taps – you pinch and pull with either hand to swipe through photos and galleries. I usually kept my hands in or near my lap when performing these gestures. I looked at photos shot with the iPhone 15 Pro at 24MP and 48 MP. It was fun to zoom into those photos, so they filled my field of view, and then pinch and drag to move around the images and see some of the exquisite detail in, for instance, the lace on a red dress.

I got a look at some incredible panorama shots, including one from Monument Valley in Arizona and another from Iceland, which featured a frozen waterfall, and which virtually wrapped all the way around me. As I noted in my original Vision Pro experience, there’s finally a reason to take panoramic photos with your iPhone.

Head spatial

Apple Vision Pro spatial video

This spatial video scene was one of the most effective. Those bubbles appeared to float right by my face (Image credit: Apple)

Inside the Vision Pro Photos app is a new media category called Spatial. This is where I viewed some canned spatial videos and, finally, the pair of spatial videos I shot on the iPhone 15 Pro. There was the campfire scene I saw during my WWDC 2023 experience, a birthday celebration, an intimate scene of a family camping, another of a family cooking in a kitchen, and, my favorite, a mother and child playing with bubbles.

You can view these spatial videos in a window or full-screen, where the edges blend with either your passthrough view or an immersive environment (Joshua Tree is a new one) that replaces your real world with a 360-degree wraparound image. In the bubble video, the bubbles appeared to be floating both in the scene and closer to my face; I had the impulse to reach out and touch them.

In the kitchen scene, where the family is sitting around a kitchen island eating and the father is in the background cooking, the 3D effect initially makes the father look like a tiny man. When he turned and moved closer to his family, the odd effect disappeared.

It’s not clear how spatial video shot on the iPhone 15 Pro handles focal points – whether it defaults to a long depth of field or uses something different for the 3D effect. You can set the focus point by tapping your iPhone’s screen during a spatial video shoot, but you can’t change this in editing.

My two short videos were impressive, if I do say so myself. During the shoot, I did my best to put one piece of sushi the chef held up to me in the foreground, and in the final result, I got exactly the effect I was hoping for. The depth is interesting, and not overbearing or jarring. Instead, the scene looks exactly as I remember it, complete with that lifelike depth. That’s not possible with traditional videography.

What I did not do was stand up and move closer to the spatial videos. Equally, these are not videos you can step into and move around. You're still only grabbing two slightly different videos to create the illusion of depth.

In case you’re wondering, the audio is captured too, and this sounded perfectly normal. I didn't notice any sort of spatial effect, but these videos were not shot with audio sources that spanned the distance of a room.

Apple Vision Pro example

In this sample provided by Apple, you can see how the candle smoke appears to float toward you – it’s a trippier effect when you’re wearing the Vision Pro headset (Image credit: Apple)

What’s next?

Because you’ll have spatial video shooting capabilities when you install the iOS 17.2 public beta, you could be shooting a lot of spatial video between now and when Apple finally starts selling the Vision Pro to consumers. These videos will look perfectly normal – but imagine having a library of spatial videos to swipe through when you do finally buy the Vision Pro. That, and the fact that your panoramas will look stunning on the device, may finally be the reason you buy Apple’s headset.

Naturally, the big stumbling block here is price. Apple plans to charge $3,499 (around £2,800 / AU$5,300) for the Vision Pro, not including the head strap accessory, which, as mentioned, you’ll probably need. That means that while millions may own iPhone 15 Pros and be able to shoot spatial video, precious few will be able to watch them on a Vision Pro.

Perhaps Apple will make the Vision Pro part of one of its financing plans, so that people can pay it off with a monthly fee. There might also be discounts if you buy an iPhone 15 Pro. Maybe not. Whatever Apple does, spatial video may make the most compelling case yet for, if not owning a Vision Pro, then at least wishing you did.

You might also like

TechRadar – All the latest technology news

Read More

Your iPhone 15 Pro can now capture spatial video for the Vision Pro

When Apple unveiled the Vision Pro headset, it explained that you’d be able to capture so-called spatial videos for the device using an iPhone 15 Pro or iPhone 15 Pro Max. Yet you’ve not been able to do that with either of the best iPhones currently available – until now.

That’s because Apple has just launched iOS 17.2 beta 2, and with it comes the ability to record 3D spatial videos. That means you can start prepping videos for the headset using just your iPhone and its main and ultrawide cameras; no fancy equipment necessary.

Of course, you can’t actually see these videos in their intended 3D environment yet, because the Vision Pro hasn’t launched – it’s not expected until some time in early 2024.

But what you can do is start filming videos ready to be used in 3D apps built using Apple’s frameworks, like RealityKit. So, if you’ve got your heart set on building a Vision Pro app that integrates 3D video, you can get started more or less right away.

A taste of things to come

Apple Vision Pro spatial video

(Image credit: Apple)

To enable spatial video capture on an iPhone, you’ll obviously need to be running iOS 17.2 beta 2. Once you are, open the Settings app and select Camera, then enable the Spatial Video for Apple Vision Pro toggle.

Now, the next time you open the Camera app, there will be a Spatial option for video recording. Just start filming and your iPhone will do all the necessary legwork to make the video 3D-enabled.

As spotted by 9to5Mac, Apple says video captured in this way will be recorded at 1080p and 30fps, and that a minute of spatial footage filmed this way will take up around 130MB of space. Better make sure you have plenty of free storage before you start.

When the Vision Pro eventually makes it onto shelves, you’ll also be able to capture videos using the headset itself. For now, though, you’re limited to a recent high-end iPhone – but it seems to be a taste of something greater.


Meta Quest 3 teardown video shows lower price doesn’t mean low-quality

We just got a good look at the guts inside a Quest 3 headset. iFixit tore down the VR gear into its individual parts to find out if the device offers good performance for its price point. Short answer: yes, it does, although there are some design flaws that make it difficult to repair.

What’s notable about the Quest 3 is that it has better “mixed-reality capabilities” than the Quest Pro. It's able to automatically map out a room as well as accurately keep track of the distance between objects without needing a “safe space”. The former is made possible by a depth sensor while the latter is thanks to the “time of flight sensor”. iFixit makes the interesting observation that the time of flight components could fit perfectly in the Quest Pro. 

It’s worth mentioning that Andrew Bosworth, Meta’s Chief Technology Officer, once stated the sensors were removed from the Pro model because they added extra “cost and weight” without providing enough benefits. The Quest 3 is much slimmer, clocking in at 512g.

Meta Quest 3 breakdown

(Image credit: iFixit)

Hardware improvements

Digging deeper into the headset, iFixit offered a zoomed-in look at the LCD panels through a powerful microscope. The screens output a resolution of 2,064 x 2,208 pixels per eye with a refresh rate of 120Hz. This is greater than the Quest Pro’s peak resolution of 1,920 x 1,800 pixels. The video explains that the Quest 3 can manipulate the intensity of color clusters, mixing everything into the high-quality visuals we see. Combining the LCD panels with the time of flight sensor results in a “much better [full-color] passthrough experience” than before.
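The per-eye resolution jump is easy to quantify from the figures above:

```python
# Comparing the per-eye pixel counts quoted above.
quest3 = 2064 * 2208       # Quest 3: 2,064 x 2,208 per eye
quest_pro = 1920 * 1800    # Quest Pro: 1,920 x 1,800 per eye

print(quest3)                              # 4557312 pixels
print(quest_pro)                           # 3456000 pixels
print(round(quest3 / quest_pro - 1, 2))    # 0.32 -> about 32% more pixels
```

So the cheaper headset is pushing roughly a third more pixels per eye than the Quest Pro, which makes the passthrough improvement less surprising.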

Additionally, the headset has greater power behind it, since it houses the Qualcomm Snapdragon XR2 Gen 2 chipset.

Of course, iFixit took the time to judge the Quest 3 on its repairability, and Meta did a good job on that front – for the most part. The controllers are easy to repair as their construction is relatively simple. They’re held together by a few screws, a magnet, and a series of ribbon cables at the top. Replacing the battery is also pretty easy, as each controller takes a single AA battery.

Awkward repairs

On the headset, it's a slightly different story. The battery on the main unit is replaceable, too. However, it’s located at the center of the device behind 50 screws, multiple coax cables, various connectors, a heatsink, and the mainboard. If you like to do your own repairs on your electronics, it may take you a while to fix the Quest 3.

Funnily enough, iFixit really makes a good case for why and how the Quest 3 is a better headset than the Quest Pro. Granted, it lacks face and eye tracking, but when you have a more immersive mixed reality, are people really going to miss them? Plus, it's half the price. If the Quest 3 is the new standard moving forward, it makes you wonder how Meta is going to improve on the Quest Pro 2 (assuming it’s in the works).

While we have you, check out TechRadar’s list of the best VR headsets for 2023.


Forget ChatGPT – NExT-GPT can read and generate audio and video prompts, taking generative AI to the next level

2023 has felt like a year dedicated to artificial intelligence and its ever-expanding capabilities, but the era of pure text output is already losing steam. The AI scene might be dominated by giants like ChatGPT and Google Bard, but a new large language model (LLM), NExT-GPT, is here to shake things up – offering the full bounty of text, image, audio, and video output. 

NExT-GPT is the brainchild of researchers from the National University of Singapore and Tsinghua University. Pitched as an ‘any-to-any’ system, NExT-GPT can accept inputs in different formats and deliver responses according to the desired output in video, audio, image, and text responses. This means that you can put in a text prompt and NExT-GPT can process that prompt into a video, or you can give it an image and have that converted to an audio output. 

ChatGPT has only just announced the capability to ‘see, hear and speak’ which is similar to what NExT-GPT is offering – but ChatGPT is going for a more mobile-friendly version of this kind of feature, and is yet to introduce video capabilities. 

We’ve seen a lot of ChatGPT alternatives and rivals pop up over the past year, but NExT-GPT is one of the few LLMs we’ve seen so far that can match the text-based output of ChatGPT but also provide outputs beyond what OpenAI’s popular chatbot can currently do. You can head over to the GitHub page or the demo page to try it out for yourself. 

So, what is it like?

I’ve fiddled around with NExT-GPT on the demo site and I have to say I’m impressed, but not blown away. Of course, this is not a polished product that has the advantages of public feedback, multiple updates, and so on – but it is still very good. 

I asked it to turn a photo of my cat Miso into an image of him as a librarian, and I was pretty happy with the result. It may not be at the same level of quality as established image generators like Midjourney or Stable Diffusion, but it was still an undeniably very cute picture.

Cat in a library wearing glasses

This is probably one of the least cursed images I’ve personally generated using AI. (Image credit: Future VIA NExT-GPT)

I also tested out the video and audio features, but that didn't go quite as well as the image generation. The videos that were generated were again not awful, but did have the very obvious ‘made by AI’ look that comes with a lot of generated images and videos, with everything looking a little distorted and wonky. It was uncanny. 

Overall, there’s a lot of potential for this LLM to fill the audio and video gaps within big AI names like OpenAI and Google. I do hope that as NExT-GPT gets better and better, we’ll be able to see a higher quality of outputs and make some excellent home movies out of our cats seamlessly in no time. 


Microsoft axes Video Editor in latest Windows 10 Photos app update, and users aren’t happy

Coming in hot on the heels of a freshly updated Photos app in Windows 10, which has sparked discussion about its merit among users, Microsoft seems intent on stoking the fire. 

The new Photos app is missing some of the editing tools of its predecessor, has some new ones, and now no longer has a built-in Video Editor. Instead, the Editor will be replaced with a web-based app called Clipchamp.

According to Windows Latest, you may be able to open the old Video Editor, but if it’s been updated (probably through the most recent Windows 10 update), you’ll be met with a pop-up saying the following: 

“Microsoft Video Editor is no longer available in the Photos app. Your previous video projects can be accessed by downloading the Photos Legacy app in Settings. For new videos, unleash your creativity with Clipchamp.”

So, what can you do now?

You can still download the Photos Legacy app in the Microsoft Store, like the pop-up says, and restore the original Video Editor. Yet Windows Latest speculates that this might signal the beginning of the end for this generation of the Photos app and its editing capabilities. Eventually, we may not even have a Photos Legacy app at all (along with its Video Editor feature).  

The Photos Legacy app is similar to the Windows 11 version of the app, and it differs from the previous Windows 10 Photos app. Some of the changes that angered users are the removal of the Clarity slider and the Spot fix feature. Windows 10 users were notified of these changes shortly before they happened.

The move is presumably because Microsoft wants to usher users away from the Video Editor feature and over to the web-based Clipchamp, which was acquired by Microsoft back in 2021. Windows 11’s Photos and Windows 10’s Photos will still include video editing for now, as confirmed by an engineer at Microsoft to Windows Latest. 

Microsoft Store in Windows 10

(Image credit: Microsoft)

The new video editor in town: Clipchamp

So what’s Clipchamp? It’s a free, browser-based video editor that lets users make as many videos as they like in high definition (1080p). You can access it at clipchamp.com – all you need is a Microsoft account to log in on the website. You can find our review of Clipchamp here.

This app might remind you of a relic of the recent past – Windows Movie Maker. Movie Maker is also no more – officially decommissioned back in 2017 – and Microsoft is propping up Clipchamp as a replacement for it. 

Clipchamp is a more capable video-editing app, and allows any user to make a video that looks pretty professional. It also has a user-friendly interface and quick setup process. However, many still liked the old Video Editor, perhaps for its even more straightforward simplicity. 

Clipchamp

(Image credit: Sofia Wyciślik-Wilson)

What's the actual problem?

Not just known for its simple approach, Windows 10’s Video Editor could also encode much smaller-sized videos than those of Clipchamp. In Microsoft’s Feedback Hub, where users give feedback directly to Microsoft as outlined by Windows Latest, one user asked: “Why is the Clipchamp exported video 5 times the size of the photo “legacy” video editor?”

Yikes. 

The user details their complaint and outlines their comparison between Clipchamp and Photos Legacy’s Video Editor, and they aren’t happy. I understand why; there's a big difference, especially if you’re making a video for personal reasons instead of commercial purposes. File storage isn’t free, after all!

It makes you think – does Microsoft have plans to present a repackaged Video Editor elsewhere? Maybe it could enjoy a new lease on life as a paid download if it still maintains such popularity.

If you have similar thoughts or your own opinion you’d like to share, Microsoft often repeats that it would like to hear users’ thoughts on the matter. The uproar was so loud when it tried to do something similar with Paint that the beloved app was brought back as an optional download via the Microsoft Store, so maybe the tech giant will listen to users this time around too.


Meta Quest 3 video leak shows off thinner design and new controllers

The Meta Quest 3 (aka the Oculus Quest 3) is now official, but isn't due to go on sale until September or October time. If you're keen for an earlier look at the virtual reality headset before then, an unboxing video has made its way online.

This comes from @ZGFTECH on X/Twitter (via Android Authority), and we get a full look at the new device and the controllers that come with it. Meta has already published promo images of the headset, but it's interesting to see it in someone's hands.

As revealed by Meta chief Mark Zuckerberg, the Meta Quest 3 is some 40% thinner than the Oculus Quest 2 that came before it. From this video it looks like the Quest 2's silicone face pad and cloth strap have been carried over to the new piece of hardware.

You may recall that the Quest 2 originally shipped with foam padding, before Meta responded to complaints of skin irritation by replacing the foam with silicone. That lesson now appears to have been learned with this new device and the Meta Quest Pro.

See more

Take control

The controllers that come with the Meta Quest 3 look a lot like the ones supplied with the Meta Quest Pro, though these don't have built-in cameras. The ring design of the Oculus Quest 2 has been ditched, with integrated sensors and predictive AI taking over tracking duties, according to Meta.

As for the outer packaging, it's not particularly inspiring, featuring just the name of the device on the top. Presumably something a bit more eye-catching will be put together before the headset actually goes on sale.

It's not clear where the headset has been sourced from, but the device has clearly been in testing for a while. This is becoming something of a running theme too, because the Meta Quest Pro was leaked in similar fashion after being left behind in a hotel room.

We should get all the details about the Meta Quest 3, including the date when we'll actually be able to buy it, on September 27 at the Meta Connect event. TechRadar will of course bring you all the news from the show, and any further leaks that may emerge between then and now.


YouTube video translation is getting an AI-powered dubbing tool upgrade

YouTube is going to help its creators reach an international audience as the platform plans on introducing a new AI-powered dubbing tool for translating videos into other languages.

Announced at VidCon 2023, the goal of this latest endeavor is to give creators a quick and easy way to translate their content “at no cost” into languages they don’t speak. This can help smaller channels, which may not have the resources to hire a human translator. To make this all possible, Amjad Hanif, vice president of Creator Products at YouTube, revealed that the tool will utilize the Google-created Aloud, and that the platform will bring over the team behind the AI from Area 120, a division of the parent company that frequently works on experimental tech.

Easy translation

The way the translation system works, according to the official Aloud website, is the AI will first transcribe a video into a script. You then edit the transcription to get rid of any errors, make clarifications, or highlight text “where timing is critical.” From there, you give the edited script back to Aloud where it will automatically translate your video into the language of your choice. Once done, you can publish the newly dubbed content by uploading any new audio tracks onto their original video.

A Google representative told us “creators do not have to [actually] understand any of the languages that they are dubbing into.” Aloud will handle all of the heavy lifting surrounding complex tasks like “translation, timing, and speech synthesis.” Again, all you have to do is double-check the transcription. 
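The transcribe-review-translate-upload workflow described above can be sketched as a simple pipeline. This is a hypothetical illustration only – Aloud is a web tool, not a public API, and every function name and data shape here is invented for the sake of the example:

```python
# Hypothetical sketch of the Aloud-style dubbing workflow described above.
# None of these functions exist in a real API; the names, the script format,
# and the sample translation are all illustrative stand-ins.

def transcribe(video):
    # Step 1: the AI transcribes the video into an editable, timed script.
    return [{"start": 0.0, "end": 2.5, "text": "Welcome to the channel!"}]

def review(script):
    # Step 2: the creator fixes errors and flags timing-critical lines.
    for line in script:
        line["text"] = line["text"].strip()
    return script

def translate(script, target_lang):
    # Step 3: the reviewed script is translated and speech is synthesized.
    translations = {"es": {"Welcome to the channel!": "¡Bienvenidos al canal!"}}
    return [{**line, "text": translations[target_lang].get(line["text"], line["text"])}
            for line in script]

def dub(video, target_lang):
    # End-to-end: transcribe -> human review -> translate -> new audio track,
    # which the creator then uploads onto the original video.
    return translate(review(transcribe(video)), target_lang)

track = dub("my_video.mp4", "es")
print(track[0]["text"])  # ¡Bienvenidos al canal!
```

The key design point, as the representative notes, is that the only human step is reviewing the transcription – translation, timing, and speech synthesis all happen downstream of that single checkpoint.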

Future changes

It’s unknown when the Aloud update will launch. However, YouTube is already working on expanding the AI beyond what’s currently possible. Right now, Aloud can only translate English content into either Spanish or Portuguese, but there are plans to expand into other languages, from Hindi to Indonesian, plus support for different dialects.

Later down the line, the platform will introduce a variety of features such as “voice preservation, better emotion transfer, and even lip reanimation” to improve enunciation. Additionally, YouTube is going to build in some safeguards ensuring only the creators can “dub their own content”.

The same Google representative from earlier also told us the platform is testing the Aloud AI with “hundreds of [YouTube] creators” with plans to add more over time. As of June 2023, over 10,000 videos have been dubbed in over 70 languages. 

You can join the early access program by filling out the official Google Docs form. If you want to know what an Aloud dub sounds like, go watch the channel trailer for the Amoeba Sisters channel on YouTube. Click the gear icon, go to Audio Track, then select Spanish. The robotic voice you’ll hear is what the AI will create. 


YouTube Premium’s best video feature might no longer be iPhone-exclusive

It looks like YouTube’s 1080p Premium video quality is finally rolling out to Android devices for paying subscribers, after a brief period of iOS exclusivity.

If you're an active YouTube Premium member – it costs $11.99 / £11.99 / AU$14.99 per month – and use an iOS device like an iPhone 14, you can currently watch videos in ‘1080p Premium’ quality. These are like regular HD videos, but are streamed using a higher bitrate, which means the video is less compressed, and so should look crisper and more detailed.

It looks like this upgrade won’t be exclusive to the best iPhones for much longer, as Android phone and Google TV users who pay for YouTube Premium are reporting that they can see the 1080p Premium video option (via 9to5Google). Right now the feature doesn’t appear to be widespread, and reportedly the users don’t see the option all the time, but this seemingly inadvertent rollout suggests that 1080p Premium will soon be available for more YouTube users.

Google has yet to say when 1080p Premium will officially roll out for Android, but be on the lookout for an update to the app in the coming days and weeks. If you want to take advantage of the upgrade, remember that you’ll also have to sign up for YouTube Premium.

As for those of you who want to keep using YouTube for free, you’ll still have access to the same 1080p HD-quality videos you had before – just without the added benefits of the higher bitrate.

Should you subscribe to YouTube Premium?

People watching a YouTube video together while in a Google Meet video call.

(Image credit: YouTube)

If you use YouTube a lot then you've probably thought about signing up for Premium, especially as the company has steadily introduced more reasons for you to subscribe.

Higher-bitrate videos, the ability to download videos for offline viewing, and Google Meet group watch-alongs are a few of the upgrades to the YouTube service that await Premium members. You’ll also be able to watch YouTube ad-free (ignoring any ads that the creator bakes into the video).

The ad-free feature is getting better too – although for the wrong reasons. Earlier this year YouTube announced that unskippable ads will be getting longer (they can now be up to 30s) on your Google TV, and it’s playing around with “pause experiences” – adverts that appear around the video whenever you pause it. As ads become more annoying, the ability to switch them off becomes more appealing.

That said, YouTube Premium is pretty darn pricey; $11.99 / £11.99 / AU$14.99 is more than you’d pay for a number of the best streaming services, so it’ll only be worth it if you use YouTube a lot.


WhatsApp beta now lets you send video messages – here’s how to enable it

WhatsApp is currently rolling out several new features to beta testers across different platforms – chief among them are video messages that will be available exclusively to mobile devices. 

You read that right. On top of sending audio recordings, WhatsApp will soon let you send video messages as well.

The way it currently works in the beta, according to WABetaInfo, is that tapping the microphone button next to the chat bar turns it into a new camera icon. Pressing that button lets you record a short clip of up to 60 seconds, which can be shared with a contact for quick communication.

Once the other person receives the clip, it plays muted by default; they have to tap the file to enlarge it if they “want to listen to the audio”. Essentially, WhatsApp is building its own take on Snapchat-style video messages – but unlike with Snapchat, it’s unknown whether the clips will automatically delete themselves after a certain amount of time has passed.

WABetaInfo’s post hints that they will be deleted soon after being sent, though it also states that the videos won’t be sent in view-once mode, so there may be some flexibility in how clips are handled. Like a lot of other WhatsApp content, video messages will be protected by the service’s end-to-end encryption, ensuring total privacy. Be aware that it won’t be possible to forward video messages to other users. They're for your eyes only.

WhatsApp video messages

(Image credit: WABetaInfo)

How to download the beta

To try out video messages, Android users will need to install the beta by joining the Google Play Store Beta Program and downloading the latest update. If you don’t get it, keep an eye out for future updates. Only a handful of testers have access at the moment, but Meta will reportedly release the feature to more people over the coming weeks. Oh, and your recipients need to be a part of the program too; otherwise, the video messages won’t work.

The beta is available to iPhone users, but the iOS program is closed to new entrants. If you’re not already a part of Apple’s TestFlight service for WhatsApp, you’ll just have to wait for the official launch. 

Coming to Windows

Besides the smartphone update, WhatsApp is also rolling out some new additions to its beta app on Windows. For one, the desktop version is getting screen-sharing for video calls, something that was first seen on Android. From the looks of it, the Windows rendition functions pretty much the same way, with a new screen-sharing icon in the bottom control panel. In addition, WhatsApp is introducing a call-back button for quickly returning missed calls – a rather small upgrade, but still a helpful one.

To try out these two features, all you have to do is install WhatsApp Beta from the Microsoft Store. It's that simple. 

Speaking of added convenience, it appears WhatsApp is planning on giving people the ability to have multiple accounts on a single Android device in a similar fashion to Instagram. Be sure to check out TechRadar's coverage on the future update.


Our favorite free video editing software gets unexpected performance boost from new macOS Sonoma

One of the big announcements at Apple’s WWDC 2023 was macOS Sonoma (we looked it up; it means “Valley of the Moon”). 

Apple claims the new operating system has a sharp focus on productivity and creativity. It says “the Mac experience is better than ever.” To prove it, the company revealed screensavers, iPhone widgets running on Macs, a gaming mode, and fresh video conferencing features. 

But the new macOS has another surprising feature for users of our pick for best free video editing software.  

The final cut 

Beyond WWDC’s bombshell reveal – yes, Snoopy is an Apple fan now – the event served up more than enough meat to keep users happy. There’s a new 15-inch MacBook Air on the way, said to be the “world’s thinnest.” The watchOS 10 beta countdown has started. And the Vision Pro is dividing opinion. Is the VR headset the future, or will it lose you friends?

The reveal of the new Mac operating system, meanwhile, feels quieter somehow. Muted. Perhaps new PDF editor functionalities and a host of “significant” updates to the Safari browser aren’t as eye-catching as a pair of futuristic AR/VR ski goggles.  

However, Craig Federighi, Apple’s senior vice president of Software Engineering, said, “macOS is the heart of the Mac, and with Sonoma, we’re making it even more delightful and productive to use.” 

What he didn’t say, but the company later revealed, is that Sonoma adds an extra bonus for video editors. 

Designed for remote and hybrid in-studio workflows, the operating system brings a high-performance mode to the Screen Sharing app. Taking advantage of the media engine in Apple silicon, users are promised responsive remote access with low-latency audio, high frame rates, and support for up to two virtual displays. 

According to Apple, “This mode empowers pros to securely access their content creation workflows from anywhere – whether editing in Final Cut Pro or DaVinci Resolve, or animating complex 3D assets in Maya.” It also enables remote colour workflows that previously demanded the best video editing Macs and video editing PCs.

It seems Final Cut Pro is getting a lot of attention lately. May saw the launch of Final Cut Pro for iPad – how did it take so long? – and now the app gets better support in the operating system. What next? Perhaps that open letter from film and TV professionals pleading for improved support really did focus minds at Apple Park.
