YouTube can stream 8K videos to your Meta Quest 3 – even though its displays aren’t 8K

Following news that Meta’s Quest 3 is getting some big mixed reality upgrades – including an AI that can recognize furniture and improved passthrough quality – there’s yet another improvement on the way, this time for one of my favorite Quest apps: YouTube.

That’s because the VR version of the video-sharing platform now supports 8K video playback on Quest 3 – up from the previous max of 4K.

To turn it on, make sure you’re running version 1.54 or later of the YouTube VR app, then open a video that supports 8K, tap the gear icon, and under Quality change the resolution to 4320p – or 4320p60 if you want 8K at 60fps instead of the usual 30fps. If 4320p isn’t an option in the list, the video you want to watch unfortunately isn’t available in 8K.

There are a few extra caveats. First, you’ll want a strong internet connection, because even if a video supports 8K playback you’ll struggle to stream it over weak Wi-Fi – unless you like waiting for it to buffer. Oh, and one other important detail: the Quest 3 doesn’t have 8K displays. But that’s not as big a problem as it might seem.

Method in the 8K madness

The Quest 3 has two displays (one for each eye) at 2,064 × 2,208 pixels each, while 8K resolution is 7,680 × 4,320 pixels. Even combined, the two displays offer only a little over a quarter – roughly 27% – of the pixels in a full 8K frame.
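For the curious, here’s the quick back-of-the-envelope math behind that figure, as a simple Python sketch using the panel and 8K dimensions quoted above:

```python
# Pixel counts for the Quest 3's two panels versus a single 8K frame.
quest3_per_eye = 2064 * 2208            # one display: 4,557,312 pixels
quest3_both_eyes = quest3_per_eye * 2   # 9,114,624 pixels
full_8k_frame = 7680 * 4320             # 33,177,600 pixels

print(f"Quest 3 (both eyes): {quest3_both_eyes:,} pixels")
print(f"8K frame:            {full_8k_frame:,} pixels")
print(f"Ratio: {quest3_both_eyes / full_8k_frame:.1%}")  # roughly 27.5%
```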

So is 8K streaming pointless? Well, not entirely. 

A Meta Quest 3 owner watching a spatial video of their husky dog in a field

Spatial video is 3D, but not as immersive as 360 video (Image credit: Meta)

For flat YouTube videos, playing them in 8K is probably pointless on Quest hardware. The only advantage you might find is that you’d be watching a downscaled video – a higher-resolution source rendered down to a lower-resolution display, the opposite of upscaling – which can sometimes produce a more detailed image than simply streaming the video at the lower resolution in the first place.
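The reason is that downscaling behaves a bit like supersampling: several source pixels get averaged into each displayed pixel, smoothing noise and jagged edges. Here’s a minimal sketch of that averaging in Python with NumPy – purely an illustration of the idea, not anything YouTube or the Quest actually does:

```python
import numpy as np

# Stand-in for an 8x8 patch of a high-resolution frame (grayscale values).
hi_res = np.arange(64, dtype=float).reshape(8, 8)

# Downscale 2x: each output pixel is the average of a 2x2 block of source
# pixels, so noise and jagged edges are smoothed out (a supersampling effect).
lo_res = hi_res.reshape(4, 2, 4, 2).mean(axis=(1, 3))

print(hi_res.shape, "->", lo_res.shape)  # (8, 8) -> (4, 4)
```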

The real improvement can be found instead with immersive 360-degree videos. 

To explain things simply: when you watch a flat video, the full resolution fills that 16:9 frame. In a 360 video, the resolution is spread across a much larger image, and you only ever see a portion of it based on where you’re looking. That’s why – if you’ve watched 360 videos in VR – 4K content can look more like HD, and HD content can look like a blurry mess.

By bumping things up to 8K, immersive 360-degree video should look a lot crisper, as the section you’re looking at at any given moment carries far more detail. So while you’re not seeing true 8K, you’re still getting a higher-resolution image.
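For a rough sense of the numbers, here’s an illustrative Python sketch – the roughly 110-degree horizontal field of view is an assumption made for the example, not an official Quest 3 figure:

```python
# Horizontal source pixels that actually land in view when an
# equirectangular 360 video is watched in a headset.
FOV_DEG = 110  # assumed horizontal field of view, for illustration only

for label, width in [("HD (1080p)", 1920), ("4K (2160p)", 3840), ("8K (4320p)", 7680)]:
    in_view = width * FOV_DEG / 360
    print(f"{label}: {width} px around the full 360 degrees, ~{in_view:.0f} px in view")

# Jumping from 4K to 8K roughly doubles the detail in the slice of the scene
# you're actually looking at, even though the headset's display is unchanged.
```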

This may also be a useful bit of future-proofing for Meta’s next hardware. With rumors that a Meta Quest Pro 2 could up the display game for Quest headsets, there’s a chance it’ll get closer to having actual 8K displays, though we’ll have to wait and see.

OpenAI’s impressive new Sora videos show it has serious sci-fi potential

OpenAI’s Sora – its equivalent of image generation, but for video – made huge shockwaves in the swiftly advancing world of AI last month, and we’ve just caught a few new clips that are even more jaw-slackening than what we’ve already been treated to.

In case you somehow missed it, Sora is a text-to-video AI, meaning you can write a simple request and it’ll compose a video (much as AI image generation already works from a prompt, but an obviously far more complex endeavor).

An eye with the iris being a globe

(Image credit: OpenAI)

Now OpenAI’s Sora research lead Tim Brooks has released some new content generated by Sora on X (formerly Twitter). 

This is Sora’s crack at fulfilling the following request: “Fly through tour of a museum with many paintings and sculptures and beautiful works of art in all styles.”

Pretty impressive to say the least. On top of that, Bill Peebles, also a Sora research lead, showed us a clip generated from the following prompt: “An alien blending in naturally with new york city, paranoia thriller style, 35mm film.”

An alien character walking through a street

(Image credit: OpenAI)

Content creator Blaine Brown then stepped in to embellish the clip, cutting it so the footage repeats and runs longer, and having the alien rap, complete with lip-syncing. The music was generated by Suno AI (with the lyrics written by Brown), and the lip-syncing was done with Pika Labs AI.

Analysis: Still early days for Sora

Two people having dinner

(Image credit: OpenAI)

It’s worth underlining how fast things seem to be progressing with the capabilities of AI. Image creation powers were one thing – and extremely impressive in themselves – but this is another thing entirely. Especially when you remember that Sora is still only in testing at OpenAI, with a limited set of ‘red teamers’ (testers hunting out bugs and smoothing over those wrinkles).

The camera work in the museum fly-through flows realistically and feels nicely imaginative in the way it swoops around (albeit with the occasional judder). And the last tweet shows how you can take a base clip and flesh it out with content including AI-generated music.

Of course, AI can write a script as well, which raises the question: how long will it be before a blue alien is starring in an AI-generated post-apocalyptic drama – or perhaps an (unintentional) comedy?

You get the idea, and we’re getting carried away, of course, but still – what AI could be capable of in just a few years is potentially mind-blowing, frankly.

Naturally, we’ll be seeing the cream of the crop of what Sora is capable of in these teasers, and there have been some buggy and weird efforts aired too. (Just as when ChatGPT and other AI chatbots first rolled onto the scene, we saw AI hallucinations and general unhinged behavior and replies).

Perhaps the broader worry with Sora, though, is how this might eventually displace, rather than assist, content creators. But that’s a fear to chew over on another day – not forgetting the potential for misuse with AI-created videos which we recently discussed in more depth here.

Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer these nightmarish creatures with melting faces. Now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs. Motion is largely smooth as butter. Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities. 

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called the Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render those actions at the same time, resulting in a smooth-flowing creation. 

This runs contrary to other generative platforms that first establish keyframes in clips and then fill in the gaps afterward. Doing so results in the jerky movement the tech is known for.

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit including support for multimodality. 

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has an ability called Cinemagraph which can animate highlighted portions of pictures.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues: some animations remain jerky, and in other cases subjects’ limbs warp into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.

WhatsApp’s desktop app now lets you send self-destructing photos and videos

Meta is rolling out a View Once feature to WhatsApp on desktop and web, allowing users to send time-sensitive content. 

The update was initially spotted by WABetaInfo, as the tech giant has yet to formally announce it. According to WABetaInfo’s report, it’s essentially the same feature as on mobile. WhatsApp added View Once to its smartphone app back in 2022 as a privacy tool: pictures or videos sent this way cannot be saved, and once the recipient looks at the file, it’s gone forever. This helps ensure sensitive material isn’t seen by outside parties, shared with others, or grabbed by a bad actor. Apparently this was highly requested, as WABetaInfo claims people complained about “the inability to send view once messages” on desktop. It seems Meta heard the outcry, although it did take the company over a year to respond.  

Vital details

There are some minor details you should know about. 

Recipients have 14 days to open the media or it’ll be automatically deleted, according to WhatsApp’s Help Center. Recipients also can’t take screenshots of the temporary content – but only if they have the latest version of WhatsApp installed; anyone running outdated software can still screenshot a View Once file. As a result, the company recommends only sending this kind of content to trusted individuals. There are plans to rectify the privacy gap, though there’s no word yet on when Meta will address the issue.

Do note that you cannot send multiple temporary images at once – each file has to be sent one by one. Plus, as pointed out by Windows Central, you can rewatch temporary videos “as often as you’d like”, but you have to stay in the viewing window; leave it (or close the video prematurely) and you lose access.

The update will be available to both Windows and macOS users. WABetaInfo states it’s being released in waves, so only a select group has access at the moment – keep an eye out for it arriving on your machine.

How to send View Once content

Sending a View Once photo is easy. After launching WhatsApp on desktop and selecting a chat, click the attachment icon next to the text box then choose an image. Above the send button is a number one inside of a disappearing circle. Clicking that icon activates the View Once function. Send the picture to someone and it'll delete the moment they close it. 

WhatsApp on web has a different layout, but luckily the process is exactly the same.

WhatsApp View Once on desktop

(Image credit: Future)

While we have you, be sure to follow TechRadar’s official WhatsApp channel to get our latest reviews sent right to your phone. 

Google Bard can now watch YouTube videos for you (sort of)

Google has bolstered Bard AI’s powers when it comes to YouTube videos, with the AI now capable of a deeper understanding of that kind of content.

Google posted about the latest update for Bard, describing these as the ‘first steps’ in allowing the AI to understand YouTube videos and pull out relevant info from a clip as requested.

The example given is that you’re hunting out a YouTube explainer on how to bake a certain cake, and you can ask Bard how many eggs are required for the recipe in the video that pops up.

Bard is capable of taking in the whole video and summarizing it, or you can ask the AI specific questions as mentioned, with the new feature enabling the user to have ‘richer conversations’ with Bard on any given clip.

Another recent update for Bard improved its maths powers, specifically for equations and helping you solve tricky ones – complete with straightforward step-by-step explanations (just in English to begin with). Those equations can be typed in or supplied to Bard via an uploaded image.


Analysis: YouTube viewing companion

These are some useful new abilities, particularly the addition for YouTube, which builds on Google’s existing extensions for Bard that hook up to the company’s services including the video platform.

It’s going to be pretty handy to have Bard instantly pull up relevant details, such as the aforementioned recipe quantities – or specifics you can’t quite recall from a video you’ve just watched, saving you from rewinding back through it to find them.

The maths and equation-related skills are going to be a boon, too. The broad idea here is not just to show a solution, but to teach how that solution was arrived at, equipping you to deal with similar problems down the line.

Via Neowin

Google Photos can now make automatic highlight videos of your life – here’s how

Google Photos is already capable of some increasingly impressive photo and video tricks – and now it's learned to create automatic highlight videos of the friends, family, and places you've chosen from your library.

The Google Photos app on Android and iOS already offers video creation tools, but this new update (rolling out from October 25) will let you search the people, places, or activities in your library that you'd like to star in an AI-created video. The app will then automatically rustle up a one-minute highlights video of all your chosen subjects.

This video will include a combination of video clips and photos, but Google Photos will also add music and sync the footage to those tunes. These kinds of auto-created highlight videos, which we've seen in the Google Photos Memories feature and elsewhere from the likes of GoPro, can be a little hit-and-miss in their execution, but we're looking forward to giving Google's new AI director a spin.

Fortunately, if you don't like some of Google Photos' choices, you can also trim or rearrange the clips, and pick some different music. You can see all of this in action in the example video below.

The Google Photos app showing an auto-created family video

(Image credit: Google)

So how will you be able to test-drive this new feature, once it rolls out on Android and iOS from October 25?

At the top of the app, hit the 'plus' icon and you'll see a new menu that includes options to create new Albums, Collages, Cinematic photos, Animations and, yes, Highlight videos.

Three phones on an orange background showing Google Photos video creation tools

(Image credit: Google)

Tap 'Highlight videos' and you’ll see a search bar where you can search for your video’s stars, be they people, places, or even the years when events took place. From Google’s demo, it looks like the default video length is one minute, but it’s here that you can make further tweaks before hitting 'save'.

We've asked Google if this feature is coming to the web version of Google Photos and also Chromebooks, and will update this article when we hear back.

Tip of the AI iceberg

Google's main aim with photos and videos is to automate the kinds of edits that non-professionals have little time or appetite for – so this AI-powered video creator tool isn't a huge surprise.

We recently saw a related tool appear in Google Photos' Memories feature, which now lets you “co-author” Memories albums with friends and family. Collaborators can add their own photos and videos to your Memories, which can then be shared as a standalone video.

So whether you're looking to edit together your own highlights reels or, thanks to this new tool, let Google's algorithms do it for you, Google Photos is increasingly turning into the fuss-free place to do it.

The Google Pixel 8 Pro also recently debuted some impressive cloud-based video features, including Video Boost and Night Sight Video. The only slight shame is that these features require an internet connection rather than working on-device, though AI tools like Magic Eraser and Call Screen do at least work locally on your phone.

Adobe Premiere Pro update lets you edit videos like it’s a word doc

Adobe’s latest update to Premiere Pro promises to be an absolute game-changer for video editors with the arrival of text-based video editing. 

According to the company, the feature, first trailed in April 2023, is “an entirely new way of creating rough cuts that are as simple as copying and pasting text.” That means it doesn’t change how videos are produced – it alters who can edit videos. 

Text-based editing isn’t the only new feature now available in one of the best video editing software tools on the market. In a bid to streamline workflows, Adobe has also unveiled Background Auto Save, smoother scrolling, and improved language support.  

Ctrl + V(ideo) 

Editing videos through text is all about streamlining and simplicity. This is, after all, about making it easier to stitch together rough cuts before fine-tuning. 

Once source footage is transcribed, users can quickly highlight the required text from the transcript and insert it into the timeline. Using the sequence transcript, editors can then copy and paste text to move clips, or delete it to bin them, before refining the cut using Premiere Pro’s trimming tools. 
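As a rough illustration of the underlying idea – not Adobe’s actual implementation or API – here’s a toy Python sketch: because every transcript segment maps to a time range in the source footage, editing the text amounts to editing a list of clip in and out points.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    start: float  # seconds into the source footage
    end: float

# A transcribed clip: each sentence knows where it lives in the footage.
transcript = [
    Segment("Welcome back to the channel.", 0.0, 2.1),
    Segment("Today we're taking a first look at the new camera.", 2.1, 5.4),
    Segment("Hang on, let me fix the lighting.", 5.4, 8.0),
    Segment("First impressions: the autofocus is noticeably faster.", 8.0, 12.3),
]

# "Deleting" a sentence from the transcript drops its clip from the cut;
# reordering sentences would reorder the clips in the same way.
rough_cut = [seg for seg in transcript if "fix the lighting" not in seg.text]

for seg in rough_cut:
    print(f"{seg.start:5.1f}s - {seg.end:5.1f}s  {seg.text}")
```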

It’s not the first time Adobe has toyed with text-based video editing. Last year, the company unveiled its Project Blink beta, an AI-powered video editor for browsers, that works in a suspiciously similar manner.

When we reviewed the web-based video editing app, we were impressed  by its overall accessibility. Anyone who’s ever used Microsoft Word or similar will find themselves in somewhat familiar territory. At the time, we said, “it’s fair to say you lack the omniscient control that you’d find in other video editors, and this isn’t exactly an Adobe Premiere Pro alternative. But what would usually take hours in a fully-fledged video editor, Adobe's Project Blink can accomplish in minutes.” 

Adding text-based video editing in Premiere Pro takes that to another level. It not only gives just about everyone the ability to build a rough cut, but makes it an integral part of the workflow for experienced and professional video editors. 

And, like the proliferation of machine learning-powered neural filters and the ability to remove objects from an image in one click in Photoshop, it’s another example of Adobe simplifying creative processes. We’re all content creators now.  

Microsoft claims GPT-4 will be able to make videos, and this won’t end well

GPT-4 is coming as early as next week, and will likely arrive with a new and potentially dreadful feature: video. 

Currently, ChatGPT and Microsoft’s updated Bing search engine are powered by the GPT-3.5 large language model, which allows them to respond to questions in a human-like way. But both AI implementations have had their fair share of problems so far – so what can we expect, or at least hope to see, from the new version on the horizon? 

According to Microsoft Germany’s CTO, Andreas Braun (as reported by Neowin), the company “will introduce GPT 4 next week, where we will have multimodal models that will offer completely different possibilities – for example, videos.” Braun made the comments during an event titled ‘AI in Focus – Digital Kickoff’. 

Essentially, AI is definitely not going away anytime soon. In its current state, we can interact with OpenAI's chatbot strictly through text, typing in prompts and getting conversational, mostly helpful, answers back.

So the idea of ChatGPT-powered chatbots, like the one in Bing, being able to reply in mediums other than plain text is certainly exciting – but it also fills me with a bit of dread.

As I mentioned earlier, ChatGPT’s early days were marked by some strange and controversial responses. The chatbot in Bing, for example, not only gave out incorrect information but then argued with users who pointed out its mistakes, causing Microsoft to hastily intervene and limit the number of responses it could provide in a single chat (a limit Microsoft is only now slowly raising again).

If we start seeing a similar streak of weirdness with videos, there could be even more concerning repercussions.

Ethics of AI

In a world where AI-generated ‘deepfake’ videos are an increasing concern for many people, especially those who unwittingly find themselves starring in those movies, the idea of ChatGPT dipping its toes into video creation is a bit worrying.

If people could ask ChatGPT to create a video starring a famous person, that celebrity would likely feel violated. While I’m sure many companies using GPT-4, such as Microsoft, will try to limit or ban pornographic or violent requests, the fact that the technology is so widely accessible means unscrupulous users could still find ways to abuse it.

There’s also the matter of copyright infringement. AI-generated art has come under close scrutiny over where it takes its samples from, and the same will likely be true of videos. Content creators, directors, and streamers will likely take a dim view of their work being used in AI-generated videos, especially if those videos are controversial or harmful.

AI, and especially ChatGPT, which only launched a few months ago, is still in its infancy; its potential has yet to be fully realized, and so have the moral implications of what it can achieve. So, while Microsoft’s boasts about video coming soon to ChatGPT are impressive and exciting, the company also needs to be careful to make sure both users and original content creators are looked after.

New Spotify beta adds looping videos to music discovery as part of major updates

Spotify has announced two major updates: a slew of new features coming to its Car Thing device and the launch of Canvas looping videos on its mobile app. 

Both updates have begun rolling out to Spotify users. The Car Thing features will be limited to the U.S., and iOS users will get the update first; Android owners will get everything at a later date. 

Canvas has a greater reach as the videos will release in beta across the U.K., Ireland, Australia, New Zealand, and Canada for the Spotify mobile app.

More hands-off control

Car Thing was designed as a more convenient way to control Spotify while you drive, and that core functionality is being expanded. Owners will now be able to see incoming calls on the device’s screen, where they can either answer or dismiss them.

Another big change is “Add to queue”, which Spotify claims is one of its most requested features. It’s essentially the same feature as on the mobile app, where you can add songs or podcasts to a tracklist – but now you can do it with your voice.

There’s also going to be a new “Add to queue” icon on the touchscreen, or you can press and hold the dial to do the same thing. Other features include the ability to use your voice to ask Spotify for a personalized playlist and to control other media.

Looping recommendations

Canvas videos appear to have been inspired by TikTok as a way to help people discover new music. Every day, Spotify will recommend 15 Canvas loops based on the music you like, and you can scroll through the personalized selection to hear a preview and see the Canvas for each song.

If you like what you see and hear, you can add the song to a playlist or follow the artist straight from the Canvas loop. The feature will also allow you to post the Canvas onto a social media app and have it loop in the background of a Story.

Canvas will be right on the mobile app’s home screen and will be created by the artists themselves to offer a sneak peek into the creative process. The full list of artists that will be in the Canvas section is unknown, but Spotify did reveal singer-songwriter Olivia Rodrigo as one of them.

Spotify didn’t say how long Canvas videos will be – whether a 30-second loop or something up to three minutes long, like TikTok.
