It turns out the viral ‘Air Head’ Sora video wasn’t the purely AI-made work we were led to believe

A new interview with the director behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. 

In an interview with Fxguide, Patrick Cederberg (who handled post-production on the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

Air Head was made by ShyKids and tells the short story of a man with a literal balloon for a head. A human voiceover is used throughout, but the way OpenAI pushed the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI, and that's not entirely true. 

As revealed in the behind-the-scenes clip, a ton of work was done by ShyKids, who took the raw output from Sora and cleaned it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloon, and color correcting. 

Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were “hundreds of generations at 10 to 20 seconds apiece,” which were then tightly edited in what the team described as a “300:1” ratio of footage generated versus footage selected for further touch-ups. 

Such manual work also included editing out the head, which would disappear and reappear between generations, and even correcting the color of the balloon itself, which would sometimes render red instead of yellow. While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does, so we're still a long way out from instantly generated movie-quality productions. 

Sora remains tightly under wraps save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as “experimentation” with the program and downplaying the obvious work that went into the final product. 

Sora is impressive but we're not convinced

While OpenAI has done a decent job of showcasing what its text-to-video model can do, the lack of transparency is worrying. 

Air Head is an impressive clip by a talented team, but it took a ton of editing to get the final product to where it is in the short. 

It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. Instead, it turns out to be a tool for enhancing imagery rather than creating finished footage from scratch, something that is already common enough in video production, which makes Sora seem less revolutionary than it first appeared.

TechRadar – All the latest technology news

iPadOS 16: Five features I’d like to see as we head towards WWDC 2022

When iPadOS 15 was announced back at WWDC 2021, I was disappointed to find that it was more of a catch-up to iOS 14, with widgets on the home screen.

While the new Focus feature and better multitasking options were welcome, they didn't go far enough in improving how I used the iPad at the time. As these updates felt so minor to me, I decided to switch to a MacBook Pro 14-inch (2021), and I've been happy with it since.

However, with WWDC 2022 confirmed for June 6, there's a good chance we'll see iPadOS 16. Hopefully, we'll see the operating system set itself apart from iOS, with features that are not only exclusive to the iPad but also justify the 'Pro' in iPad Pro.

With this in mind, here are five features that I'd like to see for iPadOS 16.

iPad home screen with widgets in iPadOS 15

1. External monitor support

This is a feature that many iPad users, myself included when I owned one, have long wanted. While you can connect an iPad to a display, it only mirrors what's shown on the tablet, and worse, at a resolution that doesn't adapt to the monitor.

We're at a point where completing your work across two or three monitors is normal. You can move apps and windows between these displays, and macOS or Windows 11 handles them fine.

But in iPadOS, that's not possible. Let's see an additional multitasking space appear when an iPad is connected to a display. That way, you could swipe an app to the other screen and let it display at the full resolution the monitor is capable of.

2. Redesigned lock screen

There are parts of iPadOS that look like iPhone features simply supersized. Siri was guilty of this for years, covering the entire screen, but it was thankfully resized into a compact menu in iPadOS 13.

The lock screen should be next to benefit. While we were given refined notifications in iPadOS 15, plenty of space is still being wasted, especially on the 12.9-inch iPad Pro.

Let's see at least one widget displayed – perhaps weather as the default, with the option to add more. While you can swipe to the left to show some widgets, having them appear as soon as you wake the screen would be a nice touch.

3. Record more than one person in a call

This has been a bugbear for content creators, especially those who record podcasts. While you can take part in calls and group calls thanks to FaceTime, Skype and others, there's been no way to record each participant separately.

This is how many people capture recordings for a podcast, as separate audio files let editors mix each voice individually when assembling an episode.

Currently, on iPadOS, there's no way of doing this.

So, let's see an easier way to record multiple people on a call and save each one as a separate file, ready to edit into a podcast.

This one change could open up the iPad as a portable podcast machine: record a guest, drop the file into GarageBand or Ferrite, then save it as a finished podcast file, ready to upload to a provider.

4. Final Cut

While there are apps like iMovie and LumaFusion that can edit your video projects, some content creators want the extra power and features that an app like Final Cut provides.

Final Cut is Apple's professional video editing app and has only been available on macOS. But with the Mac and iPad both running on Apple silicon, users have been hoping to see Final Cut on the iPad.

Seeing this as part of iPadOS 16, along with widgets and shortcuts, could really appeal to pro users. Being able to carry on with Final Cut projects from Mac to iPad would also improve workflows, with no need to switch to a different app on the tablet.

5. Better picture-in-picture support

This is a feature that was once exclusive to the iPad, before moving over to macOS, then iOS 15. However, its features have stayed the same since its debut in iOS 9 on iPad. It's time for some improvements.

A timeline slider would be a great benefit, as you currently have to go back to the app that's playing the video and use its scrubber to jump to a different part of what's playing.

Another welcome feature would be the ability to place the video anywhere on the display. While you can do that to a point now, the video has been known to tuck itself behind menus or be obstructed by an app. On macOS, you can solve this by holding down the Command key and dragging the video anywhere on the display.

If these two improvements arrived in iPadOS, I'd expect its use to climb, especially with YouTube's decision to bring the feature to its app for Premium users.
