Runway’s new OpenAI Sora rival shows that AI video is getting frighteningly realistic

Just a week on from the arrival of Luma AI's Dream Machine, another big OpenAI Sora rival has just landed – and Runway's latest AI video generator might be the most impressive one yet.

Runway was one of the original text-to-video pioneers, launching its Gen-2 model back in March 2023. But its new Gen-3 Alpha model, which will apparently be “available for everyone over the coming days”, takes things up several notches with new photo-realistic powers and promises of real-world physics.

The demo videos (which you can see below) showcase how versatile Runway's new AI model is, with the clips including realistic human faces, drone shots, simulations of handheld cameras and atmospheric dreamscapes. Runway says that all of them were generated with Gen-3 Alpha “with no modifications”.

Apparently, Gen-3 Alpha is also “the first of an upcoming series of models” that have been trained “on a new infrastructure built for large-scale multimodal training”. Interestingly, Runway added that the new AI tool “represents a significant step towards our goal of building General World Models”, which could create possibilities for gaming and more.

A 'General World Model' is one that effectively simulates an environment, including its physics – which is why one of the sample videos shows the reflections on a woman's face as she looks through a train window.

These tools won't just be for us to level up our GIF games either – Runway says it's “been collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha”, which means tailored versions of the model for specific looks and styles. So expect to see this tech powering adverts, shorts and more very soon.

When can you try it?

A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head

(Image credit: Runway)

Last week, Luma AI's Dream Machine arrived to give us a free AI video generator to dabble with, but Runway's Gen-3 Alpha model is more targeted towards the other end of the AI video scale. 

It's been developed in collaboration with pro video creators with that audience in mind, although Runway says it'll be “available for everyone over the coming days”. You can create a free account to try Runway's AI tools, though you'll need to pay a monthly subscription (starting from $12 per month, or around £10 / AU$18 a month) to get more credits.

You can create videos using text prompts – the clip above, for example, was made using the prompt “a middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head”. Alternatively, you can use still images or videos as a starting point.

The realism on show is simultaneously impressive and slightly terrifying, but Runway states that the model will be released with a new set of safeguards against misuse, including an “in-house visual moderation system” and C2PA (Coalition for Content Provenance and Authenticity) provenance standards. Let the AI video battles commence.


A new OpenAI Sora rival just landed for AI videos – and you can use it right now for free

The text-to-video AI boom has really kicked off in the past few months, with the only downside being that the likes of OpenAI Sora still aren't available for us to try. If you're tired of waiting, a new rival called Dream Machine just landed – and you can take it for a spin right now.

Dream Machine is made by Luma AI, which has previously released an app that helps you shoot 3D photos with your iPhone. Well, now it's turned its attention to generative video, and Dream Machine has a free tier that you can use right now with a Google account – albeit with some caveats.

The main one is that Dream Machine seems to be slightly overwhelmed at the time of writing. There's currently a banner on the site stating that “generations take 120 seconds” and that “due to high demand, requests will be queued”. Our text prompt took over 20 minutes to be processed, but the results (below) are pretty impressive.

Dream Machine's outputs are more limited in length and resolution compared to the likes of OpenAI's Sora and Kling AI, but it's a good taster of how these services will work. The clips it produces are five seconds long and in 1360×752 resolution. You just type a prompt into its search bar and wait for it to appear in your account, after which you can download a watermarked version. 

While there was a lengthy wait for the results (which should hopefully improve once initial demand has dropped), our prompt of 'a close-up of a dog in sunglasses driving a car through Las Vegas at night' produced a clip that was very close to what we envisaged. 

Dream Machine's free plan is capped at 30 generations a month, but if you need more there are Standard (120 generations, $29.99 a month, about £24, AU$45), Pro (400 generations, $99.99 a month, about £80, AU$150) and Premier (2,000 generations, $499.99 a month, about £390, AU$750) plans.

A taste of AI videos to come

As with most generative AI video tools, questions remain about exactly what data Luma AI's model was trained on – which means that its potential outside of personal use or improving your GIF game could be limited. It also isn't the first free text-to-video tool we've seen, with Runway's Gen-2 model coming out of beta last year.

The Dream Machine website also states that the tool does have technical limitations when it comes to handling text and motion, so there's plenty of trial-and-error involved. But as a taster of the more advanced (and no doubt more expensive) AI video generators to come, it's certainly a fun tool to test drive.

That's particularly the case given that other alternatives like Google Veo currently have lengthy waitlists. Meanwhile, more powerful models like OpenAI's Sora (which can generate videos that are 60 seconds long) won't be available until later this year, while Kling AI is currently China-only.

This will certainly change as text-to-video generation becomes mainstream, but until then, Dream Machine is a good place to practice (if you don't mind waiting a while for the results).


The TikTok of AI video? Kling AI is a scarily impressive new OpenAI Sora rival

It feels like we're at a tipping point for AI video generators. Just a few months on from OpenAI's Sora taking social media by storm with its text-to-video skills, a new Chinese rival is doing the same.

Called Kling AI, the new “video generation model” is made by the Chinese TikTok rival Kuaishou, and it's currently only available as a public demo in China via a waitlist. But that hasn't stopped it from quickly going viral, with some impressive clips that suggest it's at least as capable as Sora.

You can see some of the early demo videos (like the one below) on the Kling AI website, while a number of threads on X (formerly Twitter) from the likes of Min Choi (below) have rounded up what are claimed to be some impressive early creations made by the tool (with some help from editing apps).

A blue parrot turning its head

(Image credit: Kling AI)

As always, some caution needs to be applied with these early AI-generated clips, as they're cherry-picked examples, and we don't yet know anything about the hardware or other software that's been used to create them. 

For example, we later found that an impressive Air Head video seemingly made by OpenAI's Sora needed a lot of extra editing in post-production.


Still, those caveats aside, Kling AI certainly looks like another powerful AI video generator. It lets early testers create 1080/30p videos that are up to two minutes in length. The results, while still carrying some AI giveaways like smoothing and minor artifacts, are impressively varied, with a promising amount of coherence.

Exactly how long it'll be before Kling AI is opened up to users outside China remains to be seen. But with OpenAI suggesting that Sora will get a public release “later this year”, Kling AI best not wait too long if it wants to become the TikTok of AI-generated video.

The AI video war heats up

Now that AI photo tools like Midjourney and Adobe Firefly are hitting the mainstream, it's clear that video generators are the next big AI battleground – and that has big implications for social media, the movie industry, and our ability to trust what we see during, say, major election campaigns.

Other examples of AI video generators include Google Veo, Microsoft's VASA-1 (which can make lifelike talking avatars from a single photo), Runway Gen-2, and Pika Labs. Adobe has now even shown how it could soon integrate many of these tools into Premiere Pro, which would give the space another big boost.

None of them are yet perfect, and it isn't clear how long it takes to produce a clip using the likes of Sora or Kling AI, nor what kind of computing power is needed. But the leaps being made towards photorealism and simulating real-world physics have been massive in the past year, so it clearly won't be long before these tools hit the mainstream.

That battle will become an international one, too – with the US still threatening a TikTok ban, expect there to be a few more twists and turns before the likes of Kling AI roll out worldwide. 


Google’s answer to OpenAI’s Sora has landed – here’s how to get on the waitlist

Among the many AI treats that Google tossed into the crowd during its Google I/O 2024 keynote was a new video tool called Veo – and the waiting list for the OpenAI Sora rival is now open for those who want early access.

From Google's early Veo demos, the generative video tool certainly looks a lot like Sora, which is expected to be released “later this year.” It promises to whip up 1080p resolution videos that “can [be] beyond a minute” and in different cinematic styles, from time-lapses to aerial drone shots. You can see an example further down this page.

Veo, which is the engine behind a broader tool from Google's AI Test Kitchen called VideoFX, can also help you edit existing video clips. For example, you can give it an input video alongside a command, and it'll be able to generate extra scenery – Google's example being the addition of kayaks to an aerial coastal scene.

But like Sora, Veo is also only going to be open to a select few early testers. You can apply to be one of those 'Trusted Testers' now using the Google Labs form. Google says it will “review all submissions on a rolling basis” and some of the questions – including one that asks you to link to relevant work – suggest it could initially only be available to digital artists or filmmakers.

Still, we don't know the exact criteria to be an early Veo tester, so it's well worth applying if you're keen to take it for a spin.

The AI video tipping point

Veo certainly isn't the first generative video tool we've seen. As we noted when the Veo launch first broke, the likes of Synthesia, Colossyan, and Lumiere have been around for a while now. OpenAI's Sora has also hit the mainstream with its early music videos and strange TED Talk promos.

These tools are clearly hitting a tipping point because even the relatively conservative Adobe has shown how it plans to plug generative AI video tools into its industry-standard editor Premiere Pro, again “later this year.”

But the considerable computing power needed to run the likes of Veo's diffusion transformer models and maintain visual consistency across multiple frames is also a major bottleneck for a wider rollout, which explains why many of these tools are still in demo form.

Still, we're now reaching a point where these tools are ready to partially leap into the wild, and being an early beta tester is a good way to get a feel for them before the inevitable monthly subscriptions are priced up and rolled out.


OpenAI’s Sora just made another brain-melting music video and we’re starting to see a theme

OpenAI's text-to-video tool has been a busy bee recently, helping to make a short film about a man with a balloon for a head and giving us a glimpse of the future of TED Talks – and now it's rustled up its first official music video for the synth-pop artist Washed Out (below).

This isn't the first music video we've seen from Sora – earlier this month we saw this one for independent musician August Kamp – but it is the first official commissioned example from an established music video director and artist.

That director is Paul Trillo, an artist who's previously made videos for the likes of The Shins and shared this new one on X (formerly Twitter). He said the video, which flies through a tunnel-like collage of high school scenes, was “an idea I had almost 10 years ago and then abandoned”, but that he was “finally able to bring it to life” with Sora.

It isn't clear exactly why Sora was an essential component for executing a fairly simple concept, but it helped make the process much simpler and quicker. Trillo points to one of his earlier music videos, The Great Divide for The Shins, which uses a similar effect but was “entirely 3D animated”.

As for how this new Washed Out video was made, it required less non-Sora help than the Shy Kids' Air Head video, which involved some lengthy post-production to create the necessary camera effects and consistency. For this one, Trillo said he used text-to-video prompts in Sora, then cut the resulting 55 clips together in Premiere Pro with only “very minor touch-ups”.

The result is a video that, like Sora's TED Talks creation (which was also created by Trillo), hints at the tool's strengths and weaknesses. While it does show that digital special effects are going to be democratized for visual projects with tight budgets, it also reveals Sora's issues with coherency across frames (as characters morph and change) and its persistent sense of uncanny valley.

Like the TED Talks video, a common technique to get around these limitations is the dreamy fly-through technique, which ensures that characters are only on-screen fleetingly and that any weird morphing is a part of the look rather than a jarring mistake. While it works for this video, it could quickly become a trope if it's over-used.

A music video tradition

Two people sitting on the top deck of a bus

(Image credit: OpenAI / Washed Out)

Music videos have long been pioneers of new digital technology – the Dire Straits video for Money for Nothing in 1985, for example, gave us an early taste of 3D animation, while Michael Jackson's Black or White showed off the digital morphing trick that quickly became ubiquitous in the early 90s (see Terminator 2: Judgment Day).

While music videos lack the cultural influence they once did, it looks like they'll again be a playground for AI-powered effects like the ones in this Washed Out creation. That makes sense because Sora, which OpenAI expects to release to the public “later this year”, is still well short of being good enough to be used in full-blown movies.

We can expect to see these kinds of effects everywhere by the end of the year, from adverts to TikTok promos. But like those landmark effects in earlier music videos, they will also likely date pretty quickly and become visual cliches that go out of fashion.

If Sora can develop at the same rate as OpenAI's flagship tool, ChatGPT, it could evolve into something more reliable, flexible, and mainstream – with Adobe recently hinting that the tool could soon be a plug-in for Adobe Premiere Pro. Until then, expect to see a lot more psychedelic Sora videos that look like a mashup of your dreams (or nightmares) from last night.


Turns out the viral ‘Air Head’ Sora video wasn’t purely the work of AI we were led to believe

A new interview with the director behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. 

In an interview with Fxguide, Patrick Cederberg (who did the post-production for the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

Air Head was made by Shy Kids and tells the short story of a man with a literal balloon for a head. While there's a human voiceover, the way OpenAI was pushing the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI – but that's not entirely true.

As revealed in the behind-the-scenes clip, a ton of work was done by Shy Kids, who took the raw output from Sora and helped to clean it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons, and color correcting.

Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were “hundreds of generations at 10 to 20 seconds a piece”, which were then tightly edited in what the team described as a “300:1” ratio of what was generated versus what was primed for further touch-ups – which, for a film that runs just 81 seconds, implies hours of raw Sora output.

Such manual work also included editing out the head, which would disappear and reappear, and even changing the color of the balloon itself, which would appear red instead of yellow. While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does, so we're still a long way out from instantly generated, movie-quality productions.

Sora remains tightly under wraps save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as “experimentation” with the program and downplaying the obvious work that went into the final product.

Sora is impressive but we're not convinced

While OpenAI has done a decent job of showcasing what its text-to-video model can do, the lack of transparency is worrying.

Air Head is an impressive clip by a talented team, but it was subject to a ton of editing to get the final product to where it is in the short. 

It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. It turns out that it is merely a tool that can be used to enhance imagery rather than create it from scratch – something that is already common enough in video production, making Sora seem less revolutionary than it first appeared.


OpenAI’s new Sora video is an FPV drone ride through the strangest TED Talk you’ve ever seen – and I need to lie down

OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.

Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.


But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds and there's plenty of uncanny valley nightmare fuel in the background.

The result is an experience that's exhilarating, while also leaving you feeling strangely off-kilter – like touching down again after a sky dive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?

A video created by OpenAI Sora for TED Talks

(Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including “a cocktail of words that I use to make sure that it feels less like a video game and something more filmic”. Apparently these include prompts like “35 millimeter”, “anamorphic lens”, and “depth of field lens vignette”, which are needed or else Sora will “kind of default to this very digital-looking output”.

Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently “like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it”.

This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora “currently exhibits numerous limitations as a simulator”, including the fact that “it does not accurately model the physics of many basic interactions, like glass shattering”.

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.


Watch this: Adobe shows how AI and OpenAI’s Sora will change Premiere Pro and video editing forever

OpenAI's Sora gave us a glimpse earlier this year of how generative AI is going to change video editing – and now Adobe has shown how that's going to play out by previewing some fascinating new Premiere Pro tools.

The new AI-powered features, powered by Adobe Firefly, effectively bring the kinds of tricks we've seen from Google's photo-focused Magic Editor – erasing unwanted objects, adding objects and extending scenes – to video. And while it isn't the first piece of software to do that, seeing these tools in an industry standard app that's used by professionals is significant.

For a glimpse of what's coming “this year” to Premiere Pro and other video editing apps, check out the video below. In a new Generative panel, there's a new 'add object' option that lets you type in an object you want to add to the scene. This appears to be for static objects, rather than things like a galloping horse, but it looks handy for b-roll and backgrounds.

Arguably even more helpful is 'object removal', which uses Firefly's AI-based smart masking to help you quickly select an object to remove, then make it vanish with a click. Alternatively, you can combine the two tools to, for example, swap the watch that someone's wearing for a non-branded alternative.

One of the most powerful new AI-powered features in photo editing is extending backgrounds – called Generative Fill in Photoshop – and Premiere Pro will soon have a similar feature for video. Rather than extending the frame's size, Generative Extend will let you add frames to a video to help you, for example, pause on your character's face for a little longer. 

While Adobe hasn't given these tools a firm release date, only revealing that they're coming “later this year”, it certainly looks like they'll change Premiere Pro workflows in several major ways. But the bigger AI video change could be yet to come…

Will Adobe really plug into OpenAI's Sora?

A laptop screen showing AI video editing tools in Adobe Premiere Pro

(Image credit: Adobe)

The biggest Premiere Pro announcement, and also the most nebulous one, was Adobe's preview of third-party models for the editing app. In short, Adobe is planning to let you plug generative AI video tools including OpenAI's Sora, Runway and Pika Labs into Premiere Pro to sprinkle your videos with their effects.

In theory, that sounds great. Adobe showed an example of OpenAI's Sora generating b-roll with a text-to-video prompt, and Pika powering Generative Extend. But these “early examples” of Adobe's “research exploration” with its “friends” from the likes of OpenAI are still clouded in uncertainty.

Firstly, Adobe hasn't committed to launching the third-party plug-ins in the same way as its own Firefly-powered tools. That shows it's really only testing the waters with this part of the Premiere Pro preview. Also, the integration sits a little uneasily with Adobe's current stance on generative AI tools.

A laptop screen showing AI video editing tools in Adobe Premiere Pro

(Image credit: Adobe)

Adobe has sought to set itself apart from the likes of Midjourney and Stable Diffusion by highlighting that Adobe Firefly is only trained on the Adobe Stock image library, which is apparently free of commercial, branded and trademarked imagery. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” Adobe's VP of Generative AI, Alexandru Costin, told us earlier this year.

Yet a new report from Bloomberg claims that Firefly was partially trained on images generated by Midjourney (with Adobe suggesting that could account for 5% of Firefly's training data). And these previews of new alliances with generative video AI models, which are similarly opaque when it comes to their training data, again sit uneasily with Adobe's stance.

Adobe's potential get-out here is Content Credentials, a kind of nutrition label that's also coming to Premiere Pro and will add watermarks to clarify when AI was used in a video and with which model. Whether or not this is enough for Adobe to balance making a commercially friendly pro video editor with keeping up in the AI race remains to be seen.


OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely by its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel and it’s pretty trippy, to say the least. Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used on Sora; Kamp didn’t share that information. But she did explain the inspiration behind the video in its description. She says that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share that vision. Thanks to Sora, that’s no longer an issue, as the footage shows what she had always envisioned – it’s “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention that August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head”, which was also made with the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospects of AI saving time and money on production. 

August Kamp herself is a proponent of the technology stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.


OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given “visual artists, designers, creative directors, and filmmakers” access to Sora and shared their efforts in a “first impressions” blog post.

While all of the films, which range in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's artist in residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.
