The TikTok of AI video? Kling AI is a scarily impressive new OpenAI Sora rival

It feels like we're at a tipping point for AI video generators. Just a few months after OpenAI's Sora took social media by storm with its text-to-video skills, a new Chinese rival is generating the same kind of buzz.

Called Kling AI, the new “video generation model” is made by the Chinese TikTok rival Kuaishou, and it's currently only available as a public demo in China via a waitlist. But that hasn't stopped it from quickly going viral, with some impressive clips that suggest it's at least as capable as Sora.

You can see some of the early demo videos (like the one below) on the Kling AI website, while a number of threads on X (formerly Twitter) from the likes of Min Choi (below) have rounded up what are claimed to be some impressive early creations made by the tool (with some help from editing apps).

A blue parrot turning its head

(Image credit: Kling AI)

As always, some caution needs to be applied with these early AI-generated clips, as they're cherry-picked examples, and we don't yet know anything about the hardware or other software that's been used to create them. 

For example, we later found that the impressive Air Head video, seemingly made with OpenAI's Sora, needed a lot of extra editing in post-production.


Still, those caveats aside, Kling AI certainly looks like another powerful AI video generator. It lets early testers create 1080/30p videos that are up to two minutes in length. The results, while still carrying some AI giveaways like smoothing and minor artifacts, are impressively varied, with a promising amount of coherence.

Exactly how long it'll be before Kling AI is opened up to users outside China remains to be seen. But with OpenAI suggesting that Sora will get a public release “later this year”, Kling AI had best not wait too long if it wants to become the TikTok of AI-generated video.

The AI video war heats up

Now that AI photo tools like Midjourney and Adobe Firefly are hitting the mainstream, it's clear that video generators are the next big AI battleground – and that has big implications for social media, the movie industry, and our ability to trust what we see during, say, major election campaigns.

Other examples of AI video generators include Google Veo, Microsoft's VASA-1 (which can make lifelike talking avatars from a single photo), Runway Gen-2, and Pika Labs. Adobe has even shown how it could soon integrate many of these tools into Premiere Pro, which would give the space another big boost.

None of them are yet perfect, and it isn't clear how long it takes to produce a clip using the likes of Sora or Kling AI, nor what kind of computing power is needed. But the leaps being made towards photorealism and simulating real-world physics have been massive in the past year, so it clearly won't be long before these tools hit the mainstream.

That battle will become an international one, too – with the US still threatening a TikTok ban, expect there to be a few more twists and turns before the likes of Kling AI roll out worldwide. 


ChatGPT shows off impressive voice mode in new demo – and it could be a taste of the new Siri

ChatGPT's highly anticipated new Voice mode has just starred in another demo that shows off its acting skills – and the video could be a taste of what we can expect from the reported new deal between Apple and OpenAI.

The ChatGPT app already has a Voice mode, but OpenAI showed off a much more impressive version during the launch of its new GPT-4o model in May. Unfortunately, that was then overshadowed by OpenAI's very public spat with Scarlett Johansson over the similarity of ChatGPT's Sky voice to her performance in the movie Her. But OpenAI is hyping up the new mode again in the clip below.

The video shows someone writing a story and getting ChatGPT to effectively do improv drama, providing voices for a “majestic lion” and a mouse. Beyond the expressiveness of the voices, what's notable is how easy it is to interrupt the ChatGPT voice for a better conversational flow, and also the lack of latency.     

OpenAI says the new mode will “be rolling out in the coming weeks” and that's a pretty big deal. Not least because, as Bloomberg's Mark Gurman has again reported, Apple is expected to announce a new partnership with OpenAI at WWDC 2024 on June 10.   

Exactly how OpenAI's tech is going to be baked into iOS 18 remains to be seen, but Gurman's report states that Apple will be “infusing its Siri digital assistant with AI”. That means some of its off-device powers could tap into ChatGPT – and if it's anything like OpenAI's new demo, that would be a huge step forward from today's Siri.

Voice assistants finally grow up?

Siri's reported AI overhaul will likely be one of the bigger stories of WWDC 2024. According to Dag Kittlaus, who co-founded and ran Siri before Apple acquired it in 2010, the deal with OpenAI will likely be a “short- to medium-term relationship” while Apple plays catch up. But it's still a major surprise.

It's possible that Siri's AI improvements will be restricted to more minor, on-device functions, with Apple instead using its OpenAI partnership solely for text-based queries. After all, from iOS 15 onwards, Apple switched Siri's audio processing to being on-device by default, which meant you could use it without an internet connection.

But Bloomberg's Gurman claims that Apple has “forged a partnership to integrate OpenAI’s ChatGPT into the iPhone’s operating system”. If so, it's possible that one unlikely move could be followed by another, with Siri leaning on ChatGPT for off-device queries and a more conversational flow. It's already been possible to use ChatGPT with Siri for a while now using Apple's Shortcuts.
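There's nothing exotic about that Shortcuts bridge under the hood: it simply sends the spoken query to OpenAI's chat API and reads back the reply. Here's a minimal Python sketch of that kind of off-device query – the model choice, system prompt, and sample question are our own illustrative assumptions, not anything Apple or OpenAI has confirmed for Siri:

```python
# Minimal sketch of the kind of off-device query a Siri-to-ChatGPT
# bridge makes. Uses OpenAI's public chat API; the model name and
# prompts here are illustrative, not anything Apple has confirmed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_chatgpt(query: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": "Answer briefly, as a voice assistant would."},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(ask_chatgpt("What's a good dinner I can make in 20 minutes?"))
```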

It wouldn't be the first time that Apple has integrated third-party software into iOS. Back on the original iPhone, Apple made a pre-installed YouTube app which was later removed once Google had made its own version. Gurman's sources noted that by outsourcing an AI chatbot, “Apple can distance itself from the technology itself, including its occasional inaccuracies and hallucinations.”

We're certainly looking forward to seeing how Apple weaves OpenAI's tech into iOS – and potentially Siri – at WWDC 2024.


Apple’s new Final Cut Pro apps turn the iPad into an impressive live multicam studio

At Let Loose 2024, Apple revealed big changes coming to its Final Cut software, ones that effectively turn your iPad into a mini production studio. Chief among these is the launch of Final Cut Pro for iPad 2, a direct upgrade to the current app that takes full advantage of the new M4 chipset. According to the company, it can render videos up to twice as fast as Final Cut Pro running on an M1 iPad.

Apple is also introducing a feature called Live Multicam. This allows users to connect their tablet to up to four different iPhones or iPads at once and watch a video feed from all the sources in real time. You can even adjust the “exposure, focus, [and] zoom” of each live feed directly from your master iPad.

Looking at Apple’s demo video, selecting a source expands that footage to fill the entire screen, where you can then make the necessary adjustments. Tapping the Minimize icon in the bottom-right corner lets creators return to the four-way split view. Apple states that previews from external devices are sent to Final Cut Pro so you can quickly begin editing.

Impactful upgrades

You can’t connect your iPhone to the multicam setup using the regular Camera app, which doesn’t support it. Users will instead have to install a new app called Final Cut Camera on their mobile device. Besides the Live Multicam compatibility, Apple says you can tweak settings like white balance, shutter speed, and more to obtain professional-grade recordings. The on-screen interface even lets videographers monitor their footage via a zebra stripe pattern tool and an audio meter.

Final Cut Camera

(Image credit: Apple)

Going back to the Final Cut Pro update, there are other important features we’ve yet to mention. The platform “now supports external projects”, meaning you can create a video project on, and import media to, an external storage drive without sacrificing space on the iPad. Apple is also adding more customization tools to the software, like 12 additional color-grading presets and more dynamic backgrounds.

Final Cut Pro for Mac is set to receive a substantial upgrade too. Although it won’t support the four iPhone video feeds, version 10.8 does introduce several tools. For example, Enhance Light and Color offers a quick way to improve color balance and contrast in a clip, among other things. Users can also give video effects and color corrections custom names for easy identification. It’s not a total overhaul, but these changes will take some of the headache out of video editing.

Final Cut Pro on Mac version 10.8

(Image credit: Apple)

Availability

There are different availability dates for the three products. Final Cut Pro for iPad 2 launches this spring and will be a “free update for existing users”. For everyone else, it will be $5 / £5 / AU$8 a month or $50 / £50 / AU$60 a year for access. Final Cut Camera is set to release in the spring as well and will be free for everyone. Final Cut Pro for Mac 10.8 is another free update for existing users. On the Mac App Store, it’ll cost you $300 / £300 / AU$500.

We don’t blame you if you were totally unaware of the Final Cut Pro changes, as they were overshadowed by Apple's new iPad news. Speaking of which, check out TechRadar’s guide on where to preorder Apple’s 2024 iPad Pro and Air tablets.


OpenAI’s impressive new Sora videos show it has serious sci-fi potential

OpenAI's Sora – its equivalent of image generation, but for video – made huge shockwaves in the swiftly advancing world of AI last month, and we’ve just caught a few new videos that are even more jaw-slackening than what we’d already been treated to.

In case you somehow missed it, Sora is a text-to-video AI, meaning you can write a simple request and it’ll compose a video (just as image generation worked before it, but obviously a much more complex endeavor).

An eye with the iris being a globe

(Image credit: OpenAI)

Now OpenAI’s Sora research lead Tim Brooks has released some new content generated by Sora on X (formerly Twitter). 

This is Sora’s crack at fulfilling the following request: “Fly through tour of a museum with many paintings and sculptures and beautiful works of art in all styles.”

Pretty impressive to say the least. On top of that, Bill Peebles, also a Sora research lead, showed us a clip generated from the following prompt: “An alien blending in naturally with new york city, paranoia thriller style, 35mm film.”

An alien character walking through a street

(Image credit: OpenAI)

Content creator Blaine Brown then stepped in to embellish the above clip, looping the footage to make it longer and having the alien rap, complete with lip-syncing. The music was generated by Suno AI (with the lyrics written by Brown), and the lip-syncing was done with Pika Labs AI.


Analysis: Still early days for Sora

Two people having dinner

(Image credit: OpenAI)

It’s worth underlining how fast things seem to be progressing with the capabilities of AI. Image creation powers were one thing – and extremely impressive in themselves – but this is entirely another. Especially when you remember that Sora is still just in testing at OpenAI, with a limited set of ‘red teamers’ (testers hunting out bugs and smoothing over those wrinkles).

The camera work in the museum fly-through flows realistically and feels nicely imaginative in the way it swoops around (albeit with the occasional judder). And the last tweet shows how you can take a base clip and flesh it out with content including AI-generated music.

Of course, AI can write a script as well, which raises the question: how long will it be before a blue alien is starring in an AI-generated post-apocalyptic drama? Or an (unintentional) comedy, perhaps?

You get the idea, and we’re getting carried away, of course, but still – what AI could be capable of in just a few years is potentially mind-blowing, frankly.

Naturally, we’ll be seeing the cream of the crop of what Sora is capable of in these teasers, and there have been some buggy and weird efforts aired too. (Just as when ChatGPT and other AI chatbots first rolled onto the scene, we saw AI hallucinations and general unhinged behavior and replies).

Perhaps the broader worry with Sora, though, is how this might eventually displace, rather than assist, content creators. But that’s a fear to chew over on another day – not forgetting the potential for misuse with AI-created videos which we recently discussed in more depth here.


Google’s impressive Lumiere shows us the future of making short-form AI videos

Google is taking another crack at text-to-video generation with Lumiere, a new AI model capable of creating surprisingly high-quality content. 

The tech giant has certainly come a long way from the days of Imagen Video. Subjects in Lumiere videos are no longer nightmarish creatures with melting faces; now things look much more realistic. Sea turtles look like sea turtles, fur on animals has the right texture, and people in AI clips have genuine smiles (for the most part). What’s more, there's very little of the weird jerky movement seen in other text-to-video generative AIs – motion is largely smooth as butter.

Inbar Mosseri, Research Team Lead at Google Research, published a video on her YouTube channel demonstrating Lumiere’s capabilities.

Google put a lot of work into making Lumiere’s content appear as lifelike as possible. The dev team accomplished this by implementing something called Space-Time U-Net architecture (STUNet). The technology behind STUNet is pretty complex, but as Ars Technica explains, it allows Lumiere to understand where objects are in a video and how they move and change, and to render all of those actions at once, resulting in smooth-flowing output.

This runs contrary to other generative platforms that first establish keyframes in clips and then fill in the gaps afterward. Doing so results in the jerky movement the tech is known for.
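To make that distinction concrete, here's a minimal PyTorch sketch of the space-time downsampling idea: a block that shrinks a clip along its height, width, and frame dimensions in a single pass, so the network always reasons about the whole clip rather than individual keyframes. This is our own illustration of the concept, not Google's actual STUNet code, and every name in it is hypothetical:

```python
# Minimal sketch of the space-time downsampling idea behind a
# Space-Time U-Net (STUNet). An illustration of the concept only,
# not Google's implementation; all names here are hypothetical.
import torch
import torch.nn as nn

class SpaceTimeDownBlock(nn.Module):
    """Downsamples a video tensor in space AND time in one pass,
    instead of generating keyframes and interpolating between them."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Factorized 3D convolution: a spatial 1x3x3 conv followed by
        # a temporal 3x1x1 conv (a common, cheaper stand-in for full 3D).
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3),
                                 stride=(1, 2, 2), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1),
                                  stride=(2, 1, 1), padding=(1, 0, 0))
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))

# A 16-frame 64x64 clip comes out as an 8-frame 32x32 feature map:
video = torch.randn(1, 8, 16, 64, 64)
print(SpaceTimeDownBlock(8, 16)(video).shape)  # torch.Size([1, 16, 8, 32, 32])
```

The key point is that the temporal convolution always sees neighboring frames together, so no single frame is ever generated in isolation – which is the property the smoother motion is attributed to.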

Well equipped

In addition to text-to-video generation, Lumiere has numerous features in its toolkit, including support for multimodality.

Users will be able to upload source images or videos to the AI so it can edit them according to their specifications. For example, you can upload an image of Girl with a Pearl Earring by Johannes Vermeer and turn it into a short clip where she smiles instead of blankly staring. Lumiere also has an ability called Cinemagraph, which can animate highlighted portions of still images.

Google demonstrates this by selecting a butterfly sitting on a flower. Thanks to the AI, the output video has the butterfly flapping its wings while the flowers around it remain stationary. 

Things become particularly impressive when it comes to video. Video Inpainting, another feature, functions similarly to Cinemagraph in that the AI can edit portions of clips. A woman’s patterned green dress can be turned into shiny gold or black. Lumiere goes one step further by offering Video Stylization for altering video subjects. A regular car driving down the road can be turned into a vehicle made entirely out of wood or Lego bricks.

Still in the works

It’s unknown if there are plans to launch Lumiere to the public or if Google intends to implement it as a new service. 

We could perhaps see the AI show up on a future Pixel phone as the evolution of Magic Editor. If you’re not familiar with it, Magic Editor utilizes “AI processing [to] intelligently” change spaces or objects in photographs on the Pixel 8. Video Inpainting, to us, seems like a natural progression for the tech.

For now, it looks like the team is going to keep it behind closed doors. As impressive as this AI may be, it still has its issues: jerky animations crop up, and in other cases subjects' limbs warp into mush. If you want to know more, Google’s research paper on Lumiere can be found on Cornell University’s arXiv website. Be warned: it's a dense read.

And be sure to check out TechRadar's roundup of the best AI art generators for 2024.
