Watch the AI-produced film Toys”R”Us made using OpenAI’s Sora – and get misty about the AI return of Geoffrey the Giraffe

Toys”R”Us premiered a film made with OpenAI's artificial intelligence text-to-video tool Sora at this year's Cannes Lions Festival. “The Origin of Toys”R”Us” was produced by the company's entertainment production division, Toys”R”Us Studios, and creative agency Native Foreign, which scored alpha access to Sora since OpenAI hasn't released it to the public yet. That makes Toys”R”Us one of the first brands to leverage the video AI tool in a major way.

“The Origin of Toys”R”Us” explores the early years of founder Charles Lazarus in a rather more whimsical way than retail giants are usually portrayed. Company mascot Geoffrey the Giraffe appears to Lazarus in a dream to inspire his business ambitions, in a way that suggests huge profits were merely a happy side effect (at least until relatively recently) for Toys”R”Us.

“Charles Lazarus was a visionary ahead of his time and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” four-time Emmy Award-winning producer and President of Toys”R”Us Studios Kim Miller Olko said in a statement. “Partnering with Native Foreign to push the boundaries of OpenAI's Sora is truly exciting. Dreams are full of magic and endless possibilities, and so is Toys”R”Us.”

Sora Stories and the uncanny valley

Sora can generate up to one-minute-long videos based on text prompts with realistic people and settings. OpenAI pitches Sora as a way for production teams to bring their visions to life in a fraction of the usual time. The results can be breathtaking and bizarre.

For “The Origin of Toys”R”Us,” the filmmakers condensed hundreds of iterative shots into a few dozen, completing the film in weeks rather than months. That said, the producers did use some corrective visual effects and added original music composed by Aaron Marsh of indie rock band Copeland.

The film is brief and its AI origins are only really obvious when it is paused. Otherwise, you might think it was simply the victim of an overly enthusiastic editor with access to some powerful visual effects software and actors who don't know how to perform in front of a green screen.

Overall, it mostly manages to avoid the uncanny valley, except when the young founder smiles; then it's a little too much like watching “The Polar Express.” Still, considering it was produced with the alpha version of Sora and with relatively limited time and resources, you can see why some are very excited about Sora.

“Through Sora, we were able to tell this incredible story with remarkable speed and efficiency,” Native Foreign Chief Creative Officer and the film's director Nik Kleverov said in a statement. “Toys”R”Us is the perfect brand to embrace this AI-forward strategy, and we are thrilled to collaborate with their creative team to help lead the next wave of innovative storytelling.”

The debut of “The Origin of Toys”R”Us” at the Cannes Lions Festival underscores the growing importance of AI tools in advertising and branding. The film acts as a new proof of concept for Sora, and it may portend a lot more generative AI-assisted movies in the future. That said, there's a lot of skepticism and resistance in the entertainment world. Writers and actors went on strike for a long time in part because of generative AI, and the new contracts included rules for how companies can use AI models. The world premiere of a movie written by ChatGPT had to be outright canceled over complaints about that aspect, and if Toys”R”Us tried to make its film available in theaters, it would probably face the same backlash.


OpenAI’s ChatGPT might soon be thanking you for gold and asking if the narwhal bacons at midnight like a cringey Redditor after the two companies reach a deal

If you've ever posted a comment or post on Reddit, there's a chance that it will be used as material for training OpenAI's AI models after the two companies confirmed that they've reached a deal that enables this exchange. 

Reddit will be given access to OpenAI's technology to build AI features, and for that (as well as an undisclosed monetary amount), it's giving OpenAI access to Reddit posts in real time, which can be used by tools like ChatGPT to formulate more human-like responses.

OpenAI will be able to access real-time information from Reddit's data API, the software interface that lets applications retrieve and interact with content on Reddit's platform, providing OpenAI with structured and unique content from Reddit. This is similar to an agreement Reddit reached with Google at the beginning of the year, which allows Google to train its own AI models on Reddit's data and is reported to be worth $60 million.
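To make the idea of "structured content" concrete, here's a minimal sketch that pulls recent posts from a subreddit via Reddit's public JSON endpoint. It's purely illustrative: the subreddit, printed fields, and user agent are placeholders, and the OpenAI partnership will presumably run through a private, higher-volume interface rather than this public one.

```python
import requests

def fetch_new_posts(subreddit: str, limit: int = 5):
    """Fetch the newest posts from a subreddit as structured dicts."""
    url = f"https://www.reddit.com/r/{subreddit}/new.json"
    resp = requests.get(
        url,
        params={"limit": limit},
        headers={"User-Agent": "demo-script/0.1"},  # Reddit expects a descriptive user agent
    )
    resp.raise_for_status()
    # Each "child" wraps a post object with fields like title, author, score, selftext
    return [child["data"] for child in resp.json()["data"]["children"]]

if __name__ == "__main__":
    for post in fetch_new_posts("technology"):
        print(f"{post['title']} ({post['score']} points)")
```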

According to the official Reddit blog post publicizing the deal, it will help people discover and engage with Reddit's communities thanks to the Reddit content brought to ChatGPT and other new OpenAI products. Through Reddit's APIs, OpenAI's tools will be able to understand and showcase Reddit's content better, particularly when it comes to recent topics.

Man sitting at a table working on a laptop (Image credit: Shutterstock/GaudiLab)

Reddit, the company, and Reddit, the community of users

Users and moderators on Reddit will apparently be offered new features thanks to applications powered by OpenAI's large language models (LLMs). OpenAI will also start advertising on Reddit as an ad partner. 

The blog post put out by Reddit also claims that the deal is in the spirit of keeping the internet open, as well as fostering learning and research to keep it that way. It also says it wants to continue building up its community, recognizing its uniqueness and how Reddit serves as a place for conversation online. Reddit claims that this deal was signed to improve everyone's Reddit experience using AI.

It remains to be seen whether users are convinced of these benefits, but previous changes of this type and scale haven't gone down particularly well. In June 2023, over 7,000 subreddit communities went dark to protest changes to Reddit's API pricing for developers.

It also hasn't explicitly been stated by either company that Reddit data will be used to train OpenAI's models, but I think many people assume this will be the case – or that it’s already happening. In contrast, it was disclosed that Reddit would give Google “more efficient ways to train models,” and then there's the fact that OpenAI founder Sam Altman is himself a Reddit shareholder. This doesn't confirm anything specific and, as reported by The Verge, “This partnership was led by OpenAI’s COO and approved by its independent Board of Directors.”

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event (Image credit: JASON REDMOND/AFP via Getty Images)

Official statements expressing the benefits of the partnership

Speaking about the partnership and as quoted in the blog post, representatives from both companies said: 

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

– Steve Huffman, Reddit Co-Founder and CEO

“We are thrilled to partner with Reddit to enhance ChatGPT with uniquely timely and relevant information, and to explore the possibilities to enrich the Reddit experience with AI-powered features.”

– Brad Lightcap, OpenAI COO

They're not wrong: many people append the word “Reddit” to their search queries, since Reddit threads often provide information directly relevant to what you're searching for.

It's an interesting development, and OpenAI's sourcing of information – both in terms of accuracy and concerning training data – has been the main topic of discussion around the ethics of its practices for some time. I suppose at least this way, Reddit users are being made aware that their information can be used by OpenAI – even if they don’t really have a choice in the matter. 

The announcement blog post reassures users that Reddit believes that “privacy is a right,” and that it has published a Public Content Policy that gives more detail about Reddit's approach to accessing public content and user protections. We'll have to see if this will be upheld as time goes on, and what the partnership looks like in practice, but I hope both companies will take users' concerns seriously. 


Google’s answer to OpenAI’s Sora has landed – here’s how to get on the waitlist

Among the many AI treats that Google tossed into the crowd during its Google I/O 2024 keynote was a new video tool called Veo – and the waiting list for the OpenAI Sora rival is now open for those who want early access.

From Google's early Veo demos, the generative video tool certainly looks a lot like Sora, which is expected to be released “later this year.” It promises to whip up 1080p resolution videos that “can [be] beyond a minute” and in different cinematic styles, from time-lapses to aerial drone shots. You can see an example further down this page.

Veo, which is the engine behind a broader tool from Google's AI Test Kitchen called VideoFX, can also help you edit existing video clips. For example, you can give it an input video alongside a command, and it'll be able to generate extra scenery – Google's example being the addition of kayaks to an aerial coastal scene.

But like Sora, Veo is also only going to be open to a select few early testers. You can apply to be one of those 'Trusted Testers' now using the Google Labs form. Google says it will “review all submissions on a rolling basis” and some of the questions – including one that asks you to link to relevant work – suggest it could initially only be available to digital artists or filmmakers.

Still, we don't know the exact criteria to be an early Veo tester, so it's well worth applying if you're keen to take it for a spin.

The AI video tipping point

Veo certainly isn't the first generative video tool we've seen. As we noted when the Veo launch first broke, the likes of Synthesia, Colossyan, and Lumiere have been around for a while now. OpenAI's Sora has also hit the mainstream with its early music videos and strange TED Talk promos.

These tools are clearly hitting a tipping point because even the relatively conservative Adobe has shown how it plans to plug generative AI video tools into its industry-standard editor Premiere Pro, again “later this year.”

But the considerable computing power needed to run the likes of Veo's diffusion transformer models and maintain visual consistency across multiple frames is also a major bottleneck on a wider rollout, which explains why many are still in demo form.

Still, we're now reaching a point where these tools are ready to partially leap into the wild, and being an early beta tester is a good way to get a feel for them before the inevitable monthly subscriptions are defined and rolled out.


OpenAI’s GPT-4o ChatGPT assistant is more life-like than ever, complete with witty quips

So no, OpenAI didn’t roll out a search engine competitor to take on Google at its May 13, 2024 Spring Update event. Instead, OpenAI unveiled GPT-4 Omni (or GPT-4o for short) with human-like conversational capabilities, and it's seriously impressive. 

Beyond making this version of ChatGPT faster and free to more folks, GPT-4o expands how you can interact with it, including having natural conversations via the mobile or desktop app. Considering it's arriving on iPhone, Android, and desktop apps, it could pave the way for the assistant we've always wanted (or feared).

OpenAI's ChatGPT-4o is more emotional and human-like

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

GPT-4o has taken a significant step towards understanding human communication: you can converse with it in something approaching a natural manner, complete with all the messiness of real-world tendencies like interrupting, understanding tone, and even realizing when it's made a mistake.

During the first live demo, the presenter asked for feedback on his breathing technique. He breathed heavily into his phone, and ChatGPT responded with the witty quip, “You’re not a vacuum cleaner.” It advised on a slower technique, demonstrating its ability to understand and respond to human nuances.

So yes, ChatGPT has a sense of humor, but it also changes the tone of its responses, complete with different inflections while conveying a “thought”. As in human conversations, you can cut the assistant off and correct it, making it react or stop speaking. You can even ask it to speak in a certain tone, style, or robotic voice, and it can even provide translations.

In a live demonstration suggested by a user on X (formerly Twitter), two presenters on stage, one speaking English and one speaking Italian, had a conversation with GPT-4o handling translation. It quickly delivered the translation from Italian to English and then seamlessly translated the English response back into Italian.

It’s not just voice understanding with GPT-4o, though; it can also understand visuals like a written-out linear equation and then guide you through how to solve it, as well as look at a live selfie and provide a description. That could be what you're wearing or your emotions. 

In this demo, GPT said the presenter looked happy and cheerful. It’s not without quirks, though. At one point ChatGPT said it saw the image of the equation before it was even written out, referring back to a previous visual of just a wooden tabletop.

Throughout the demo, ChatGPT worked quickly and didn't really struggle to understand the problem or ask about it. GPT-4o is also more natural than typing in a query, as you can speak naturally to your phone and get a desired response – not one that tells you to Google it.  
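The live demos ran through the ChatGPT mobile and desktop apps, but the same text-plus-vision behavior is exposed through OpenAI's API under the gpt-4o model name. Here's a minimal sketch of the "describe a selfie" idea; the image URL and prompt are placeholders, and it assumes an OPENAI_API_KEY environment variable is set.

```python
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

# Ask GPT-4o to describe a photo, loosely mirroring the on-stage selfie demo.
# The URL below is a placeholder, not a real image.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this person's apparent mood in one sentence."},
            {"type": "image_url", "image_url": {"url": "https://example.com/selfie.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```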

A little like “Samantha” in “Her”

If you’re thinking about Her or another futuristic-dystopian film with an AI, you’re not the only one. Speaking with ChatGPT in such a natural way is essentially the Her moment for OpenAI. Considering it will be rolling out to the mobile app and as a desktop app for free, many people may soon have their own Her moments.

The impressive demos across speech and visuals may only be scratching the surface of what's possible. Overall performance, and how well GPT-4o holds up day-to-day in various environments, remains to be seen; once it's available, TechRadar will be putting it to the test. Still, after this peek, it's clear that GPT-4o is preparing to take on the best Google and Apple have to offer in their eagerly anticipated AI reveals.

The outlook on GPT-4o

By announcing this the day before Google I/O kicks off, and just a few weeks after new AI gadgets like the Rabbit R1 hit the scene, OpenAI is giving us a taste of the truly useful AI experiences we want. If this rumored partnership with Apple comes to fruition, Siri could be supercharged, and Google will almost certainly show off its latest AI tricks at I/O on May 14, 2024. But will they be enough?

We wish OpenAI had shown off a few more live demos of the latest ChatGPT-4o in what turned out to be a jam-packed, less-than-30-minute keynote. Luckily, it will be rolling out to users in the coming week, and you won't have to pay to try it out.


OpenAI’s big launch event kicks off soon – so what can we expect to see? If this rumor is right, a powerful next-gen AI model

Rumors that OpenAI has been working on something major have been ramping up over the last few weeks, and CEO Sam Altman himself has taken to X (formerly Twitter) to confirm that it won’t be GPT-5 (the next iteration of its breakthrough series of large language models) or a search engine to rival Google. What a new report, the latest in this saga, suggests is that OpenAI might be about to debut a more advanced AI model with built-in audio and visual processing.

OpenAI is near the front of the AI race, striving to be the first to deliver a software tool that comes as close as possible to communicating the way humans do: able to talk to us using sound as well as text, and capable of recognizing images and objects.

The report detailing this purported new model comes from The Information, which spoke to two anonymous sources who have apparently been shown some of these new capabilities. They claim that the incoming model has better logical reasoning than those currently available to the public and can convert text to speech. None of this is new for OpenAI as such, but what is new is all of this functionality being unified in the rumored multimodal model.

A multimodal model is one that can understand and generate information across multiple modalities, such as text, images, audio, and video. GPT-4 is also a multimodal model that can process and produce text and images, and this new model would theoretically add audio to its list of capabilities, as well as a better understanding of images and faster processing times.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum, New York, US, 13 Jan 2023 (Image credit: Shutterstock/photosince)

The bigger picture that OpenAI has in mind

The Information describes Altman’s vision for OpenAI’s products in the future as involving the development of a highly responsive AI that performs like the fictional AI in the film “Her.” Altman envisions digital AI assistants with visual and audio abilities capable of achieving things that aren’t possible yet, and with the kind of responsiveness that would enable such assistants to serve as tutors for students, for example. Or the ultimate navigational and travel assistant that can give people the most relevant and helpful information about their surroundings or current situation in an instant.

The tech could also be used to enhance existing voice assistants like Apple’s Siri, and usher in better AI-powered customer service agents capable of detecting when a person they’re talking to is being sarcastic, for example.

According to those who have experience with the new model, OpenAI will make it available to paying subscribers, although it’s not known exactly when. Apparently, OpenAI has plans to incorporate the new features into the free version of its chatbot, ChatGPT, eventually. 

OpenAI is also reportedly working on making the new model cheaper to run than its most advanced model available now, GPT-4 Turbo. The new model is said to outperform GPT-4 Turbo when it comes to answering many types of queries, but apparently it's still prone to hallucinations, a common problem with models such as these.

The company is holding an event today at 10am PT / 1pm ET / 6pm BST (or 3am AEST on Tuesday, May 14, in Australia), where OpenAI could preview this advanced model. If this happens, it would put a lot of pressure on one of OpenAI’s biggest competitors, Google.

Google is holding its own annual developer conference, I/O 2024, on May 14, and a major announcement like this could steal a lot of thunder from whatever Google has to reveal, especially when it comes to Google's AI endeavor, Gemini.


OpenAI’s big Google Search rival could launch within days – and kickstart a new era for search

When OpenAI launched ChatGPT in 2022, it set off alarm bells at Google HQ about what OpenAI’s artificial intelligence (AI) tool could mean for Google’s lucrative search business. Now, those fears seem to be coming true, as OpenAI is set for a surprise announcement next week that could upend the search world forever.

According to Reuters, OpenAI plans to launch a Google search competitor that would be underpinned by its large language model (LLM) tech. The big scoop here is the date that OpenAI has apparently set for the unveiling: Monday, May 13.

Intriguingly, that’s just one day before the mammoth Google I/O 2024 show, which is usually one of the biggest Google events of the year. Google often uses the event to promote its latest advances in search and AI, so it will have little time to react to whatever OpenAI decides to reveal the day before.

The timing suggests that OpenAI is really gunning for Google’s crown and aims to upstage the search giant on its home turf. The stakes, therefore, could not be higher for both firms.

OpenAI vs Google

OpenAI logo on a wall (Image credit: Shutterstock.com / rafapress)

We’ve heard rumors before that OpenAI has an AI-based search engine up its sleeve. Bloomberg, for example, recently reported that OpenAI’s search engine will be able to pull in data from the web and include citations in its results. News outlet The Information, meanwhile, has made similar claims that OpenAI is “developing a web search product”, and there has been a near-constant stream of whispers to this effect for months.

But even without the direct leaks and rumors, it has been clear for a while that tools like ChatGPT present an alternative way of sourcing information to the more traditional search engines. You can ask ChatGPT to fetch information on almost any topic you can think of and it will bring up the answers in seconds (albeit sometimes with factual inaccuracies). ChatGPT Plus can access information on the web if you’re a paid subscriber, and it looks like this will soon be joined by OpenAI’s dedicated search engine.

Of course, Google isn’t going to go down without a fight. The company has been pumping out updates to its Gemini chatbot, as well as incorporating various AI features into its existing search engine, including AI-generated answers in a box on the results page.

Whether OpenAI’s search engine will be enough to knock Google off its perch is anyone’s guess, but it’s clear that the company’s success with ChatGPT has prompted Google to radically rethink its search offering. Come next week, we might get a clearer picture of how the future of search will look.


OpenAI’s Sora just made another brain-melting music video and we’re starting to see a theme

OpenAI's text-to-video tool has been a busy bee recently, helping to make a short film about a man with a balloon for a head and giving us a glimpse of the future of TED Talks – and now it's rustled up its first official music video for the synth-pop artist Washed Out (below).

This isn't the first music video we've seen from Sora – earlier this month we saw this one for independent musician August Kamp – but it is the first official commissioned example from an established music video director and artist.

That director is Paul Trillo, an artist who's previously made videos for the likes of The Shins and shared this new one on X (formerly Twitter). He said the video, which flies through a tunnel-like collage of high school scenes, was “an idea I had almost 10 years ago and then abandoned”, but that he was “finally able to bring it to life” with Sora.

It isn't clear exactly why Sora was essential for executing a fairly simple concept, but it evidently made the process much simpler and quicker. Trillo points to one of his earlier music videos, The Great Divide for The Shins, which uses a similar effect but was “entirely 3D animated”.

As for how this new Washed Out video was made, it required less non-Sora help than the Shy Kids' Air Head video, which involved some lengthy post-production to create the necessary camera effects and consistency. For this one, Trillo said he used text-to-video prompts in Sora, then cut the resulting 55 clips together in Premiere Pro with only “very minor touch-ups”.

The result is a video that, like Sora's TED Talks creation (which was also created by Trillo), hints at the tool's strengths and weaknesses. While it does show that digital special effects are going to be democratized for visual projects with tight budgets, it also reveals Sora's issues with coherency across frames (as characters morph and change) and its persistent sense of uncanny valley.

As in the TED Talks video, a common way to get around these limitations is the dreamy fly-through, which ensures that characters are only on-screen fleetingly and that any weird morphing reads as part of the look rather than a jarring mistake. While it works for this video, it could quickly become a trope if it's overused.

A music video tradition

Two people sitting on the top deck of a bus (Image credit: OpenAI / Washed Out)

Music videos have long been pioneers of new digital technology – the Dire Straits video for Money For Nothing in 1985, for example, gave us an early taste of 3D animation, while Michael Jackson's Black Or White showed off the digital morphing trick that quickly became ubiquitous in the early 90s (see Terminator 2: Judgment Day).

While music videos lack the cultural influence they once did, it looks like they'll again be a playground for AI-powered effects like the ones in this Washed Out creation. That makes sense because Sora, which OpenAI expects to release to the public “later this year”, is still well short of being good enough to be used in full-blown movies.

We can expect to see these kinds of effects everywhere by the end of the year, from adverts to TikTok promos. But like those landmark effects in earlier music videos, they will also likely date pretty quickly and become visual cliches that go out of fashion.

If Sora can develop at the same rate as OpenAI's flagship tool, ChatGPT, it could evolve into something more reliable, flexible, and mainstream – with Adobe recently hinting that the tool could soon be a plug-in for Adobe Premiere Pro. Until then, expect to see a lot more psychedelic Sora videos that look like a mashup of your dreams (or nightmares) from last night.


OpenAI’s new Sora video is an FPV drone ride through the strangest TED Talk you’ve ever seen – and I need to lie down

OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime it's serving up some tantalizing glimpses of what it can do – including a mind-bending new video (below) showing what TED Talks might look like in 40 years.

To create the FPV drone-style video, TED Talks worked with OpenAI and the filmmaker Paul Trillo, who's been using Sora since February. The result is an impressive, if slightly bewildering, fly-through of futuristic conference talks, weird laboratories and underwater tunnels.

The video again shows both the incredible potential of OpenAI Sora and its limitations. The FPV drone-style effect has become a popular one for hard-hitting social media videos, but it traditionally requires advanced drone piloting skills and expensive kit that goes way beyond the new DJI Avata 2.

Sora's new video shows that these kinds of effects could be opened up to new creators, potentially at a vastly lower cost – although that comes with the caveat that we don't yet know how much OpenAI's new tool itself will cost and who it'll be available to.


But the video (above) also shows that Sora is still quite far short of being a reliable tool for full-blown movies. The people in the shots are on-screen for only a couple of seconds and there's plenty of uncanny valley nightmare fuel in the background.

The result is an experience that's exhilarating while also leaving you feeling strangely off-kilter – like touching down again after a skydive. Still, I'm definitely keen to see more samples as we hurtle towards Sora's public launch later in 2024.

How was the video made?

A video created by OpenAI Sora for TED Talks (Image credit: OpenAI / TED Talks)

OpenAI and TED Talks didn't go into detail about how this specific video was made, but its creator Paul Trillo recently talked more broadly about his experiences as one of Sora's alpha testers.

Trillo told Business Insider about the kinds of prompts he uses, including “a cocktail of words that I use to make sure that it feels less like a video game and something more filmic”. Apparently these include prompts like “35 millimeter”, “anamorphic lens”, and “depth of field lens vignette”, which are needed or else Sora will “kind of default to this very digital-looking output”.

Right now, every prompt has to go through OpenAI so it can be run through its strict safeguards around issues like copyright. One of Trillo's most interesting observations is that Sora is currently “like a slot machine where you ask for something, and it jumbles ideas together, and it doesn't have a real physics engine to it”.

This means that it's still a long way off from being truly consistent with people and object states, something that OpenAI admitted in an earlier blog post. OpenAI said that Sora “currently exhibits numerous limitations as a simulator”, including the fact that “it does not accurately model the physics of many basic interactions, like glass shattering”.

These incoherencies will likely limit Sora to being a short-form video tool for some time, but it's still one I can't wait to try out.


Watch this: Adobe shows how AI and OpenAI’s Sora will change Premiere Pro and video editing forever

OpenAI's Sora gave us a glimpse earlier this year of how generative AI is going to change video editing – and now Adobe has shown off how that's going to play out by previewing some fascinating new Premiere Pro tools.

The new AI features, powered by Adobe Firefly, effectively bring the kinds of tricks we've seen from Google's photo-focused Magic Editor – erasing unwanted objects, adding objects and extending scenes – to video. And while it isn't the first piece of software to do that, seeing these tools in an industry-standard app used by professionals is significant.

For a glimpse of what's coming “this year” to Premiere Pro and other video editing apps, check out the video below. In a new Generative panel, there's a new 'add object' option that lets you type in an object you want to add to the scene. This appears to be for static objects, rather than things like a galloping horse, but it looks handy for b-roll and backgrounds.

Arguably even more helpful is 'object removal', which uses Firefly's AI-based smart masking to help you quickly select an object to remove and then make it vanish with a click. Alternatively, you can combine the two tools to, for example, swap the watch someone's wearing for a non-branded alternative.

One of the most powerful new AI-powered features in photo editing is extending backgrounds – called Generative Fill in Photoshop – and Premiere Pro will soon have a similar feature for video. Rather than extending the frame's size, Generative Extend will let you add frames to a video to help you, for example, pause on your character's face for a little longer. 

While Adobe hasn't given these tools a firm release date, only revealing that they're coming “later this year”, it certainly looks like they'll change Premiere Pro workflows in several major ways. But the bigger AI video change could be yet to come…

Will Adobe really plug into OpenAI's Sora?

A laptop screen showing AI video editing tools in Adobe Premiere Pro (Image credit: Adobe)

The biggest Premiere Pro announcement, and also the most nebulous one, was Adobe's preview of third-party models for the editing app. In short, Adobe is planning to let you plug generative AI video tools including OpenAI's Sora, Runway and Pika Labs into Premiere Pro to sprinkle your videos with their effects.

In theory, that sounds great. Adobe showed an example of OpenAI's Sora generating b-roll with a text-to-video prompt, and Pika powering Generative Extend. But these “early examples” of Adobe's “research exploration” with its “friends” from the likes of OpenAI are still clouded in uncertainty.

Firstly, Adobe hasn't committed to launching the third-party plug-ins in the same way as its own Firefly-powered tools. That shows it's really only testing the waters with this part of the Premiere Pro preview. Also, the integration sits a little uneasily with Adobe's current stance on generative AI tools.

A laptop screen showing AI video editing tools in Adobe Premiere Pro (Image credit: Adobe)

Adobe has sought to set itself apart from the likes of Midjourney and Stable Diffusion by highlighting that Adobe Firefly is trained only on its Adobe Stock image library, which is apparently free of commercial, branded and trademarked imagery. “We’re using hundreds of millions of assets, all trained and moderated to have no IP,” Adobe's VP of Generative AI, Alexandru Costin, told us earlier this year.

Yet a new report from Bloomberg claims that Firefly was partially trained on images generated by Midjourney (with Adobe suggesting that could account for 5% of Firefly's training data). And these previews of new alliances with generative video AI models, which are similarly opaque when it comes to their training data, again sit uneasily with Adobe's stance.

Adobe's potential get-out here is Content Credentials, a kind of nutrition label that's also coming to Premiere Pro and will add watermarks to clarify when AI was used in a video and with which model. Whether or not this is enough for Adobe to balance making a commercially-friendly pro video editor with keeping up in the AI race remains to be seen.


OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel and it’s pretty trippy, to say the least. Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used in Sora; Kamp didn’t share that information. But she did explain the inspiration behind the video in its description. She says that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share her vision. Thanks to Sora, that is no longer an issue: the footage displays what she had always envisioned. It's “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head” which was also made on the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospect of AI saving time and money on production.

August Kamp herself is a proponent of the technology, stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.
