Watch the AI-produced film Toys”R”Us made using OpenAI’s Sora – and get misty about the AI return of Geoffrey the Giraffe

Toys”R”Us premiered a film made with OpenAI's text-to-video artificial intelligence tool Sora at this year's Cannes Lions Festival. “The Origin of Toys”R”Us” was produced by the company's entertainment production division, Toys”R”Us Studios, and creative agency Native Foreign, which secured alpha access to Sora since OpenAI hasn't yet released it to the public. That makes Toys”R”Us one of the first brands to leverage the video AI tool in a major way.

“The Origin of Toys”R”Us” explores the early years of founder Charles Lazarus in a rather more whimsical way than retail giants are usually portrayed. Company mascot Geoffrey the Giraffe appears to Lazarus in a dream to inspire his business ambitions, framing the company's huge profits as little more than a happy side effect (at least until relatively recently).

“Charles Lazarus was a visionary ahead of his time and we wanted to honor his legacy with a spot using the most cutting-edge technology available,” four-time Emmy Award-winning producer and President of Toys”R”Us Studios Kim Miller Olko said in a statement. “Partnering with Native Foreign to push the boundaries of OpenAI's Sora is truly exciting. Dreams are full of magic and endless possibilities, and so is Toys”R”Us.”

Sora Stories and the uncanny valley

Sora can generate up to one-minute-long videos based on text prompts with realistic people and settings. OpenAI pitches Sora as a way for production teams to bring their visions to life in a fraction of the usual time. The results can be breathtaking and bizarre.

For “The Origin of Toys”R”Us,” the filmmakers condensed hundreds of iterative shots into a few dozen, completing the film in weeks rather than months. That said, the producers did apply some corrective visual effects and added original music composed by Aaron Marsh of indie rock band Copeland.

The film is brief and its AI origins are only really obvious when it is paused. Otherwise, you might think it was simply the victim of an overly enthusiastic editor with access to some powerful visual effects software and actors who don't know how to perform in front of a green screen.

Overall, it mostly avoids the uncanny valley, except when the young founder smiles; then it's a little too much like watching “The Polar Express.” Still, considering it was produced with the alpha version of Sora and with relatively limited time and resources, you can see why some people are very excited about the tool.

“Through Sora, we were able to tell this incredible story with remarkable speed and efficiency,” Native Foreign Chief Creative Officer and the film's director Nik Kleverov said in a statement. “Toys”R”Us is the perfect brand to embrace this AI-forward strategy, and we are thrilled to collaborate with their creative team to help lead the next wave of innovative storytelling.”

The debut of “The Origin of Toys”R”Us” at the Cannes Lions Festival underscores the growing importance of AI tools in advertising and branding. The film acts as a new proof of concept for Sora, and it may portend a lot more generative AI-assisted movies in the future. That said, there's a lot of skepticism and resistance in the entertainment world. Writers and actors went on strike for months in part over generative AI, and their new contracts include rules for how studios can use AI models. The world premiere of a movie written by ChatGPT had to be canceled outright over complaints about that very aspect, and if Toys”R”Us tried to release its film in theaters, it would probably face the same backlash.

TechRadar – All the latest technology news

Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney, which is trained as a diffusion model (noise is added to an original image and the model learns to de-noise it, learning about the image in the process), could create some beautiful and astonishingly realistic images from prompts entered in the Discord channel (“/imagine: [prompt]”). But unless you asked it to alter one of its own generated images, every image set and character would look different.

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

(Image gallery, 3 images: Midjourney AI character creation. Captions: “I guess I don’t know how to describe myself”; “Things are getting weird”. Image credit: Future)

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new “--cref” (character reference) parameter and the URL of the generated image containing the character I liked, I forced Midjourney to generate new images featuring that same AI character.
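The workflow above boils down to two Discord prompts. This is only a sketch: the character description, scene, and URL below are placeholder examples of my own, though `--cref` itself is Midjourney's documented character-reference parameter:

```text
# 1. Create the base character, then upscale your favorite of the four results
/imagine prompt: a friendly middle-aged man with glasses and a grey beard

# 2. Reuse that character in a new scene by passing the upscaled image's URL
/imagine prompt: the same man flying a kite in a park --cref [URL of your upscaled image]
```

The key design point is that the reference travels as a URL appended to an ordinary prompt, so no retraining or fine-tuning is involved.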

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

(Image gallery, 2 images: Midjourney AI character creation. Caption: “An homage to Charles Schulz”. Image credit: Future)

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive your initial character-creation prompts, the better the results in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural language prompts has always been easy, but training a system to make your character do something might typically take some programming or even AI model expertise. Here it's just a simple prompt, one parameter, and an image reference.

(Image gallery, 2 images: Midjourney AI character creation. Caption: “Got a lot closer with my photo as a reference”. Image credit: Future)

While it's easier to take one of Midjourney's own creations and use it as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

Midjourney AI character creation

Oh, hey, Not Tim Cook (Image credit: Future)

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

Midjourney AI character creation

Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)

Midjourney's AI will improve, and soon it will be easy to create countless images featuring your favorite character: comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed up storyboarding, but it could also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza, or between a pizza and an iPad – at least not yet.
