NVIDIA Instant NeRFs need just a few images to make 3D scenes

NVIDIA sees AI as a means of putting new tools into the hands of gamers and creators alike. NVIDIA Instant NeRF is one such tool: it leverages the power of NVIDIA's GPUs to make generating complex 3D scenes and objects orders of magnitude easier.

In effect, NVIDIA Instant NeRF takes a series of 2D images, figures out how they overlap, and uses that knowledge to create an entire 3D scene. A NeRF (or Neural Radiance Field) isn't new, but creating one used to be slow. By applying machine learning techniques and specialized hardware to the process, NVIDIA made it fast enough to feel almost instant, hence Instant NeRF.
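
To make the idea concrete, here is a heavily simplified sketch of what a radiance field does at render time: a learned function maps any 3D point and viewing direction to a color and a density, and a pixel is produced by compositing samples along the camera ray. This is the textbook NeRF recipe in plain Python/NumPy with a placeholder network, not NVIDIA's optimized implementation, and every name in it is illustrative.

```python
import numpy as np

def radiance_field(points, view_dir):
    """Placeholder for the trained network: maps 3D points plus a viewing
    direction to an RGB color and a volume density (sigma) per point.
    In a real NeRF this is a neural network fit to the input photos."""
    rgb = np.zeros((len(points), 3))   # dummy colors
    sigma = np.ones(len(points))       # dummy densities
    return rgb, sigma

def render_ray(origin, direction, near=0.1, far=4.0, n_samples=64):
    """Volume-render one camera ray: sample points along the ray, query
    the field, and alpha-composite the colors front to back."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction      # sample positions in 3D
    rgb, sigma = radiance_field(points, direction)

    delta = np.diff(t, append=far)                # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)          # opacity of each segment
    # Transmittance: how much light survives to reach each sample.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)   # final pixel color

# Training tunes radiance_field so rendered pixels match the input photos
# from every known camera pose; rendering unseen poses gives the fly-through.
pixel = render_ray(origin=np.zeros(3), direction=np.array([0.0, 0.0, 1.0]))
```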

Being able to snap a series of photos or even record a video of a scene and then turn it into a freely explorable 3D environment offers a new realm of creative possibility for artists. It also provides a quick way to turn a real-world object into a 3D one. 

Some artists are already realizing the potential of Instant NeRF. In a few artist showcases, NVIDIA highlights artists’ abilities to share historic artworks, capture memories, and allow viewers of the artworks to more fully immerse themselves in the scenes without being beholden to the original composition.

Karen X. Cheng explores the potential of this tool in her creation, Through the Looking Glass, which uses NVIDIA Instant NeRF to create the 3D scene through which her camera ventures, eventually slipping through a mirror into an inverted world. 

Hugues Bruyère uses Instant NeRF in his creation, Zeus, to present a historic sculpture from the Royal Ontario Museum in a new way. This gives those who may never have a chance to see it in person the ability to view it from all angles nonetheless.

[Image: Instant NeRF capture of the inside of NVIDIA HQ (Image credit: NVIDIA)]

With tools like Instant NeRF, it’s clear that NVIDIA’s latest hardware has much more than just gamers in mind. With more and more dedicated AI power built into each chip, NVIDIA RTX GPUs are bringing new levels of AI performance to the table that can serve gamers and creators alike. 

The same Tensor Cores that make it possible to infer what a 4K frame in a game would look like using a 1080p frame as a reference are also making it possible to infer what a fully fleshed out 3D scene would look like using a series of 2D images. And NVIDIA’s latest GPUs put those tools right into your hands. 

Instant NeRF isn't something you just get to hear about; it's a tool you can try for yourself. Developers can dive right in with this guide, and less technical users can grab a simpler Windows installer here, which even includes a demo photo set. Since Instant NeRF runs on RTX GPUs, it's widely available, though the latest RTX 40 Series and RTX Ada GPUs can turn out results even faster.
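
For the curious, the developer route boils down to two steps: recover camera poses for your photos, then train and view the NeRF. Here is a rough sketch of that flow; the script name and flags come from NVIDIA's public instant-ngp repository as of this writing, the paths are invented, and the guide above remains the authoritative reference.

```python
import subprocess

# Illustrative path; point this at your own folder of photos inside a
# clone of the instant-ngp repository.
IMAGES = "data/my_scene/images"

# Step 1: run COLMAP over the photos to recover camera poses and write
# the transforms.json file Instant NeRF expects.
subprocess.run(
    ["python", "scripts/colmap2nerf.py", "--run_colmap", "--images", IMAGES],
    check=True,
)

# Step 2: launch the interactive trainer/viewer on the prepared scene;
# training typically converges in seconds on an RTX GPU.
subprocess.run(["./instant-ngp", "data/my_scene"], check=True)
```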

The ability of NVIDIA’s hardware to accelerate AI is key to powering a new generation of AI PCs. Instant NeRF is just one of many examples of how NVIDIA’s GPUs are enabling new capabilities or dramatically speeding up existing tools. To help you explore the latest developments in AI and present them in an easy-to-understand format, NVIDIA has introduced the AI Decoded blog series. You can also see all the ways NVIDIA is boosting AI performance at NVIDIA’s RTX for AI page. 

The new Sonos app just leaked – and it might just fix the S2 app’s many problems

Audio brand Sonos may soon completely redesign its S2 app, making it easier to set up its devices and to "strengthen connectivity between its many speakers." It'll also introduce several new customization options. This nugget of information comes from The Verge, which claims to have received screenshots of the revamp from sources close to the matter.

According to the report, the company is removing all the navigation tabs at the bottom of the screen and replacing them with a search bar to help soundbar owners find music quickly. The home screen will serve as a central hub consisting of "scrollable carousels" housing playlists and direct access to streaming services.

Of course, you will be allowed to customize the layout to your liking, and you can tweak a soundbar's settings through the "Your System" section in the app.

The Now Playing screen will see revisions as well. Both the shuffle and repeat buttons are going to be present on the page. Plus, the volume slider in the mini-player will appear “no matter where you are in the app.” 

Love it or hate it

For some people on the internet, this update has been a long time coming. The Verge shared links to posts from the Sonos subreddit of people complaining about how terrible the S2 app is. One of the more passionate rants talks about the software's poor functionality, as the Redditor was unable to turn off their speaker's alarms remotely despite the speaker being connected.

Most of the reviews on app stores are positive; however, several users on the Google Play Store listing do complain about an unintuitive UI and strange connection problems. People either love S2 or they hate it. There doesn't seem to be any real middle ground.

The Verge states the S2 update will roll out for Android and iOS on May 7, although the date could change.

Future plans

It’s worth mentioning that this isn’t the first time we’ve heard about the redesign.

Back in February, Bloomberg published a report detailing some of Sonos' plans for 2024, such as its focus on a "revamped mobile app codenamed Passport." At a glance, it appears Passport is the future S2 upgrade. Originally, the update was supposed to arrive in March, but the brand ran into development issues and was forced to delay it.

Bloomberg's piece goes on to mention two new Sonos devices, codenamed Duke and Disco. The latter is said to be a set of earbuds able to connect to Wi-Fi – a Sonos take on Apple AirPods.

Not much is known about the Duke, but it does share a name with a pair of Sonos headphones that were discovered back in late March on the Bluetooth SIG website. 91Mobiles dug through the page, revealing that the device could allow music streaming over Wi-Fi, is slated for a June launch, and should cost $450. The next couple of months are looking to be a busy time for Sonos. But as always, take the info in this leak with a grain of salt.

Until we learn more, check out TechRadar's list of the best soundbars for 2024.

OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company's official YouTube channel, and it's pretty trippy, to say the least. Worldweight consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It's unknown what text prompts were used on Sora; Kamp didn't share that information. But she did explain the inspiration behind the visuals in the description. She states that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share her vision. Thanks to Sora, this is no longer an issue: the footage displays what she had always envisioned. It's "how the song has always 'looked'" from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you'll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music's ethereal nature, it all fits together. Nothing feels jarringly out of place – if anything, the video embraces the nightmarish touches.

We should mention that August Kamp isn't the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called "Air Head," which was also made with the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospect of AI saving time and money on production.

August Kamp herself is a proponent of the technology, stating: "Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me". She looks forward to seeing "what other forms of storytelling" will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make "normal-looking" content remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It's still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.

ChatGPT just took a big step towards becoming the next Google with its new account-free version

The most widely available (and free) version of ChatGPT, which runs on the GPT-3.5 model, is being made available to use without having to create and log into a personal account. That means you can have conversations with the AI chatbot without them being tied to personal details like your email. However, OpenAI, the tech organization behind ChatGPT, limits what users can do without registering for an account. For example, unregistered users will be limited in the kinds of questions they can ask and in their access to advanced features.

This means there are still some benefits to making and using a ChatGPT account, especially if you’re a regular user. OpenAI writes in an official blog post that this change is intended to make it easy for people to try out ChatGPT and get a taste of what modern AI can do, without going through the sign-up process. 

In its announcement post on April 1, 2024, OpenAI explained that it’s rolling out the change gradually, so if you want to try it for yourself and can’t yet, don’t panic. When speaking to PCMag, an OpenAI spokesperson explained that this change is in the spirit of OpenAI’s overall mission to make it easier for people “to experience ChatGPT and the benefits of AI.”

To create an OpenAI account or not to create an OpenAI account

If you don't want your entries into the AI chatbot to be tied to the details you would have to disclose when setting up an account, such as your birthday, phone number, and email address, then this is a great development. That said, lots of people already create dummy accounts to use apps and web services, so the account requirement was never that hard to circumvent – you'd just need multiple email addresses and phone numbers to 'burn' for the purpose.

OpenAI does have a disclaimer stating that, by default, it stores your inputs and may use them to improve ChatGPT whether you're signed in or not – which I suspected was the case. It also states that you can turn this off via ChatGPT's settings, with or without an account.

If you do choose to make an account, you get some useful benefits: you can see your previous conversations with the chatbot, link others to specific conversations you've had, use the newly introduced voice conversation features and custom instructions, and upgrade to ChatGPT Plus, the premium subscription tier of ChatGPT that gives users access to GPT-4, OpenAI's latest large language model (LLM).

If you decide not to create an account and forgo these features, you can expect to see the same chat interface that users with accounts use. OpenAI will also be adding extra content safeguards for users who aren't logged in, saying that it has put in measures to block prompts and generated responses in more categories and topics. Its announcement post didn't include any examples of the types of topics or categories that will get this treatment, however.

An invitation to users, a power play to rivals?

I think this is an interesting change that will possibly tempt more people to try ChatGPT, and when they try it for the first time, it can seem pretty impressive. It allows OpenAI to give users a glimpse of its capabilities, which I imagine will convince some people to make accounts and access its additional features. 

This should keep expanding ChatGPT's user pool, some of whom may go on to become paying ChatGPT Plus subscribers. Perhaps this is a strategy that will pay off for OpenAI, and it might institute a sort of pass-it-down approach through the tiers as it introduces new generations of its models.

This easier accessibility could drive the kind of user growth that sees OpenAI become as commonplace as Google products in the near future. One of Google Search's appeals, for example, is that you can just fire up your browser and make a query in an instant. It's a user-centric way of doing things, and if OpenAI can make using ChatGPT similarly frictionless, then things could get seriously interesting.

OpenAI’s new voice synthesizer can copy your voice from just 15 seconds of audio

OpenAI has been rapidly developing its ChatGPT generative AI chatbot and Sora AI video creator over the last year, and it's now got a new artificial intelligence tool to show off: Voice Engine, which can create synthetic voices from just 15 seconds of audio.

In a blog post (via The Verge), OpenAI says it's been running “a small-scale preview” of Voice Engine, which has been in development since late 2022. It's actually already being used in the Read Aloud feature in the ChatGPT app, which (as the name suggests) reads out answers to you.

Once you've trained the voice from a 15-second sample, you can then get it to read out any text you like, in an “emotive and realistic” way. OpenAI says it could be used for educational purposes, for translating podcasts into new languages, for reaching remote communities, and for supporting people who are non-verbal.

This isn't something everyone can use right now, but you can go and listen to the samples created by Voice Engine. The clips OpenAI has published sound pretty impressive, though there is a slight robotic and stilted edge to them.

Safety first

[Image: the ChatGPT Android app – Voice Engine is already used in ChatGPT's Read Aloud feature (Image credit: OpenAI)]

Worries about misuse are the main reason Voice Engine is only in a limited preview for now: OpenAI says it wants to do more research into how it can protect tools like this from being used to spread misinformation and copy voices without consent.

“We hope to start a dialogue on the responsible deployment of synthetic voices, and how society can adapt to these new capabilities,” says OpenAI. “Based on these conversations and the results of these small scale tests, we will make a more informed decision about whether and how to deploy this technology at scale.”

With major elections due in both the US and UK this year, and generative AI tools getting more advanced all the time, that potential for misuse is a concern across every type of AI content – audio, text, and video – and it's getting increasingly difficult to know what to trust.

As OpenAI itself points out, this has the potential to cause problems with voice authentication measures, and scams where you might not know who you're talking to over the phone, or who's left you a voicemail. These aren't easy issues to solve – but we're going to have to find ways to deal with them.

macOS has been riddled with bugs lately – but the new macOS 14.4.1 update has just fixed the most notorious one

The last few weeks have been plagued with bugs and oddities for users who updated their Mac devices to macOS Sonoma 14.4, including a particularly thorny issue that functionally broke a lot of users' USB hubs. A new update, macOS 14.4.1, has just been released to address the most notorious issues – so you’ll want to update your system as soon as possible. 

According to TweakTown, the update was released yesterday to the public and will resolve an issue that affected USB hubs connected to the monitors people were using with their Macs. Discussions on Reddit threads and Apple’s support forums indicated that while the problem might not have been incredibly widespread, it was still affecting a decent number of people.

While Apple hasn't released any formal statement about the issues, it's good to see the company swoop in and provide a quick, no-frills fix. The new update doesn't offer any new features beyond the bug fixes, so if you're worried about catching another macOS bug from it, there's likely little that can go wrong with such a targeted patch.

Call the exterminator!

Sonoma 14.4 hasn't just been plaguing users' USB hubs; it's been taking down printers as well. While the printer issue hasn't been reported as widely as the USB breakdown, it's still another unwelcome bug that arrived courtesy of the update.

The update has also been reported to be deleting previously saved versions of files in users' iCloud Drives, effectively erasing people's backups if they moved files out of iCloud. Normally, when you save your files in iCloud Drive, all the edited versions of a file are kept for future reference, but thanks to yet another bug in Sonoma 14.4, these previous versions could be erased – which might mean all your work is gone.

Hopefully, another upcoming update will address these issues alongside the USB hub bug, but we’ll have to wait and see if that is the case. There’s no indication so far that the new update deals with all the currently reported issues – but you’re better off updating your system just in case. 

OpenAI just gave artists access to Sora and proved the AI video tool is weirder and more powerful than we thought

A man with a balloon for a head is somehow not the weirdest thing you'll see today thanks to a series of experimental video clips made by seven artists using OpenAI's Sora generative video creation platform.

Unlike OpenAI's ChatGPT AI chatbot and the DALL-E image generation platform, the company's text-to-video tool still isn't publicly available. However, on Monday, OpenAI revealed it had given Sora access to “visual artists, designers, creative directors, and filmmakers” and revealed their efforts in a “first impressions” blog post.

While all of the films, ranging in length from 20 seconds to a minute and a half, are visually stunning, most are what you might describe as abstract. OpenAI's Artist in Residence Alex Reben's 20-second film is an exploration of what could very well be some of his sculptures (or at least concepts for them), and creative director Josephine Miller's video depicts models melded with what looks like translucent stained glass.

Not all the videos are so esoteric.

[Image: OpenAI Sora AI-generated video image by Don Allen Stevenson III (Image credit: OpenAI Sora / Don Allen Stevenson III)]

If we had to give out an award for most entertaining, it might be multimedia production company shy kids' “Air Head”. It's an on-the-nose short film about a man whose head is a hot-air-filled yellow balloon. It might remind you of an AI-twisted version of the classic film, The Red Balloon, although only if you expected the boy to grow up and marry the red balloon and…never mind.

Sora's ability to convincingly merge the fantastical balloon head with what looks like a human body and a realistic environment is stunning. As shy kids' Walter Woodman noted, “As great as Sora is at generating things that appear real, what excites us is its ability to make things that are totally surreal.” And yes, it's a funny and extremely surreal little movie.

But wait, it gets stranger.

The other video that will have you waking up in the middle of the night is digital artist Don Allen Stevenson III's “Beyond Our Reality,” which is like a twisted National Geographic nature film depicting never-before-seen animal mergings like the Girafflamingo, flying pigs, and the Eel Cat. Each one looks as if a mad scientist grabbed disparate animals, carved them up, and then perfectly melded them to create these new chimeras.

OpenAI and the artists never detail the prompts used to generate the videos, nor the effort it took to get from the idea to the final video. Did they all simply type in a paragraph describing the scene, style, and level of reality and hit enter, or was this an iterative process that somehow got them to the point where the man's balloon head somehow perfectly met his shoulders or the Bunny Armadillo transformed from grotesque to the final, cute product?

That OpenAI has invited creatives to take Sora for a test run is not surprising. It's their livelihoods in art, film, and animation that are most at risk from Sora's already impressive capabilities. Most seem convinced it's a tool that can help them more quickly develop finished commercial products.

“The ability to rapidly conceptualize at such a high level of quality is not only challenging my creative process but also helping me evolve in storytelling. It's enabling me to translate my imagination with fewer technical constraints,” said Josephine Miller in the blog post.

Go watch the clips but don't blame us if you wake up in the middle of the night screaming.

YouTube Music will finally let you look up tracks just by singing into your phone

It took a little while, but YouTube Music is, at long last, giving users the ability to search for songs just by singing a tune into a smartphone’s microphone.

The general YouTube app has had this feature since mid-October 2023, and judging from recently found images on Reddit, the version on YouTube Music functions in exactly the same way. In the upper right corner next to the search bar is an audio chart icon. Tapping it activates song search, where you then either play, sing, or hum a tune into your device.

Using the power of artificial intelligence, the app will quickly bring up a track that, according to 9To5Google, matches “the sound to the original recording.” The tool’s accuracy may depend entirely on your karaoke skills. 
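
Google hasn't said how the matching works under the hood, so treat the following as a generic illustration of how hum-to-search systems are often built rather than YouTube's actual pipeline: reduce the audio to a pitch contour, then compare contours with dynamic time warping (DTW), which forgives singing too fast, too slow, or in the wrong key. Every function here is an invented sketch.

```python
import numpy as np

def pitch_contour(audio, sr=16000, frame=1024, hop=512):
    """Crude per-frame pitch estimate via autocorrelation."""
    pitches = []
    for start in range(0, len(audio) - frame, hop):
        window = audio[start:start + frame]
        corr = np.correlate(window, window, mode="full")[frame - 1:]
        lag = np.argmax(corr[32:]) + 32   # ignore implausibly short periods
        pitches.append(sr / lag)
    # Log scale makes equal musical intervals equal distances; removing the
    # mean makes the match key-invariant (sing in any key you like).
    contour = np.log2(np.array(pitches))
    return contour - contour.mean()

def dtw_distance(a, b):
    """Dynamic time warping cost between two contours (lower = closer match)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a query frame
                                 cost[i, j - 1],      # skip a reference frame
                                 cost[i - 1, j - 1])  # match both
    return cost[n, m]

# A search would rank the catalog by dtw_distance(query_contour, track_contour).
```

Production systems generally use learned audio embeddings instead of raw DTW, but the shape of the problem – turn a melody into a compact, key- and tempo-tolerant representation, then do a nearest-neighbor search – is the same.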

Missing details

Because there hasn't been an official announcement yet, a lot of details are missing. For starters, it's unknown how long you're supposed to sing or hum. The original tool required a three-second input before it could perform a search. Presumably this one will take the same amount of time, but without official word from the platform, it's hard to say with total confidence.

Online reports claim the update is already available on YouTube Music for iOS. However, 9To5Google states it couldn't find the feature on either its iPhones or Android devices. Our Android phone didn't receive the update either, so it's probably seeing a limited release at the moment.

We reached out to Google asking if it would like to share official info about YouTube Music's song search tool, along with a couple of other questions. More specifically, we wanted to know whether the feature is rolling out to everyone or will require a YouTube Music Premium plan. We will update this story if we get answers.

You can't listen to music without a good pair of headphones. For recommendations, check out TechRadar's list of the best wireless headphones for 2024.

The display specs for Samsung’s first XR headset may have just leaked

We're expecting a Samsung XR/VR headset in the near future – that's extended reality and/or virtual reality – and a new leak gives us a better idea of the display specs the hardware device might bring along with it.

According to Daily Korea (via SamMobile), the headset will use a micro-OLED display supplied by Sony, with a size of 1.3 inches, a resolution of 3,840 x 3,552 pixels, a 90Hz refresh rate, and a maximum brightness of 1,000 nits.

For comparison, the Meta Quest 3 headset, launched last year, offers a resolution of 2,064 x 2,208 pixels per eye, an experimental 120Hz refresh rate mode, and a 90Hz standard refresh rate mode. The size of its displays isn't specified, but they use LCD technology, and you can read our thoughts on them in our Meta Quest 3 review.

As for the Apple Vision Pro, inside that particular device we've got micro-OLED tech. Apple doesn't specify a resolution, but does say there are 23 million pixels across both displays, and the refresh rate goes up to 100Hz. As our Apple Vision Pro review will tell you, it offers rather impressive visual performance.
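
Those specs are easier to compare as raw pixel counts. Assuming the leaked 1.3-inch panel is used once per eye (a two-panel design is implied but not outright stated in the report), the back-of-the-envelope arithmetic looks like this:

```python
samsung_per_eye = 3840 * 3552   # leaked Sony micro-OLED panel
quest3_per_eye = 2064 * 2208    # Meta Quest 3 LCD, per eye
vision_pro_both = 23_000_000    # Apple's "23 million across both displays"

print(f"Samsung (leak): {samsung_per_eye / 1e6:.1f} MP/eye")
print(f"Quest 3:        {quest3_per_eye / 1e6:.1f} MP/eye")
print(f"Vision Pro:     {vision_pro_both / 2 / 1e6:.1f} MP/eye (approx.)")
# Samsung (leak): 13.6 MP/eye
# Quest 3:        4.6 MP/eye
# Vision Pro:     11.5 MP/eye (approx.)
```

On paper, then, the leaked panel would out-resolve not just the Quest 3 but also the Vision Pro's roughly 11.5 million pixels per eye.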

Coming soon?

Samsung announced it was working on an XR headset all the way back at the start of 2023, with Google and Qualcomm helping out. It's yet to see the light of day though – there have been rumors that the launch of the Vision Pro gave Samsung executives enough food for thought that they decided to take more time over their own device.

Extended reality or XR, if you're unfamiliar, is the catch-all term for virtual reality (entirely enclosed virtual worlds), augmented reality (digital elements laid on top of the real world), and mixed reality (digital elements and the real world interacting in more realistic ways, like an upgraded augmented reality).

So far we've seen a battery leak, got confirmation that the Snapdragon XR2 Plus Gen 2 chipset will power the device, and… that's been about it for specs and features. There've been more leaks regarding a launch, which could happen in the second half of the year.

Specifically, that might mean July. Samsung held an Unpacked launch event in July 2023, and is expected to do the same this year, giving full unveilings to the Galaxy Z Fold 6, the Galaxy Z Flip 6, the Galaxy Watch 7, the Galaxy Ring, and probably more besides.

Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform that you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Up until recently, Midjourney, which is built on a diffusion model (add noise to an original image, then train the model to de-noise it so it learns what the image looks like), could create some beautiful and astonishingly realistic images based on prompts you put in the Discord channel ("/imagine: [prompt]"), but unless you were asking it to alter one of its generated images, every image set and character would look different.
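
For anyone curious what that parenthetical means in practice, here is a toy diffusion training step in PyTorch. The model and the noise schedule are invented for illustration; Midjourney hasn't published its training code, so this is the general technique, not its implementation.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, images, optimizer, n_steps=1000):
    """Toy denoising-diffusion step: corrupt real images with a random
    amount of noise, then train the model to predict that noise."""
    t = torch.randint(0, n_steps, (images.shape[0],))      # random noise level
    alpha = (1.0 - t.float() / n_steps).view(-1, 1, 1, 1)  # crude schedule
    noise = torch.randn_like(images)
    noisy = alpha.sqrt() * images + (1 - alpha).sqrt() * noise
    predicted = model(noisy, t)           # network guesses the added noise
    loss = F.mse_loss(predicted, noise)   # learning to de-noise = learning images
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Generation runs the process in reverse: start from pure noise and repeatedly subtract the model's predicted noise, steering with the text prompt, until an image emerges.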

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

[Images: Midjourney AI character creation – "I guess I don't know how to describe myself"; "Things are getting weird" (Image credit: Future)]

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new "--cref" parameter and the URL for my generated image (with the character I liked), I forced Midjourney to generate new images with the same AI character in them.

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

[Images: Midjourney AI character creation – "An homage to Charles Schulz" (Image credit: Future)]

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better results you'll get in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural-language prompts has always been easy, but training the system to make your character do something might typically take some programming or even AI model expertise. Here it's just a simple prompt, one parameter, and an image reference.
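
For the record, the pattern looks something like "/imagine prompt: a boy flying a kite in a park --cref [URL of your character image]". The "--cref" character-reference parameter is the piece Midjourney introduced for this feature; an optional "--cw" character-weight value is also supposed to control how closely the character is copied, though exact parameter behavior may change as the feature evolves.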

[Images: Midjourney AI character creation – "Got a lot closer with my photo as a reference" (Image credit: Future)]

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same "--cref" parameter. I found an online photo of myself and entered this prompt: "/imagine: making a pizza --cref [link to a photo of me]".

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

[Image: Midjourney AI character creation – "Oh, hey, Not Tim Cook" (Image credit: Future)]

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

[Image: Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)]

Midjourney's AI will improve, and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed storyboarding but also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.
