Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform that you can currently use through Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney, which is built on a diffusion model (noise is added to an original image and the model learns to remove it, picking up the image's features in the process), could create beautiful and astonishingly realistic images from prompts you typed into the Discord channel (“/imagine: [prompt]”), but unless you asked it to alter one of its generated images, every image set and character would look different.

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

[Image gallery: Midjourney AI character creation. Captions: “I guess I don’t know how to describe myself”; “Things are getting weird”. (Image credit: Future)]

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new “--cref” (character reference) parameter and the URL of the generated image with the character I liked, I had Midjourney generate new images featuring that same AI character.
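For anyone who wants to try it, the syntax is simple (the prompts below are placeholder examples of my own, not the exact ones I used): you first generate and upscale a character with something like “/imagine: a cheerful cartoon boy with a round head and a striped shirt”, then reuse him with “/imagine: a cartoon boy flying a kite in a park --cref [URL of the upscaled image]”. Midjourney also documents a “--cw” (character weight) value from 0 to 100 that controls how closely the new image sticks to the reference.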

Later, I described a character with the qualities of a Charles Schulz Peanuts character and, once I had one I liked, reused him in a different prompt scenario where his kite was stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

[Image gallery: Midjourney AI character creation. Caption: “An homage to Charles Schulz”. (Image credit: Future)]

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend that the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better results you'll get in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural-language prompts has always been easy, but getting the system to make your character do something would typically take some programming or even AI-model expertise. Here it's just a simple prompt, one parameter, and an image reference.

[Image gallery: Midjourney AI character creation. Caption: “Got a lot closer with my photo as a reference”. (Image credit: Future)]

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.

Midjourney quickly spat out its interpretations of me making a pizza. At best, they capture the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL of my favorite me.

[Image: Midjourney AI character creation. Caption: “Oh, hey, Not Tim Cook”. (Image credit: Future)]

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

[Image: Midjourney AI character creation. Caption: “Midjourney me enjoying pizza with my doppelgänger CEO”. (Image credit: Future)]

Midjourney's AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed storyboarding but also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza, or between a pizza and an iPad – at least not yet.


I created an AI app in 10 minutes and I might never think about artificial intelligence in the same way again

Pretty much anything we can do with AI today might have seemed like magic just a year ago, but MindStudio's platform for creating custom AI apps in a matter of minutes feels like a new level of alchemy.

The six-month-old free platform, which you can find right now at youai.ai, is a visual studio for building AI workflows, assistants, and chatbots. In its short lifespan it has already been used, according to CEO Dimitry Shapiro, to build more than 18,000 apps.

Yes, he called them “apps”, and if you're struggling to understand how or why anyone might want to build AI applications, just look at OpenAI's relatively new GPT apps (aka GPTs). These let you lock a powerful GPT model into topic-based thinking that you can package up, share, and sell. Shapiro, however, noted the limits of OpenAI's approach.

He likened GPTs to “bookmarking a prompt” within the GPT sphere. MindStudio, on the other hand, is model-agnostic: the system lets you use multiple generative models within one app.

If adding more model options sounds complicated, I can assure you it's not. MindStudio is the AI development platform for non-developers. 

Watch and learn

[Image: MindStudio. Caption: “Choose your template.” (Image credit: MindStudio)]

To get you started, the company provides an easy-to-follow 18-minute video tutorial. The system also helps by offering a healthy collection of templates (many of them business-focused), or you can choose a blank template. I followed the guide to recreate the demo AI app (a blog post generator), and my only criticism is that the video is slightly out of date, with some interface elements having been moved or renamed. There are some prompts to note the changes, but the video could still do with a refresh.

Still, I had no trouble creating that first AI blog generator. The key here is that you can get a lot of the work done through a visual interface that lets you add blocks along a workflow and then click on each one to customize it, add details, and choose which AI model it should use (the list includes GPT-3.5 Turbo, PaLM 2, Llama 2, and Gemini Pro). You don't have to use a particular model for each task in your app, but some fit better than others; you might, for example, use GPT-3.5 for fast chatbots while PaLM handles math. MindStudio cannot, at least yet, recommend which model to use and when.

[Image gallery: MindStudio. Captions: “Connect the boxes”; “And then edit their contents”. (Image credit: MindStudio)]

The act of adding training data is also simple. I was able to find web pages of information, download the HTML, and upload it to MindStudio (you can upload up to 150 files to a single app). MindStudio uses the information to inform the AI, but it won't cut and paste text from those pages into your app's responses.

Most of MindStudio's clients are businesses, and it does hide some more powerful features (such as embedding on third-party websites) and models (like GPT-4 Turbo) behind a paywall, but anyone can try their hand at building and sharing AI apps (you get a URL for sharing).

Confident in my newly acquired, if limited, knowledge, I set about building an AI app revolving around mobile photography advice. Granted, I used the framework I'd just learned in the AI blog post generator tutorial, but it still went far better than I expected.

One of the nice things about MindStudio is that it allows for as much or as little coding as you're prepared to do. In my case, I had to reference exactly one variable that the model would use to pull the right response.

[Image: MindStudio. Caption: “Options include setting your model’s ‘temperature’”. (Image credit: MindStudio)]

There are a lot of smart and dead-simple controls that can even teach you something about how models work. MindStudio lets you set, for instance, the 'Temperature' of your model to control the randomness of its responses. The higher the 'temp', the more unique and creative each response. If you like your model verbose, you can drag another slider to set a response size of up to 3,000 characters.
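To make that concrete, here's a minimal sketch (my own illustration in Python using OpenAI's client library, not MindStudio's code) of what those two sliders correspond to when you call a model directly: temperature and an output cap are simply parameters on the request.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Give me three tips for better phone photos."}],
    temperature=1.2,  # higher values produce more random, 'creative' responses
    max_tokens=750,   # caps the response length (measured in tokens, not characters)
)
print(response.choices[0].message.content)

MindStudio hides all of this behind sliders, which is rather the point: you get the same knobs without writing the call yourself.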

The free service includes unlimited consumer usage and messages, some basic metrics, and the ability to share your AI via a link (as I've done here). Pro users can pay $23 a month for the more powerful models like GPT-4, less MindStudio branding, and, among other things, site embedding. The $99-a-month tier includes all you get with Pro, but adds the ability to charge for access to your AI app, better analytics, API access, full chat transcripts, and enterprise support.

[Image gallery: MindStudio. Captions: “Look, ma, I made an AI app.”; “It’s smarter than I am.” (Image credit: MindStudio)]

I can imagine small and medium-sized businesses using MindStudio to build customer engagement and content capture on their sites, and even as a tool for guiding users through their services.

Even at the free level, though, I was surprised at the level of customization MindStudio offers. I could add my own custom icons and art, and even build a landing page.

I wouldn't call my little AI app anything special, but the fact that I could take the germ of an idea and turn it into a bespoke chatbot in 10 minutes is surprising even to me. That I get to choose the right model for each job within an AI app is even better; and that this level of fun and utility is free is the icing on the cake.


The Apple Vision Pro is compatible with Intel Apple Macs – even if the performance may not be the same

The Apple Vision Pro has finally launched, and if you were thinking you might have to upgrade your Mac or MacBook to use the new headset (piling another expensive purchase onto an already very pricey device), there's some good news: the Vision Pro appears to be compatible with Intel-based Macs, potentially opening the door for users with older models.

A support page on the official Apple website, explaining how to use the headset with a Mac as a display, reveals that support for this feature is not limited to Apple Silicon Macs (such as recent MacBooks with the M1, M2, or M3 chips). The post explains that if you happen to be using a Mac with an Intel processor, you can still use the Vision Pro as a workspace; however, you'll be working with resolutions capped at 3K rather than the 4K you'd normally get with an Apple Silicon-powered Mac.

You’ll still be able to resize the Virtual Display window and use the computer's keyboard and trackpad. That being said, if you're looking to take advantage of the Virtual Display feature, your Mac will need to be running macOS 14 Sonoma or newer, so if you're planning on giving it a go you may need to upgrade your operating system first. Very old Macs and MacBooks may not be compatible with macOS Sonoma, which means you won't be able to use the Vision Pro as an additional screen with those machines.

Cool, but not very useful.

While I am glad to see support for older Macs, I’m not sure I see the point. Of course, Intel-based Macs are still good computers despite their age, but with the cost of the Apple Vision Pro, you could buy yourself an M3 iMac and have plenty of cash to spare. 

Of course, plenty of people with an older iMac collecting dust at home might like to give it a go, but again, the Apple Vision Pro isn't exactly a product you buy on a whim. I wouldn't really encourage anyone to buy the headset if they work exclusively on an Intel Mac, since you won't get the full 4K experience. You'd be better off upgrading to a new MacBook, Mac mini, or iMac and buying a Vision Pro later… if at all.

There’s also no guarantee that this Intel Mac support will last forever – now that the M3 iMac has launched, I wouldn't be surprised if support for newer accessories or features started to be limited. So, if you're in a position to try out the Vision Pro with your older Mac, I suggest you get on it soon and decide whether you like the pairing enough to justify upgrading to an Apple Silicon Mac – because you might have to in the future.

Via 9to5Mac


Windows 10 users may not get Copilot yet due to the same weird bug that’s plagued Windows 11

Windows 10 users are officially getting Copilot, with the desktop assistant rolling out now, but not everyone has got the AI yet – and if you haven’t, that could be due to a bug.

That glitch affects Windows 10 setups with multiple monitors, and it’s an odd one as highlighted by Microsoft in the known issues for patch KB5032278, which is the November preview update for Windows 10 – though it’s a bug Windows 11 users will be familiar with.

The problem is that icons on the desktop can shift in a seemingly random fashion across the different screens in a Windows 10 multi-monitor rig, and other icon alignment issues can manifest, too.

As mentioned this has been seen on Windows 11 already, and with Copilot now rolling out to Windows 10 users, we shouldn’t really be too surprised that the same thing is occurring.

Analysis: Upgrade block

If you haven’t yet got Copilot on Windows 10, and you run multiple screens, this is the reason why – Microsoft has put a block in place to prevent upgrades carrying the AI assistant from being delivered to these PCs (and the same is true for Windows 11).

Microsoft tells us: “We are working on a resolution and will provide an update in an upcoming release.”

Even if you don’t have multiple monitors, but you’ve run a multi-monitor system in the past, you may find your PC is blocked from taking on this upgrade. As Microsoft explains: “Copilot in Windows (in preview) might not be available on devices that have been used or are currently being used in a multi-monitor configuration.”

Of course, this new update for Windows 10 is optional anyway, and as a preview, it’s expected that it might be bugged in some respects.

The fix will hopefully come soon and Windows 10 and Windows 11 users alike with multiple monitors should then be able to enjoy Copilot – though the AI is pretty limited in its functionality in this initial incarnation, it has to be said. Eventually, it will have sweeping powers to manipulate Windows settings, but right now the reality is that Copilot is pretty much a glorified Bing AI in a side panel.

Via XDA Developers


Over 13,000 Vivo phones found to be using the same IMEI number

A smartphone's IMEI is like a digital fingerprint: it is unique to each and every device. Telecom companies use it to provide network connectivity to a SIM card, and because no two devices should share an IMEI number, it is also used to track and trace lost devices and criminals.
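For context, an IMEI is a 15-digit code whose final digit is a Luhn check digit. The short Python sketch below (my own illustration, not something from the police report) shows how that check works, and also why it offers no protection here: it only confirms a number is well formed, not that it is unique to one handset.

def imei_is_valid(imei: str) -> bool:
    # Standard Luhn check: double every second digit, subtract 9 from any
    # result above 9, and the total must be divisible by 10.
    if len(imei) != 15 or not imei.isdigit():
        return False
    total = 0
    for i, ch in enumerate(imei):
        digit = int(ch)
        if i % 2 == 1:  # for a 15-digit number, these are the doubled positions
            digit *= 2
            if digit > 9:
                digit -= 9
        total += digit
    return total % 10 == 0

print(imei_is_valid("490154203237518"))  # a commonly cited sample IMEI -> True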

However, in a bizarre case, the police in Meerut, Uttar Pradesh, have stated that they have found not one but over 13,500 smartphones using the same IMEI number. Identifying this as a severe security issue, the police have registered a case against Chinese smartphone maker Vivo.

As per reports, the case came to light after a sub-inspector from the Meerut Police got his smartphone back from repairs that cost him Rs. 2,605 in September last year. Even after the repairs, the phone showed a system error, and he later found that the device's IMEI number had been changed.

A case was filed and notices were sent to the smartphone maker; after an unsatisfactory response from Vivo, a complaint was filed with the cyber cell team. It was the cyber cell team that identified 13,557 different Vivo phones with the same IMEI number operating across the country.

While the IMEI number may sound rather irrelevant to a common user, duplication on this scale becomes a grave security concern because it makes it far harder for police to track and intercept criminals.

Back in 2012, a similar incident was reported when 18,000 phones were found to be using the same IMEI number. Later, in 2017, the federal government announced that tampering with IMEI numbers is a punishable offence. Last year, over a lakh (100,000) stolen phones were found to be using the same IMEI number.

While this could be seen as negligence on the company's part, the Meerut Police have already started an investigation into the matter. We have also reached out to Vivo and will update this story once we receive a response.
