The Meta Quest 3 Lite gets another image leak – and a new name

Nothing is official yet, but it seems increasingly likely that a cheaper version of the Meta Quest 3 is on the way. Now we have a freshly leaked image of the headset – which apparently isn't called the Meta Quest 3 Lite.

That's the name we've used for previous leaks, but as per a now-deleted Reddit post reposted to social media by @Lunayian (via Android Central), this upcoming device is actually going to be called the Meta Quest 3s.

Adding a lower-case letter to the name of a more affordable product has been done before, and doesn't come with the connotations of sub-standard quality that you might get with 'Lite', so we're inclined to believe the leak could be accurate.

This information apparently comes from slides taken from an internal Meta presentation, and one of them shows the Meta Quest 3 and the Meta Quest 3s side by side – which might soon be the choice for buyers heading to the Meta online store.

Headset design

As for the design of the headset, it has a slightly unusual camera configuration on the front – one that was missing from an earlier image leak, though given the importance of passthrough tech for mixed reality applications, cameras were always going to appear somewhere.

The design looks a little thicker, probably due to the cheaper components inside, and is reminiscent of both the Oculus Go and the Oculus Quest 2. Specs-wise, the resolution is listed as 1832 x 1920 pixels, compared to 2064 x 2208 pixels on the Meta Quest 3.

As usual, it's difficult to verify the authenticity of these images, especially as the original Reddit post has been deleted and came from a source we haven't heard from before. This might be the cheaper Meta Quest 3 – or it might not be.

It would certainly make sense for Meta to want to put out a cheaper model to appeal to a broader range of consumers, and it's apparently something Apple is thinking about for its Vision Pro too. Watch this space.

TechRadar – All the latest technology news

Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform that you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney, which is built on a diffusion model (noise is added to an original image and the model learns to de-noise it, thereby learning about the image), could create some beautiful and astonishingly realistic images based on prompts you put in the Discord channel (“/imagine: [prompt]”). But unless you asked it to alter one of its generated images, every image set and character would look different.
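The parenthetical above describes the forward half of diffusion training. As a rough illustrative sketch – a toy NumPy example, not Midjourney's actual code – noise is blended into an image over many steps, and a model is trained to predict that noise so generation can run the process in reverse:

```python
import numpy as np

def add_noise(image, t, betas):
    """Forward diffusion step: blend the clean image toward Gaussian noise.

    At step t the image keeps sqrt(alpha_bar) of its signal and gains
    sqrt(1 - alpha_bar) worth of noise, where alpha_bar is the running
    product of (1 - beta) over all steps up to t.
    """
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    noise = np.random.randn(*image.shape)
    return np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise

# A toy 8x8 "image" and a linear 100-step noise schedule.
image = np.ones((8, 8))
betas = np.linspace(1e-4, 0.02, 100)

early = add_noise(image, 5, betas)   # barely noised
late = add_noise(image, 99, betas)   # mostly noise

# Later steps drift further from the clean image; a network trained to
# predict the added noise can then de-noise step by step in reverse.
```

The de-noising network that runs this in reverse is the part that actually learns what images look like.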

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

[Image gallery: Midjourney AI character creation – “I guess I don’t know how to describe myself”; “Things are getting weird” (Image credit: Future)]

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney's four generated image options, and upscaled it for more definition. Then, using the new “--cref” parameter and the URL for my generated image (with the character I liked), I forced Midjourney to generate new images featuring the same AI character.
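That workflow boils down to composing one prompt string. As a sketch, here is a hypothetical helper (the function name and URL are placeholders, and the optional --cw character-weight parameter, 0–100, is Midjourney's companion setting to --cref) that assembles such a prompt:

```python
def character_prompt(scene: str, character_url: str, weight: int = 100) -> str:
    """Compose a Midjourney prompt that reuses a character via --cref.

    --cw (character weight) ranges from 0 to 100: higher values keep
    more of the referenced character's face, hair, and clothing.
    """
    if not 0 <= weight <= 100:
        raise ValueError("character weight must be between 0 and 100")
    return f"/imagine: {scene} --cref {character_url} --cw {weight}"

# Reuse an upscaled character image (URL is a placeholder) in a new scene.
print(character_prompt("flying a kite on a windy hill",
                       "https://example.com/my-character.png"))
```

Pasting the resulting line into the Discord channel is all it takes to drop the same character into a new scene.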

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

[Image gallery: Midjourney AI character creation – “An homage to Charles Schulz” (Image credit: Future)]

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better results you'll get in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural language prompts has always been easy, but training a system to make your character do something might typically take some programming or even AI model expertise. Here it's just a simple prompt, one parameter, and an image reference.

[Image gallery: Midjourney AI character creation – “Got a lot closer with my photo as a reference” (Image credit: Future)]

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

Midjourney AI character creation

Oh, hey, Not Tim Cook (Image credit: Future)

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

Midjourney AI character creation

Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)

Midjourney's AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed storyboarding but also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.


Adobe’s new beta Express app gives you Firefly AI image generation for free

Adobe has released a new beta version of its Express app, letting users try out its Firefly generative AI on mobile for the first time.

The AI functions much like Firefly on the web, since it has a lot of the same features. You can have the AI engine create images from a single text prompt, insert or remove objects from images, and add words with special effects. The service also offers resources like background music tracks, stock videos, and a content scheduler for posting on social media platforms. It’s important to mention that all these features and more normally require a subscription to Adobe Express Premium, but, according to the announcement, everything will be available for free while the beta is ongoing. Once it’s over, you’ll have to pay the $10-a-month subscription to keep using the tools.

Adobe Express with Firefly features

(Image credit: Adobe)

Art projects on the current Express app will not be found in the beta – at least not right now. Ian Wang, who is the vice president of product for Adobe Express, told The Verge that once Express with Firefly exits beta, all the “historical data from the old app” will carry over to the new one. 

The new replacement

Adobe is planning on making Express with Firefly the main platform moving forward. It’s unknown when the beta will end. A company representative couldn’t give us an exact date, but they told us the company is currently collecting feedback for the eventual launch. When the trial period ends, the representative stated, “All eligible devices will be automatically updated to the new [app]”.

We managed to gain access to the beta and the way it works is pretty simple. Upon installation, you’ll see a revolving carousel of the AI tools at the top. For this quick demo, we’ll have Firefly make an image from a text prompt. Tap the option, then enter whatever you want to see from the AI.

Adobe Express with Firefly demo

(Image credit: Future)

Give it a few seconds to generate the content, and you’ll be given multiple pictures to choose from. From there, you can edit the image to your liking. After you’re all done, you can publish the finished product on social media or share it with someone.

Availability

Android users can download the beta directly from the Google Play Store. iPhone owners, on the other hand, will have a harder time. Apple has restrictions on how many testers can have access to beta software at a time. iOS users will instead have to join Adobe’s waitlist first and wait to get chosen. If you’re one of the lucky few, the company will guide you through the process of installing the app on your iPhone.

There is a system requirements page listing all of the smartphones eligible for the beta; however, it doesn’t appear to be a super strict list. The device we used was a OnePlus Nord N20, and it ran the app just fine. Adobe’s website also lists all the supported languages, which include English, French, Korean, and Brazilian Portuguese.

Check out TechRadar's list of the best photo editors for 2024 if you want more robust tools.


Google’s Gemini will be right back after these hallucinations: image generator to make a return after historical blunders

Google is gearing up to relaunch its image creation tool that’s part of the newly-rebranded generative artificial intelligence (AI) bot, Gemini, in the next few weeks. The generative AI image creation tool is in theory capable of generating almost anything you can dream up and put into words as a prompt, but “almost” is the key word here. 

Google has pumped the brakes on Gemini’s image generation after Gemini was observed creating historical depictions and other questionable images that were considered inaccurate or offensive. However, it looks like Gemini could return to image generation soon, as Google DeepMind CEO Demis Hassabis announced that Gemini will be rebooted in the coming week after taking time to address these issues. 

Image generation came to Gemini earlier in February, and users were keen to test its abilities. Some people attempted to generate images depicting certain historical periods and got results that appeared to deviate greatly from accepted historical fact. Some of these users took to social media to share their results and direct criticism at Google.

The images caught many people’s attention and sparked many conversations, and Google has recognized the images as a symptom of a problem within Gemini. The tech giant then chose to take the feature offline and fix whatever was causing the model to dream up such strange and controversial pictures. 

Speaking at a panel at the Mobile World Congress (MWC) event in Barcelona, Hassabis confirmed that Gemini was not working as intended, and that it would take some weeks to amend it and bring it back online.

Person using a laptop in a coffeeshop

(Image credit: Shutterstock)

If at first your generative AI bot doesn't succeed…

Google’s first attempt at a generative AI chatbot was Bard, which saw a lukewarm reception and didn’t win users over from the more popular ChatGPT in the way Google had hoped, after which it changed course and debuted its revamped and rebranded family of generative models, Gemini. As with ChatGPT, Google is now offering a premium tier for Gemini, which provides advanced features for a subscription.

The examples of Gemini's misadventures have also reignited discussions about AI ethics generally and Google’s AI ethics specifically, and around issues like the accuracy of generated AI output and AI hallucinations. Companies like Microsoft and Google are pushing hard to win the AI assistant arms race, but in the rush they’re in danger of releasing products with flaws that could undermine their hard work.

AI-generated content is becoming increasingly popular and, especially given their size and resources, these companies can (and really, should) be held to a high standard of accuracy. High-profile fails like the one Gemini experienced aren’t just embarrassing for Google – they could damage the product’s perception in the eyes of consumers. There’s a reason Google rebranded Bard after its much-mocked debut.

There’s no doubt that AI is incredibly exciting, but Google and its peers should be mindful that rushing out half-baked products just to get ahead of the competition could spectacularly backfire.


Google explains how Gemini’s AI image generation went wrong, and how it’ll fix it

A few weeks ago Google launched a new image generation tool for Gemini (the suite of AI tools formerly known as Bard and Duet) which allowed users to generate all sorts of images from simple text prompts. Unfortunately, Google’s AI tool repeatedly missed the mark and generated inaccurate and even offensive images that led a lot of us to wonder – how did the bot get things so wrong? Well, the company has finally released a statement explaining what went wrong, and how it plans to fix Gemini. 

The official blog post addressing the issue states that when designing the text-to-image feature for Gemini, the team behind Gemini wanted to “ensure it doesn’t fall into some of the traps we’ve seen in the past with image generation technology — such as creating violent or sexually explicit images, or depictions of real people.” The post further explains that users probably don’t want to keep seeing people of just one ethnicity or other prominent characteristic. 

So, to offer a pretty basic explanation for what’s been going on: Gemini has been throwing up images of people of color when prompted to generate images of white historical figures, giving users ‘diverse Nazis’, or simply ignoring the part of your prompt where you’ve specified exactly what you’re looking for. While Gemini’s image capabilities are currently on hold, when you could access the feature you’d specify exactly who you’re trying to generate – Google uses the example “a white veterinarian with a dog” – and Gemini would seemingly ignore the first half of that prompt and generate veterinarians of all races except the one you asked for. 

Google went on to explain that this was the outcome of two crucial failings – firstly, Gemini was tuned to show a range of different people, but failed to account for cases that clearly should not show a range. Alongside that, in trying to make a more conscious, less biased generative AI, Google admits the “model became way more cautious than we intended and refused to answer certain prompts entirely – wrongly interpreting some very anodyne prompts as sensitive.”

So, what's next?

At the time of writing, the ability to generate images of people on Gemini has been paused while the Gemini team works to fix the inaccuracies and carry out further testing. The blog post notes that AI ‘hallucinations’ are nothing new when it comes to complex deep learning models – even Bard and ChatGPT had some questionable tantrums as the creators of those bots worked out the kinks. 

The post ends with a promise from Google to keep working on Gemini’s AI-powered people generation until everything is sorted, with the note that while the team can’t promise it won’t ever generate “embarrassing, inaccurate or offensive results”, action is being taken to make sure it happens as little as possible. 

All in all, this whole episode puts into perspective that AI is only as smart as we make it. Our editor-in-chief Lance Ulanoff succinctly noted that “When an AI doesn't know history, you can't blame the AI.” With how quickly artificial intelligence has swooped in and crammed itself into various facets of our daily lives – whether we want it or not – it’s easy to forget that the public proliferation of AI started just 18 months ago. As impressive as the tools currently available to us are, we’re ultimately still in the early days of artificial intelligence. 

We can’t rain on Google Gemini’s parade just because its mistakes were more visually striking than, say, ChatGPT’s recent gibberish-filled meltdown. Google’s temporary pause and reworking will ultimately lead to a better product, and sooner or later we’ll see the tool as it was meant to be.


Google Bard finally gets a free AI image generator – here’s how to try it

Considering how popular image generation is right now, Google Bard’s recent update adding an AI image generator seems all but inevitable.

According to the official Google Bard update page, using the tool is quite simple. You can activate the AI image generator by simply entering “a few words” into the search bar, “starting with English prompts.” Then you “click ‘Generate more’ for more options and download the ones you like.”

Those generated images are stored in pinned chats, recent chats, and Bard Activity, and can be deleted from your Bard Activity by deleting the prompt that generated them.

Bard’s image generation is powered by its updated Imagen 2 model, which is meant to balance speed with quality to deliver photorealistic images. The new feature is available in most countries, including the US, but not in the European Economic Area (EEA), Switzerland, or the UK. It’s also only available in English and only to those 18 and above, though how Google would enforce the age restriction beyond a generic ‘Yes’ or ‘No’ question is unclear.

In terms of responsibility, Google states in its official blog that “Bard uses SynthID to embed digitally identifiable watermarks into the pixels of generated images,” which makes its AI-generated images distinguishable from works created by humans. Most likely this feature was developed to prevent people from using generated images commercially, which tracks, since that would open up a legal can of worms.

Google also asserts that it seeks to limit violent, offensive, or sexual content from the training data, as well as applying filters to prevent named people from being involved in image generation. 

Bard with Gemini Pro will also be enhanced with this new update. The update page states that “Bard will be far more capable at things like understanding, summarising, reasoning, brainstorming, writing, and planning.” This upgrade to Bard’s AI is most likely what allowed Google to offer the free AI image generator tool in the first place.

Google Bard Image Generation

We created the first two images with a simple prompt and clicked “Generate more” to see the second two. (Image credit: Future)

Google Bard is taking over 

Google has been going all in with its Bard AI, even using it to slowly replace Google Assistant, which was the tech giant’s previous answer to Apple’s Siri. It was discovered that Google changed the greeting pop-up for various devices from ‘I’m Assistant with Bard’ to simply ‘I’m Bard.’

It was also announced that Google would be removing 17 features from Assistant in the coming weeks, including playing audiobooks on Google Play Books through voice command and asking for information about your contacts. The official announcement even implied that more features would be removed.


This scary AI breakthrough means you can run but not hide – how AI can guess your location from a single image

There’s no question that artificial intelligence (AI) is in the process of upending society, with ChatGPT and its rivals already changing the way we live our lives. But a new AI project has just emerged that can pinpoint the location of where almost any photo was taken – and it has the potential to become a privacy nightmare.

The project, dubbed Predicting Image Geolocations (or PIGEON for short), was created by three students at Stanford University and was designed to help find where images from Google Street View were taken. But when fed personal photos it had never seen before, it was still able to find their locations, usually with a high degree of accuracy.

Jay Stanley of the American Civil Liberties Union says that has serious privacy implications, including government surveillance, corporate tracking and stalking, according to NPR. For instance, a government could use PIGEON to find dissidents or see whether you have visited places it disapproves of. Or a stalker could employ it to work out where a potential victim lives. In the wrong hands, this kind of tech could wreak havoc.

Motivated by those concerns, the student creators have decided against releasing the tech to the wider world. But as Stanley points out, that might not be the end of the matter: “The fact that this was done as a student project makes you wonder what could be done by, for example, Google.”

A double-edged sword

Google Maps

(Image credit: Google)

Before we start getting the pitchforks ready, it’s worth remembering that this technology might also have a range of positive uses, if deployed responsibly. For instance, it could be used to identify places in need of roadworks or other maintenance. Or it could help you plan a holiday: where in the world could you go to see landscapes like those in your photos? There are other uses, too, from education to monitoring biodiversity.

Like many recent advances in AI, it’s a double-edged sword. Generative AI can be used to help a programmer debug code to great effect, but could also be used by a hacker to refine their malware. It could help you drum up ideas for a novel, but might assist someone who wants to cheat on their college coursework.

But anything that helps identify a person’s location in this way could be extremely problematic in terms of personal privacy – and have big ramifications for social media. As Stanley argued, it’s long been possible to remove geolocation data from photos before you upload them. Now, that might not matter anymore.

What’s clear is that some sort of regulation is desperately needed to prevent wider abuses, while the companies making AI tech must work to prevent damage caused by their products. Until that happens, it’s likely we’ll continue to see concerns raised over AI and its abilities.


Microsoft reins in Bing AI’s Image Creator – and the results don’t make much sense

You may have noticed that Bing AI got a big upgrade for its image creation tool last week (among other recent improvements), but it appears that after having taken this sizeable step forward, Microsoft has now taken a step back.

In case you missed it, Bing’s image creation system was upgraded to a whole new version – Dall-E 3 – which is much more powerful. So much so that Microsoft noted the supercharged Dall-E 3 was generating a lot of interest and traffic, and so might be sluggish initially.

There’s another issue with Dall-E 3 though, because as Windows Central observed, Microsoft has considerably reined in the tool since its recent revamp.

Now, we were already made aware that the image creation tool would employ a ‘content moderation system’ to stop inappropriate pics being generated, but it seems the censorship imposed is harsher than expected. This might be a reaction to the kind of content Bing AI users have been trying to get the system to create.

As Windows Central points out, there has been a lot of controversy about an image created of Mickey Mouse carrying out the 9/11 attack (unsurprisingly).

The problem, though, is that beyond those kinds of extreme asks, as the article makes clear, some users are finding innocuous image creation requests being denied. Windows Central tried to get the chatbot to make an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms of using Bing AI.

Whereas last week, the article author noted that they could create violent zombie apocalypse scenarios featuring popular characters (that are copyrighted) with Bing AI not raising a complaint.


Analysis: Random censorship

The point is that the censorship here seems to be an overreaction – or at least that appears to be the case going by reports, we should add. Microsoft left the rules too slack in the initial implementation, it seems, but has now gone ahead and tightened things too much.

What really illustrates this is that Bing AI is even censoring itself, as highlighted by someone on Reddit. Bing Image Creator has a ‘surprise me’ button that generates a random image (the equivalent of Google’s ‘I’m feeling lucky’ button, if you will, that produces a random search). But here’s the kicker – the AI is going ahead, creating an image, and then censoring it immediately.

Well, we suppose that is a surprise, to be fair – and one that would seem to aptly demonstrate that Microsoft’s censorship of the Image Creator has maybe gone too far, limiting its usefulness at least to some extent. As we said at the outset, it’s a case of a step forward, then a quick step back.

Windows Central observes that it was able to replicate this scenario of Bing’s self-censorship, and that it’s not even a rare occurrence – it reportedly happens around a third of the time. It sounds like it’s time for Microsoft to do some more fine-tuning around this area, although in fairness, when new capabilities are rolled out, there are likely to be adjustments applied for some time – so perhaps that work could already be underway.

The danger of Microsoft erring too strongly on the ‘rather safe than sorry’ side of the equation is that this will limit the usefulness of a tool that, after all, is supposed to be about exploring creativity.

We’ve reached out to Microsoft to check what’s going on with Bing AI in this respect, and will update this story if we hear back.
