Wix Studio brings Figma designs into the fold with a new tool

Wix, one of the best website builders, has introduced a new tool that lets Figma designers import their work into Wix Studio more easily.

The tool, called Wix Studio Figma, is a plugin that allows designers to create dynamic web experiences more easily. The company claims Studio's robust built-in business solutions, along with its AI and agency tools, will help designers, agencies, and professionals save both time and resources while building out their projects.

Wix Studio is one of the best website builders for agencies, allowing them to build highly customizable and visually appealing websites with ease. Besides advanced design tools and responsive design, Wix Studio allows for code integration and comes with various collaboration features. Furthermore, it provides a wide array of professional templates and design assets, as well as a range of SEO and marketing tools.

Wix Studio launched in 2023 and includes a newly designed development and creation editor, multi-site management workspaces, and access to new monetization opportunities.

Streamlining production

“We are thrilled to present the new plugin to the design community,” said Gali Erez, Head of Product at Wix Studio Editor. “With its innovative features and intuitive interface the plugin empowers users to craft captivating designs, and swiftly streamline the path from design to production. This efficiency enhances their design and development experience and ultimately drives conversions.”

Figma is a collaborative web application for interface design and prototyping. It is popular among designers and developers thanks to its ability to facilitate real-time collaboration, and since it is cloud-based, professionals can access their work from any device with an internet connection.

Figma combines vector graphics editing and prototyping capabilities, allowing designers to create and iterate on user interfaces efficiently. It supports features such as component libraries and design systems.

Its intuitive interface and robust tools also make Figma a good fit for both beginners and expert designers.


AI-generated movies will be here sooner than you think – and this new Google DeepMind tool proves it

AI video generators like OpenAI's Sora, Luma AI's Dream Machine, and Runway Gen-3 Alpha have been stealing the headlines lately, but a new Google DeepMind tool could fix the one weakness they all share – a lack of accompanying audio.

A Google DeepMind post has revealed a new video-to-audio (or 'V2A') tool that uses a combination of pixels and text prompts to automatically generate soundtracks and soundscapes for AI-generated videos. In short, it's another big step toward the creation of fully automated movie scenes.

As you can see in the videos below, this V2A tech can combine with AI video generators (including Google's Veo) to create an atmospheric score, timely sound effects, or even dialogue that Google DeepMind says “matches the characters and tone of a video”.

Creators aren't just stuck with one audio option either – DeepMind's new V2A tool can apparently generate an "unlimited number of soundtracks for any video input", which means you can nudge it towards your desired outcome with a few simple text prompts.

Google says its tool stands out from rival tech thanks to its ability to generate audio purely based on pixels – giving it a guiding text prompt is apparently optional. But DeepMind is also very aware of the major potential for misuse and deepfakes, which is why this V2A tool is being ringfenced as a research project – for now.

DeepMind says that "before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing". It will certainly need to be rigorous, because the ten short video examples show that the tech has explosive potential for both good and bad.

The potential for amateur filmmaking and animation is huge, as shown by the 'horror' clip below and one for a cartoon baby dinosaur. A Blade Runner-esque scene (below) of cars skidding through a city to an electronic music soundtrack also suggests how the tech could drastically reduce budgets for sci-fi movies.

Concerned creators will at least take some comfort from the obvious dialogue limitations shown in the 'Claymation family' video. But if the last year has taught us anything, it's that DeepMind's V2A tech will only improve drastically from here.

Where we're going, we won't need voice actors

The combination of AI-generated videos with AI-created soundtracks and sound effects is a game-changer on many levels – and adds another dimension to an arms race that was already white hot.

OpenAI has already said that it has plans to add audio to its Sora video generator, which is due to launch later this year. But DeepMind's new V2A tool shows that the tech is already at an advanced stage and can create audio purely from video, rather than needing endless prompting.

DeepMind's tool uses a diffusion model that combines information taken from the video's pixels and the user's text prompts, then outputs compressed audio that is decoded into an audio waveform. It was apparently trained on a combination of video, audio, and AI-generated annotations.
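DeepMind hasn't published code or architectural details beyond that description, but the data flow is easy to picture. Below is a minimal, purely illustrative Python sketch of that pipeline shape – every function is a hypothetical stand-in, not DeepMind's model:

```python
# Purely illustrative sketch of a video-to-audio (V2A) diffusion pipeline.
# Every function here is a hypothetical stand-in; DeepMind has not released
# its architecture or code. It only mirrors the data flow described above:
# pixels + an optional text prompt -> compressed audio -> decoded waveform.
import numpy as np

rng = np.random.default_rng(0)

def encode_video(frames: np.ndarray) -> np.ndarray:
    """Stand-in visual encoder: average frames into one feature vector."""
    return frames.reshape(frames.shape[0], -1).mean(axis=0)

def encode_text(prompt: str) -> np.ndarray:
    """Stand-in text encoder: hash characters into a fixed-size vector."""
    vec = np.zeros(64)
    for i, byte in enumerate(prompt.encode()):
        vec[i % 64] += byte / 255.0
    return vec

def denoise_step(latent: np.ndarray, cond: np.ndarray, t: int) -> np.ndarray:
    """One toy reverse-diffusion step, nudging the latent toward the conditioning."""
    noise_estimate = latent - 0.1 * cond
    return latent - (1.0 / t) * noise_estimate

def decode_audio(latent: np.ndarray, num_samples: int = 16000) -> np.ndarray:
    """Stand-in decoder: upsample the compressed latent into a waveform."""
    return np.interp(np.linspace(0, latent.size - 1, num_samples),
                     np.arange(latent.size), latent)

# Dummy input: 8 frames of 16x16 grayscale video, plus an optional prompt.
frames = rng.random((8, 16, 16))
conditioning = np.resize(np.concatenate([encode_video(frames),
                                         encode_text("rain on a tin roof")]), 256)

latent = rng.standard_normal(256)     # start from pure noise
for t in range(50, 0, -1):            # iterative denoising
    latent = denoise_step(latent, conditioning, t)

waveform = decode_audio(latent)
print(waveform.shape)                 # (16000,): one second of 16 kHz audio
```

In the real system, the denoiser would be a large trained network and the decoder a neural audio codec; the sketch only shows how pixels and an optional prompt jointly condition the audio that gets generated.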

Exactly what content this V2A tool was trained on isn't clear, but Google clearly has a potentially huge advantage in owning the world's biggest video-sharing platform, YouTube. YouTube's terms of service aren't completely clear on how its videos might be used to train AI, but YouTube CEO Neal Mohan recently told Bloomberg that some creators have contracts that allow their content to be used for training AI models.

Clearly, the tech still has some limitations with dialogue and it's still a long way from producing a Hollywood-ready finished article. But it's already a potentially powerful tool for storyboarding and amateur filmmakers, and hot competition with the likes of OpenAI means it's only going to improve rapidly from here.


Logitech’s new MX Ink stylus might be a dream art tool for your Meta Quest headset

Logitech has recently unveiled its first mixed reality stylus, and it's exclusive to the Meta Quest series. Known as the MX Ink, it's designed to give people a more precise way to create and draw when wearing a Meta headset. While you can utilize the native controllers for content creation, they simply don’t offer the same level of accuracy as a stylus. 

One of the first things you'll notice looking at the MX Ink is that it's quite large, resembling a marker more than a pen. It measures 6.46 x 0.72 inches (164 mm x 18.2 mm) and weighs a little over an ounce (28 grams).

By comparison, the Apple Pencil Pro measures 6.53 x 0.35 inches (166 mm x 8.9 mm) and weighs 0.68 ounces (19.15 grams). Logitech’s MX Ink has four buttons in total: three near the front and one in the back.  

The frontmost button lets you grab objects in the mixed reality space to drag around, while the middle option allows users to alter the pen’s pressure sensitivity. Behind that is an Options button for configuring the stylus. Lastly, the button all the way at the end gives access to the headset’s Meta menu.

Logitech claims it developed the MX Ink to be "optimized for precision", as it reportedly has "low-latency on par with Meta Quest controllers." Thanks to its haptic feedback, the stylus offers an immersive experience meant to mimic what it's like to use an actual pen on paper.

Modes of operation

The MX Ink has two modes of operation. The first is 2D Tableau, which lets Meta Quest owners draw with the stylus on a flat surface. It's unknown if the mode works on any flat surface or if you need the MX Mat accessory.

Logitech’s demo shows someone illustrating on a wooden table, but the sheet of paper is sitting on the mat – not the natural surface. The mat appears crucial, but the same video shows a woman drawing on a canvas. 

Or perhaps she’s using the other operation mode – 3D Sculpting. This allows you to freely create just by drawing in the air. The same demo displays multiple use cases, from building a house in a 3D environment to tracing the outline of what appears to be a snowboarding boot.

Other notable features include swappable tips and a seven-hour battery life. You can recharge it by plugging in a USB-C cable, or purchase the MX Inkwell combo to get a charging dock for the stylus.

Supporting apps

The company states you can use the MX Ink and the paired Quest controllers simultaneously, and you won't be forced to disconnect them. It's important to note that the stylus is only compatible with the Meta Quest 2 and Quest 3 headsets. Logitech told RoadtoVR it won't work on the Quest Pro but didn't explain why that support is missing, and we've reached out to the company for comment.

Additionally, the pen doesn't work across the whole Quest library – just a handful of art apps for now, including Gravity Sketch, ShapesXR, and Arkio, though it's possible we could see more added to the list. Logitech is offering third-party developers the opportunity to integrate MX Ink into their apps by applying for a developer kit.

The MX Ink launches in late September 2024 for $129.99, or $169.99 for the Inkwell combo. You can sign up to receive notifications letting you know when it's available for purchase.

In the meantime, check out TechRadar's list of the best VR headsets for 2024.


Can your PC or Mac run on-device AI? This handy new Opera tool lets you find out

Opera wants to make it easy for everyday users to find out whether their PC or Mac can run AI locally, and to that end, has incorporated a tool into its browser.

When we talk about running AI locally, we mean on the device itself, using your system and its resources for the entire AI workload – in contrast to having your PC tap the cloud for the computing power to achieve the task at hand.

Running AI locally can be a demanding affair – particularly if you don’t have a modern CPU with a built-in NPU to accelerate AI workloads happening on your device – and so it’s pretty handy to have a benchmarking tool that tells you how capable your hardware is in terms of completing these on-device AI tasks effectively.

There is a catch, though: the 'Is your computer AI ready?' test is only available in the developer version of the Opera browser right now. So, if you want to give it a spin, you'll need to download that developer (test) version of the browser.

Once that’s done, you can get Opera to download an LLM (large language model) with which to run tests, and it checks the performance of your PC in various ways (tokens per second, first token latency, model load time and more).

If all that sounds like gobbledegook, it doesn’t really matter, as after running all these tests – which might take anything from just a few minutes to more like 20 – the tool will deliver a simple and clear assessment of whether your machine is ready for AI or not.

There’s an added nuance, mind: if you get the ‘ready for AI’ result then local performance is good, and ‘not AI ready’ is self-explanatory – you can forget running local AI tasks – but there’s a middle result of ‘AI functional.’ This means your device is capable of running AI tasks locally, but it might be rather slow, depending on what you’re doing.
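Opera hasn't published how it weighs these metrics, but the overall shape of such a benchmark is straightforward. Here's a hedged Python sketch of how tokens per second, first token latency, and model load time could be measured and mapped onto Opera's three verdicts – the generate_token stub and the tier thresholds are entirely made up for illustration:

```python
# Hedged sketch of how an 'AI ready' benchmark could compute its metrics.
# This is not Opera's code: generate_token() is a stand-in for an LLM
# decoding step, and the tier thresholds are invented for illustration.
import time

def generate_token() -> str:
    """Pretend each token takes 20 ms to decode."""
    time.sleep(0.02)
    return "tok"

def run_benchmark(num_tokens: int = 50) -> dict:
    load_start = time.perf_counter()
    time.sleep(0.3)                        # stand-in for loading model weights
    model_load_time = time.perf_counter() - load_start

    gen_start = time.perf_counter()
    generate_token()                       # first token
    first_token_latency = time.perf_counter() - gen_start

    for _ in range(num_tokens - 1):        # remaining tokens
        generate_token()
    tokens_per_second = num_tokens / (time.perf_counter() - gen_start)

    return {"model_load_s": round(model_load_time, 3),
            "first_token_latency_s": round(first_token_latency, 3),
            "tokens_per_s": round(tokens_per_second, 1)}

def classify(result: dict) -> str:
    """Map raw throughput onto the three verdicts (thresholds are made up)."""
    if result["tokens_per_s"] >= 15:
        return "ready for AI"
    if result["tokens_per_s"] >= 3:
        return "AI functional"             # runs, but may be slow
    return "not AI ready"

metrics = run_benchmark()
print(metrics, "->", classify(metrics))
```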

Opera AI Benchmark Result (Image credit: Opera)

There's more depth to these results that experts can explore if they wish, but it's great to get an at-a-glance estimation of your PC's on-device AI chops. It's also possible to download different (increasingly large) AI models to test with, with heftier versions catering for cutting-edge PCs with the latest hardware and NPUs.


Analysis: Why local AI processing is important

It’s great to have an easily accessible test that anyone can use to get a good idea of their PC’s processing chops for local AI work. Doing AI tasks locally, kept within the confines of the device, is obviously important for privacy – as you’re not sending any data off your machine into the cloud.

Furthermore, some AI features will use local processing partly, or indeed exclusively, and we’ve already seen the latter: Windows 11’s new cornerstone AI functionality for Copilot+ PCs, Recall, is a case in point, as it works totally on-device for security and privacy reasons. (Even so, it’s been causing a storm of controversy since it was announced by Microsoft, but that’s another story).

So, being able to easily gauge your PC's AI grunt is a useful capability to have, though right now, downloading the Opera developer version is probably not a hassle you'll want to go through. Still, we'd guess the feature will be inbound for the full version of Opera soon enough, so you likely won't have to wait long for it to arrive.

Opera is certainly getting serious about climbing the rankings of the best web browsers by leveraging AI, one of its latest moves being to draft in Google Gemini to help supercharge its Aria AI assistant.


Adobe Lightroom’s new Generative Remove AI tool makes Content-aware Fill feel basic – and gives you one less reason to use Photoshop

One of Adobe Lightroom's most used editing tools, Content-aware Fill, just got a serious upgrade in the AI-powered shape of Generative Remove. The Adobe Firefly-powered tool is branded "Lightroom's most powerful remove tool yet", and after a quick play ahead of its announcement, I'd have to agree.

Compared to Content-aware Fill, which will remain in Adobe's popular photo organizer and editor, Generative Remove is much more intelligent, plus it's non-destructive. 

As you can see from the gif below, Generative Remove is used to remove unwanted objects in your image, plus it works a treat for retouching. You simply brush over the area you'd like to edit – whether that's removing a photo bomber or something as simple as creases in clothing – and then the new tool creates a selection of smart edits with the object removed (or retouch applied) for you to pick your favorite from.

If I were to use Lightroom's existing Content-aware Fill for the same image in the gif below and in the same way, or even for a much smaller selection, it would sample parts of the model's orange jacket and hair and place them in the selection. I'd then need to repeatedly apply the tool to remove these new unwanted details, and the new area would increasingly become an artifact-ridden mess.

Adobe Lightroom Generative Remove tool (Image credit: Adobe)

Put simply, Lightroom's existing remove tool works okay for small selections but it regularly includes samples of parts of the image you don't want. Generative Remove is significantly faster and more effective for objects of all sizes than Content-aware Fill, plus it's non-destructive, creating a new layer that you can turn on and off.

From professionals wanting to speed up their workflow to casual users removing distant photo bombers, Generative Remove is next-level Lightroom editing, and it gives you one less reason to use Adobe Photoshop. It is set to be a popular tool for photographers of all skill levels needing to make quick removal and retouching edits.

Generative Remove is available now as an early access feature across all Lightroom platforms: mobile, desktop, iPad, web and Classic.

Adobe Lightroom Generative Remove tool (Image credit: Adobe)

Adobe also announced that its Lens Blur tool is being rolled out in full to Lightroom, with new automatic presets. As you can see in the gif above, presets include subtle, bubble, and geometric bokeh effects. For example, speckled and artificial light can be given a circular shape with the Lens Blur bubble effect.

Lens Blur is another AI tool, and it doesn't just apply a uniform-strength blur to the background – it uses 3D mapping to apply a different strength of blur based on how far away objects are in the background, for more believable results.
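Adobe hasn't shared how Lens Blur is implemented, but the core idea – blur strength driven by a per-pixel depth estimate – can be illustrated in a few lines. The following Python sketch is a toy stand-in, not Adobe's algorithm: it blends between progressively blurrier copies of an image according to a depth map:

```python
# Toy sketch of depth-aware background blur: blur strength grows with each
# pixel's depth value. This illustrates the general idea described above,
# not Adobe's actual Lens Blur implementation.
import numpy as np

def box_blur(img: np.ndarray) -> np.ndarray:
    """One 3x3 box-blur pass (edges padded by replication)."""
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def depth_aware_blur(img: np.ndarray, depth: np.ndarray, levels: int = 4) -> np.ndarray:
    """Blend between blurred copies, picked per pixel by depth (0=near, 1=far)."""
    stack = [img]
    for _ in range(levels - 1):
        stack.append(box_blur(stack[-1]))   # each level is blurrier than the last
    stack = np.stack(stack)                 # shape: (levels, H, W)

    idx = depth * (levels - 1)              # fractional blur level per pixel
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, levels - 1)
    frac = idx - lo
    rows, cols = np.indices(img.shape)
    return (1 - frac) * stack[lo, rows, cols] + frac * stack[hi, rows, cols]

# Toy grayscale image plus a synthetic depth map that increases left to right.
rng = np.random.default_rng(1)
image = rng.random((64, 64))
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))

result = depth_aware_blur(image, depth)
print(result.shape)   # (64, 64): sharp on the left, heavily blurred on the right
```

A production tool would estimate the depth map with a neural network and use a lens-shaped kernel for realistic bokeh, but the per-pixel depth-to-blur mapping is the same trick.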

It's another non-destructive edit, too, meaning that you can add to or remove from the selection if you're not happy with the strength of blur applied, or if background objects get missed first time around – for instance, it might mistake a lamp in the image above for a subject and not apply blur to it.

Having both the Generative Remove and Lens Blur AI tools to hand makes Lightroom more powerful than ever. Lens Blur is now generally available across the Lightroom ecosystem. There are other new tools added to Lightroom too, and you can find out more on the Adobe website.


Microsoft stoops to new low with ads in Windows 11, as PC Manager tool suggests your system needs ‘repairing’ if you don’t use Bing

As Windows 11 users are becoming accustomed to more ads in key places of the operating system, Microsoft is seemingly experimenting with adding yet another advert covertly presented as a recommendation. This time the software giant is trying out having PC Manager suggest that you 'repair' your system by reverting to Microsoft's default search engine, Bing.

PC Manager is a Microsoft utility available in some regions that enables you to get a handle on system storage management and file management, and it can help optimize your PC's performance. Generally speaking, it's considered a pretty good app, but as with a lot of its products, that's not enough for Microsoft – it's also increasingly in the business of turning various products and features into ad vehicles (especially if they’re free!). 

Windows 11 has already seen ads introduced in parts of the interface like File Explorer, the Settings app, and, most recently, the Start menu. That roster is being expanded, as Windows Latest discovered, to include PC Manager, which recently got the addition of a 'Repair Tips' section and a Files Cleanup feature (which can detect duplicate files and more besides).

Looking for potential repairs? Microsoft has a suggestion

The advert was discovered when Windows Latest checked out the new 'Repair Tips' section of the PC Manager app, which suggested that the PC be 'repaired' by switching the default search engine back to Bing (which is the Windows pre-installed default) from Google Search (or whatever other search engine is set as the default).

People who use Windows have picked up on Microsoft's persistence when it comes to ads – for example, the 'promoted' third-party ads beginning to show up in the Start menu's 'Recommended' section. The suggestion that switching back to Bing is a 'repair' is a new low, though, as it's effectively implying that using another search engine is a fault with your PC. Switching to Bing search is not going to improve your PC's performance, is it? Hardly.

As Windows Latest reports, the PC Manager app was developed by Microsoft engineers in China, and it's possible that the company may drop this odd way of pushing Bing if the software is rolled out more broadly elsewhere – it may come to the US eventually.

Bing Search (Image credit: Getty Images)

Letting Edge, Bing, and PC Manager stand on their own merits

From what we've seen so far, aside from this advertising push that's been witnessed across Windows 11 more broadly, PC Manager looks like a good app to help you better manage your PC's resources and files, and Windows Latest recommends it as a seemingly secure performance-boosting app. This makes sense as it's developed by Microsoft itself, which has an interest in ensuring that its apps are as secure as they can be. 

Microsoft Edge, the default browser pre-installed on Windows machines, and Bing Search aren't bad products by any means – they are solid alternatives to Google's own Chrome and Search. Edge has recently seen a whole host of new useful features like a sidebar, sleeping tabs, and an immersive reader. That said, there are parts of the browser that some people consider 'bloatware' and unnecessary clutter. For example, some folks don't currently see much purpose in using Microsoft's AI assistant, Copilot, which is integrated into Edge. 

Bing Search and Edge have enough merits of their own to be considered viable alternatives to the industry leaders, and speaking personally, this kind of repeated prodding doesn't convince me to try them. If anything, it can push people away, and tech companies would do well to remember that what wins people over are products that work well. It's as simple as that – let the product speak for itself, and the user base will grow.


Google Workspace is getting a talkative tool to help you collaborate better – meet your new colleague, AI Teammate

If your workplace uses the Google Workspace productivity suite of apps, then you might soon get a new teammate – an AI Teammate, that is.

In its mission to improve our real-life collaboration, Google has created a tool to pool shared documents, conversations, comments, chats, emails, and more into a single virtual generative AI chatbot: the AI Teammate.

Powered by Google's own Gemini generative AI model, AI Teammate is designed to help you concentrate more on your role within your organization and leave the tracking and tackling of collective assignments and tasks to the AI tool.

This virtual colleague will have its own identity, its own Workspace account, and a specifically defined role and objective to fulfil.

When AI Teammate is set up, it can be given a custom name and configured in other ways, including its job role, a description of how it's expected to help your team, and the specific tasks it's supposed to carry out.

In a demonstration of an example AI Teammate at I/O 2024, Google showed a virtual teammate named 'Chip' who had access to a group chat of those involved in presenting the I/O 2024 demo. The presenter, Tony Vincent, explained that Chip was privy to a multitude of chat rooms that had been set up as part of preparing for the big event. 

Vincent then asked Chip if the I/O storyboards had been approved – the type of question you'd normally ask a colleague – and Chip was able to answer because it could analyze all of the conversations it had been keyed into.

As AI Teammate is added to more threads, files, chats, emails, and other shared items, it builds a collective memory of the work shared in your organization. 

Google Workspace (Image credit: Google)

In a second example, Vincent shows another chatroom for an upcoming product release and asks the room if the team is on track for the product's launch. In response, AI Teammate searches through everything it has access to – Drive, chat messages, and Gmail – and synthesizes all of the relevant information it finds to form its response.

When it's ready (which looks like about a second or less), AI Teammate delivers a digestible summary of its findings. It flags a potential issue to make the team aware, then gives a timeline summary showing the stages of the product's development.

As the demo takes place in a group space, Vincent noted that anyone can follow along and jump in at any point – for example, to ask a question about the summary, or to have AI Teammate transfer its findings into a Doc file.
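Google hasn't detailed AI Teammate's internals, but the demo behavior matches a familiar retrieve-then-synthesize pattern: score shared items against the question, then hand the top hits to a language model to summarize. Here's a toy Python sketch of that pattern – the data and the summarize() stub are invented for illustration, and the real feature runs on Gemini:

```python
# Toy sketch of the retrieve-then-synthesize pattern the demo suggests.
# The shared items and the summarize() stub are invented for illustration;
# the real feature runs Gemini over Drive, Chat, and Gmail content.
import re

shared_items = [
    ("chat",  "Packaging vendor confirmed delivery is slipping by one week."),
    ("gmail", "Launch checklist: marketing assets approved on May 2."),
    ("drive", "Timeline doc: beta ends May 10, launch targeted for June 1."),
]

def score(question: str, text: str) -> int:
    """Crude relevance measure: count words the question and item share."""
    words = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    return len(words(question) & words(text))

def summarize(snippets: list) -> str:
    """Stand-in for the LLM call that would synthesize a readable answer."""
    return " / ".join(snippets)

question = "Is the team on track for the product launch?"
top_hits = sorted(shared_items, key=lambda item: score(question, item[1]),
                  reverse=True)[:2]
print(summarize([text for _, text in top_hits]))
```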

AI Teammate is as useful as you customize it to be, and Google promises it can make your collaborative work seamless, given that it's integrated into the host of existing Google products many of us are already used to.


Google reveals new video-generation AI tool, Veo, which it claims is the ‘most capable’ yet – and even Donald Glover loves it

Google has unveiled its latest video-generation AI tool, named Veo, at its Google I/O live event. Veo is described as offering “improved consistency, quality, and output resolution” compared to previous models.

Generating video content with AI is nothing new; tools like Synthesia, Colossyan, and Lumiere have been around for a little while now, riding the wave of generative AI's current popularity. Veo is only the latest offering, but it promises to deliver a more advanced video-generation experience than ever before.

Donald Glover invited Google to his creative studio at Gilga Farm, California, to make a short film together. (Image credit: Google)

To showcase Veo, Google recruited a gang of software engineers and film creatives, led by actor, musician, writer, and director Donald Glover (of Community and Atlanta fame) to produce a short film together. The film wasn't actually shown at I/O, but Google promises that it's “coming soon”.

As someone who is simultaneously dubious of generative AI in the arts and also a big fan of Glover's work (Awaken, My Love! is in my personal top five albums of all time), I'm cautiously excited to see it.

Eye spy

Glover praises Veo's capabilities on the basis of speed: this isn't about replacing human ideas, but rather a tool that creatives can use to "make mistakes faster", as Glover puts it.

The flexibility of Veo's prompt reading is a key point here. It's capable of understanding prompts in text, image, or video format, paying attention to important details like cinematic style, camera positioning (for example, a bird's-eye-view shot or a fast tracking shot), time elapsed on camera, and lighting types. It also has an improved capability to accurately and consistently render objects and how they interact with their surroundings.

Google DeepMind CEO Demis Hassabis demonstrated this with a clip of a car speeding through a dystopian cyberpunk city.

The more detail you provide in your prompt material, the better the output becomes. (Image credit: Google)

It can also be used for things like storyboarding and editing, potentially augmenting the work of existing filmmakers. While working with Glover, Google DeepMind research scientist Kory Mathewson explains how Veo allows creatives to “visualize things on a timescale that's ten or a hundred times faster than before”, accelerating the creative process by using generative AI for planning purposes.

Veo will be debuting as part of a new experimental tool called VideoFX, which will be available soon for beta testers in Google Labs.


Google I/O showcases new ‘Ask Photos’ tool, powered by AI – but it honestly scares me a little

At the Google I/O 2024 keynote today, CEO Sundar Pichai debuted a new feature for the nine-year-old Google Photos app: 'Ask Photos', an AI-powered tool that acts as an augmented search function for your photos.

The goal here is to make finding specific photos faster and easier. You ask a question – Pichai's example is 'what's my license plate number' – and the app uses AI to scan through your photos and provide a useful answer. In this case, it isolates the car that appears the most, then presents you with whichever photo shows the number plate most clearly.
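Google hasn't said how Ask Photos ranks results, but the license plate example implies a simple two-step selection: find the car that appears most often, then pick its clearest photo. Here's a toy Python sketch of that logic, with entirely fabricated detection data – the real feature relies on Gemini's image understanding rather than precomputed scores:

```python
# Toy sketch of the selection logic the license plate demo implies: find the
# car that appears most often across your library, then return its clearest
# photo. The detection tuples below are fabricated for illustration.
from collections import Counter

# (photo_id, car_id, plate_clarity) tuples a vision model might emit.
detections = [
    ("img1", "car_a", 0.2),
    ("img2", "car_a", 0.9),
    ("img3", "car_b", 0.7),
    ("img4", "car_a", 0.5),
]

# Step 1: the car that appears the most is probably yours.
my_car, _ = Counter(car for _, car, _ in detections).most_common(1)[0]

# Step 2: of its photos, pick the one where the plate is clearest.
best = max((d for d in detections if d[1] == my_car), key=lambda d: d[2])
print(best[0])  # img2
```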

I really want to know if this is a Google employee’s actual child or if it’s a Gemini-generated kid… (Image credit: Google)

It can reportedly handle more in-depth queries, too: Pichai went on to explain that if your hypothetical daughter Lucia has been learning to swim, you could ask the app to 'show me how Lucia's swimming has progressed', and it'll present you with a slideshow showcasing Lucia's progression. The AI (powered by Google's Gemini model) is capable of identifying the context of images, such as differentiating between swimming in a pool and snorkeling in the ocean, and even highlighting the dates on photos of her swimming certificates.

While the Photos app already had a search function, it was fairly rudimentary, only really capable of identifying text within images and retrieving photos from selected dates and locations. 

Ask Photos is apparently “an experimental feature” that will start to roll out “soon”, and it could get more features in the future. As it is, it's a seriously impressive upgrade – so why am I terrified of it?

Eye spy

A major concern surrounding AI models is data security. Gemini is a predominantly cloud-based AI tool (its models are simply too large to run locally on your device), which introduces a potential security vulnerability, as your data has to be sent to an external server via the internet – a flaw that doesn't exist for on-device AI tools.

Ask Photos is powerful enough to not only register important personal details from your camera roll, but also understand the context behind them. In other words, the Photos app – perhaps one of the most innocuous apps on your Android phone's home screen – just became the app that potentially knows more about your life than any other.

I can't be the only person who saw this revealed at Google I/O and immediately thought 'oh, this sounds like an identity thief's dream'. How many of us have taken a photo of a passport or ID to complete an online sign-up? If malicious actors gain remote access to your phone or are able to intercept your Ask Photos queries, they could potentially take better advantage of your photo library than ever before.

Google says it's guarding against this kind of scenario, stating that “The information in your photos can be deeply personal, and we take the responsibility of protecting it very seriously. Your personal data in Google Photos is never used for ads. And people will not review your conversations and personal data in Ask Photos, except in rare cases to address abuse or harm.”

It continues that “We also don't train any generative AI product outside of Google Photos on this personal data, including other Gemini models and products. As always, all your data in Google Photos is protected with our industry-leading security measures.”

So, nothing to worry about? We'll see. But quite frankly… I don't need an AI to help me manage my photo library anyway. Honestly Google, it really isn't that hard to make some folders.


OpenAI is working on a new tool to help you spot AI-generated images and protect you from deepfakes

You've probably noticed a few AI-generated images sprinkled throughout your social media feeds – and there are likely a few you've scrolled right past that slipped your keen eyes.

For those of us who have been immersed in the world of generative AI, spotting AI images is a little easier, as you develop a mental checklist of what to look out for.

However, as the technology gets better and better, it is going to get a lot harder to tell. To solve this, OpenAI is developing new methods to track AI-generated images and prove what has and has not been artificially generated.

According to a blog post, OpenAI’s new proposed methods will add a tamper-resistant ‘watermark’ that will tag content with invisible ‘stickers.’ So, if an image is generated with OpenAI’s DALL-E generator, the classifier will flag it even if the image is warped or saturated.
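OpenAI hasn't published the details of its watermarking scheme, and unlike the toy below, it's described as tamper-resistant. Still, the basic idea of an invisible provenance tag can be illustrated with classic least-significant-bit (LSB) embedding – treat this Python sketch as a conceptual demo only, since LSB marks survive neither warping nor recompression:

```python
# Conceptual demo of an invisible image watermark via least-significant-bit
# (LSB) embedding. This is NOT OpenAI's method: its watermark is described
# as tamper-resistant, whereas an LSB mark is destroyed by warping or
# recompression. The tag contents are hypothetical.
import numpy as np

TAG = np.frombuffer(b"DALL-E", dtype=np.uint8)   # hypothetical provenance tag

def embed(pixels: np.ndarray, tag: np.ndarray) -> np.ndarray:
    """Hide the tag's bits in the lowest bit of the first pixels."""
    bits = np.unpackbits(tag)
    flat = pixels.flatten()                      # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def detect(pixels: np.ndarray, tag: np.ndarray) -> bool:
    """Check whether the tag's bits are present in the lowest bits."""
    bits = np.unpackbits(tag)
    return bool(np.array_equal(pixels.flatten()[:bits.size] & 1, bits))

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

tagged = embed(image, TAG)
print(detect(tagged, TAG))   # True: watermark found
print(detect(image, TAG))    # almost certainly False: no watermark
```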

The blog post claims the tool will have around 98% accuracy when spotting images made with DALL-E. However, it will only flag 5-10% of pictures from other generators like Midjourney or Adobe Firefly.

So, it’s great for in-house images, but not so great for anything produced outside of OpenAI. While it may not be as impressive as one would hope in some respects, it’s a positive sign that OpenAI is starting to address the flood of AI images that are getting harder and harder to distinguish.

Okay, this may not seem like a big deal to some, as a lot of AI-generated images are either memes or high-concept art that are pretty harmless. That said, there's also a surge of people creating hyper-realistic fake photos of politicians, celebrities, people in their lives, and more besides, which could lead to misinformation being spread at an incredibly fast pace.

Hopefully, these kinds of countermeasures will keep getting better, giving us a much more accessible way to double-check the authenticity of the images we come across in our day-to-day lives.
