Facebook and Instagram will label fake AI images to stop misinfo from spreading

Meta will begin flagging AI-generated images on Facebook, Instagram, and Threads in an effort to uphold online transparency.

The tech giant already labels content made by its own Imagine AI engine with a visible watermark. Moving forward, it will do something similar for images from third-party generators such as OpenAI, Google, and Midjourney. It’s not yet clear exactly what these labels will look like, although the announcement post suggests they may simply consist of the words “AI Info” next to generated content. Meta states this design is not final, hinting that it could change once the update officially launches.

Facebook's new AI label

(Image credit: Meta)

In addition to visible labels, the company says it’s also working on tools to “identify invisible markers” in images from third-party generators. Imagine AI already does this by embedding watermarks into the metadata of its content; the idea is to include a unique tag that can’t easily be stripped out by editing tools. Meta states that other platforms plan to do the same, and it wants a system in place to detect the tagged metadata.

Audio and video labeling

So far, everything has centered around labeling images, but what about AI-generated audio and video? Google’s Lumiere is capable of creating incredibly realistic clips, and OpenAI is working on adding video creation to ChatGPT. Is there anything in place to detect these more complex forms of AI content? Well, sort of.

Meta admits there is currently no way for it to detect AI-generated audio and video at the same level as images. The technology just isn’t there yet. However, the industry is working “towards this capability”. Until then, the company is going to rely on the honor system. It’ll require users to disclose if the video clip or audio file they want to upload was produced or edited by artificial intelligence. Failure to do so will result in a “penalty”. What’s more, if a piece of media is so realistic that it runs the risk of tricking the public, Meta will attach “a more prominent label” offering important details.

Future updates

As for its own platforms, Meta is working on improving first-party tools as well. 

The company’s AI research lab FAIR is developing a new type of watermarking tech called Stable Signature. Because it’s apparently possible to remove invisible markers from the metadata of AI-generated content, Stable Signature is designed to stop that by making watermarks an integral part of the “image generation process”. On top of all this, Meta has begun training several LLMs (large language models) on its Community Standards so the AIs can determine whether certain pieces of content violate the policy.

Expect to see the social media labels rolling out within the coming months. The timing of the release should come as no surprise: 2024 is a major election year for many countries, most notably the United States. Meta is seeking to stop misinformation from spreading on its platforms as much as possible. 

We reached out to the company for more information on what kind of penalties a user may face if they don’t adequately mark their post and if it plans on marking images from a third-party source with a visible watermark. This story will be updated at a later time.

Until then, check out TechRadar's list of the best AI image generators for 2024.

TechRadar – All the latest technology news

These fake Blue Screen of Death mock-ups highlight a serious problem with Windows 11

Windows 11 getting a redesigned BSOD – the dreaded Blue Screen of Death that pops up when a PC crashes – might be a joke on X (formerly Twitter) right now, but it highlights a serious issue.

OK, 'joke' might be a strong word, but the BSOD mock-ups presented by Lucia Scarlet on X are certainly tongue-in-cheek, featuring colorful emojis which are rather cutesy – not what you really want to see when your PC has just crashed and burned.

That said, the overall theme of the design, giving the BSOD a more modern look, isn’t unwelcome, even if the emojis aren’t appropriate in our book.

However, comments in the threads of those tweets show that some folks are disappointed these aren’t real incoming redesigns for Windows 11. Others, meanwhile, appreciate the idea of a friendlier emoji, as opposed to the frowny face (a text-based one, mind) that has long been present on BSODs.

Analysis: The blue screen blues

That disappointment is likely, at least in part, a broader indicator of the level of dissatisfaction with the BSOD – particularly with regard to the lack of information the screen provides, and the shortfalls in the help that is supplied.

When a BSOD appears, it’s usually highly generic, and tells the Windows 11 (or Windows 10) user very little – you’ll read something like “a problem happened” with no elaboration on exactly what went wrong.

Cryptic stop codes – jumbles of hexadecimal digits that can pop up elsewhere in Windows 11, too – might be cited, or perhaps a techie reference to a DLL, none of which is likely to be a jot of help in discerning what actually misfired in your system.

Never mind visual redesigns – Microsoft improving the info and help provided with BSODs would be the biggest step forward that could be taken with these screens. We’ve witnessed one innovation in the form of the QR codes provided – as seen in the mock-ups above – but these were introduced way back in 2016, and haven’t progressed much in the best part of a decade, often linking through to information that isn’t fully relevant or up to date.

We feel there’s definitely more Microsoft could do to improve BSODs, and in fairness, a more modern touch for the visuals wouldn’t hurt – though there’s another thought that occurs. Should we still be getting full system lock-ups at this point in the evolution of desktop operating systems?

Ideally not, of course, but to be fair to Microsoft, BSODs are definitely a whole lot less common these days than in the past. For those who do encounter them, though, we have a handy Blue Screen of Death survival guide.


Amazon says even AI isn’t powerful enough to stop fake reviews

Amazon has renewed its war on fake reviews by developing new AI-powered tools to help tackle the problem, but the retail giant admits they aren't enough to solve the issue on their own.

In a new blog post, Dharmesh Mehta, Amazon's VP of Worldwide Selling Partner Services, writes that “we must work together to stop the fake review brokers that are the source of most fake reviews”, calling on the “private sector, consumer groups, and governments” to join forces against them.

What are these so-called 'fake review brokers'? Amazon says the brokers have become an industry in recent years, and have “evolved in an attempt to evade detection”. They work by approaching average consumers through websites, social media, or encrypted messaging services and getting them to write fake reviews “in exchange for money, free products, or other incentives”.

Amazon says it's using increasingly sophisticated AI tools and machine learning to stem the tide. These fraud-detection programs apparently analyze thousands of data points, including sign-in activity and review history, to help spot fake reviews. The figures involved are pretty staggering: Amazon says it blocked more than 200 million suspected fake reviews in 2022, and sued over 10,000 Facebook group administrators. 
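
To make the idea of multi-signal detection concrete, here's a deliberately toy sketch. The field names and thresholds are invented for illustration; Amazon's real systems reportedly weigh thousands of data points with machine learning, not three hand-written rules.

```python
# Toy multi-signal check for a suspicious review. All field names and
# thresholds are invented; real fraud-detection models combine thousands
# of weighted features rather than a handful of hand-picked rules.
def suspicion_signals(review: dict) -> int:
    """Count simple risk signals present in one review record."""
    signals = 0
    signals += review.get("reviews_last_24h", 0) > 5      # burst of posting
    signals += review.get("account_age_days", 9999) < 7   # brand-new account
    signals += not review.get("verified_purchase", True)  # no purchase on record
    return signals
```

A record tripping every rule scores 3, a clean one scores 0; a real system would turn such signals into a probability and combine it with network-level evidence like the broker groups Amazon describes.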

But Amazon's financial might and its increasingly sophisticated AI tools seemingly aren't enough to stop fake reviews. The retail giant says that because much of the misconduct happens outside of Amazon’s store “it can be more challenging to detect, prevent, and enforce these bad actors if we are acting alone”.

A hand holding an iPhone showing Amazon reviews

(Image credit: Amazon)

So Amazon has made a three-point plan to get some extra help. Firstly, it wants there to be more cross-industry sharing about fake review brokers and their various tactics and techniques. Secondly, it wants governments and regulators to use their authority more to take action against bad actors. 

And lastly, in a veiled nudge at Meta and other social media giants, it's asked that “all sites that could be used to facilitate this illicit activity should have robust notice and takedown processes”. Amazon wants to work with “these companies” (read: Facebook, WhatsApp, Signal, and more) to help improve their detection methods.

Whether or not these three steps are realistic remains to be seen, but the message from Amazon is clear – it doesn't think it can stem the tide of fake reviews on its own, and that's a problem for all of us. Until that improves, it's more important than ever to follow advice on how to spot fake Amazon reviews during Prime Day and other big shopping events.

How to spot fake Amazon reviews

A laptop screen on an orange background showing an Amazon review in the website ReviewMeta

Sites like ReviewMeta (above) can help you weed out suspicious reviews from an Amazon product’s rating (Image credit: Future)

We've been highlighting the problem of fake Amazon reviews for over a decade, and it's clear that the issue has become a game of whack-a-mole – while Amazon's tools have improved, the retail giant admits that the “tactics of fake review brokers have also evolved in an attempt to try to evade detection”.

This is a big problem for the average online shopper – in the UK, the consumer group Which? says that around one in seven reviews are fake. And that means you can be misled into buying poor-quality products.

Mehta's blog post is a reminder that even the world's biggest tech giants, and the latest AI technology, aren't powerful enough to stop fake reviews. And that means we all need to be increasingly savvy when shopping online.

As our in-depth guide to spotting fake Amazon reviews highlights, there are some simple red flags to look out for in product reviewers, including “overly promotional language, repeated reviews, and reviews for an entirely different product”. 

But there are also handy third-party tools like ReviewMeta and FakeSpot (which was recently bought by the Firefox owner Mozilla) that can help you use AI to detect fake reviews and scams. These allow you to paste in Amazon product URLs to get an analysis of the reviews, or use Chrome extensions for a quick check.

While Amazon's three-point call-out for outside help is understandable, recent history suggests that progress is going to be slow – which means we'll all need to remain on guard when doing our online shopping, particularly during big events like Amazon Prime Day 2023.


Warning: this fake Windows 11 upgrade is filled with malware

Security researchers have found a fake Windows 11 upgrade website that promises to offer a free Windows 11 install for PCs that don’t meet the minimum specifications, but actually installs data-stealing malware.

Windows 11 has some… interesting… requirements to run, and its most famous demand is for Trusted Platform Module (TPM) version 2.0 support. This has led to perfectly capable, and powerful, PCs and laptops being unable to upgrade to Windows 11, as they did not meet the minimum specifications.

Understandably, this annoyed people with relatively new hardware that couldn’t upgrade to the latest version of Windows, and many looked for ways of circumventing the TPM 2.0 requirement to install Windows 11 on their unsupported devices.

It’s these people that this new threat is targeting, as Bleeping Computer reports.

Looking legitimate

While the website’s address (URL) should be a red flag (we won’t mention it here), as it’s clearly not a Microsoft domain, the site itself does look official, using logos and artwork that make it difficult to tell apart from a real Microsoft page.

However, as researchers at security firm CloudSEK discovered by clicking the ‘Download now’ button, the website downloads an ISO file that contains malware.

This malware, called ‘Inno Stealer’, uses part of the Windows installer to create temporary files on an infected PC. These spawn processes that place four additional files on your PC, some of which contain scripts that disable various security features, including in the Windows registry. They also tweak the built-in Windows Defender antivirus, and remove other security products from Emsisoft and ESET.

Other files then run commands at the highest system privileges, while yet another file is created in the C:\Users\AppData\Roaming\Windows11InstallationAssistant folder – it’s this file, named Windows11InstallationAssistant.scr, that contains the data-stealing code. It harvests information from web browsers, as well as cryptocurrency wallets, stored passwords and files from the PC itself, and sends the stolen data to the attackers behind the malware.

Pretty nasty stuff.


Analysis: Be careful what you wish for

Hacker

(Image credit: Pixabay)

The scale of the infection here, and what it’s able to steal from you, is very scary, but the good news is that it’s easy to avoid.

No matter how desperate you are to install Windows 11, you should only download ISO files from sources you are absolutely certain are legitimate. While the makers of this malware have put in a lot of work to make the website look legitimate (like many so-called ‘phishing’ attacks), there are some tell-tale signs, such as the aforementioned URL, which highlights that this is not a genuine Microsoft website.

If your PC is eligible for a Windows 11 upgrade, you’ll be alerted via Windows Update, a tool that’s built into Windows operating systems. This is the safest way to ensure you are downloading and installing a genuine copy of Windows 11.

If your PC isn’t eligible, due to not meeting the TPM 2.0 requirements, then there are some safer ways to install Windows 11 without TPM anyway. But we don’t really recommend any of them, especially as Microsoft is making it harder to run Windows 11 on unsupported systems, which could mean you miss out on important updates, security fixes and features in the future.

Above all, however, you should never attempt to download and install a Windows 11 ISO file from any website that isn’t run by Microsoft itself.


Surfshark backpedals on Fake News feature after barrage of criticism

Following a surge in propaganda coinciding with Russia's invasion of Ukraine, the VPN provider Surfshark recently released a new fake news warning feature for its browser extensions for Chrome and Firefox.

At the time, Surfshark CEO Vytautas Kaziukonis explained why the company decided to release the feature in a press release, saying:

“The 21st century has shown that information might be sharper than the sword. It’s evident that today’s disinformation campaigns aim to distract, confuse, manipulate, and sow division, discord, and uncertainty in the community. Keeping in mind the intensifying propaganda, we decided to release a feature that would allow people to identify fake news websites easily.” 

Fake News Warning

(Image credit: Surfshark )

Surfshark's now-defunct fake news warning feature detected specific URLs from a list of untrustworthy websites taken from the site propornot.com and reviewed by the company's security experts. Sites known for spreading fake news were highlighted with a “YYY” symbol in Google and other search engines. While the feature was enabled by default, Surfshark users could toggle it off under the “VPN settings” menu in the company's browser extension.
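
Mechanically, a feature like this boils down to matching each search result's hostname against a blocklist. A minimal sketch of the idea follows; the domains and the flag_result helper are hypothetical, since Surfshark hasn't published its implementation.

```python
# Minimal sketch of blocklist-based result flagging, as a browser extension
# might do it. The blocklist entries and function name are hypothetical.
from urllib.parse import urlparse

BLOCKLIST = {"fake-news.example.com", "propaganda.example.org"}

def flag_result(url: str) -> str:
    """Prefix a search-result URL with "YYY " if its host is blocklisted."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    return ("YYY " + url) if host in BLOCKLIST else url
```

The fragility Surfshark ran into lives in the blocklist itself, not the matching code: exact-host lookups are cheap, but a list with too many entries flags legitimate sites just as confidently as fake ones.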

Suspending its fake news feature

Although Surfshark's intentions were good, the company announced on Twitter that it would temporarily suspend its fake news notification feature only a few days after launch, explaining that “the topic is more nuanced than initially thought”.

The problem with the feature is that in addition to being overwhelming for some users, it identified far too many sites as being a source of disinformation. Some of the sites that had a “YYY” next to them on Google's search results page included Drudge Report, Ron Paul's website, the alternative video platform BitChute and even WikiLeaks.

While consumers rely on VPN services to protect their privacy online and to get around geo-blocking, many of the users that responded to a separate post on Twitter by BitChute took issue with Surfshark limiting freedom of expression online. At the same time, BitChute pointed out that several major news stories in the last year were considered 'misinformation' before being revealed to be true.

Although Surfshark says the suspension is only temporary, its original blog post announcing the fake news notifications has already been removed from its site. We'll have to wait and see whether the company decides to bring the feature back, though based on the criticism it faced online, it likely won't be returning anytime soon.
