Microsoft quietly updates controversial Windows 11 Recall feature – but not with the changes that are really needed

Microsoft’s flagship AI feature for Copilot+ PCs, Recall, has been through the wringer lately, and at the risk of sounding like a hater – rightfully so.

In case you missed it, Recall takes screenshots every few seconds, building up a library of images you can search via AI. But the feature has serious issues on the privacy front – to the point that its launch was pulled, and Recall was banished back to the Windows Insider Program for further testing.

However, that hasn’t stopped Microsoft from quietly adding new features to Recall as the tech giant runs damage control around this whole controversy.

As discovered by well-known leaker Albacore and reported by Tom’s Hardware (via Neowin), there are a few chunky new bits of functionality hidden away in the latest Windows 11 preview build (in the Canary channel).

One of those is ‘screenray’, a utility that pops up to analyze what’s currently on the screen. Summoned via a keyboard shortcut, it allows the user to get extra information from Copilot about anything present on-screen, or access a translation of text in a foreign language.

Windows Recall screenshot

(Image credit: Tom’s Hardware )

While we have a limited understanding of the exact nature of this new tool, it does seem similar to the Reader feature in Safari that Apple introduced during WWDC – which leverages Apple Intelligence to scan a web page and translate, summarize, or add insight to whatever content is currently being browsed. Of course, this Windows 11 tool works across your entire system, not just in a browser.

Alongside this, Microsoft has implemented a revamped homepage design for Windows 11’s Recall feature. This means that when you fire up Recall, instead of being presented with a new snapshot, you get a grid of recent snapshots (there’s still a button to allow you to create a new snapshot – this just doesn’t happen by default anymore).

Also new is a ‘Topic’ section that organizes snapshots by themes, so you can group together related screenshots (for, say, Spotify) to make for easier searching.

Windows Recall screenshot

(Image credit: Tom’s Hardware )

Finally, Windows Recall also has better integration with Copilot in this new preview build. Clicking on a snapshot will produce a drop-down menu with context-sensitive choices, so you can get Copilot to copy something, open it in an app, or if it’s an image, find pictures in the same vein, or create a similar image. All the standard Copilot options, essentially.

While these new additions to the controversial feature seem useful, I’m finding it hard to get past how bizarre the whole feature feels in the first place. I’m sure I won’t be the only one, either, and with all the concerns raised about Recall in recent times, Microsoft has a lot of work to do. It’ll definitely take a lot more to get me on board than a homepage redesign and this new screenray functionality.

For now, Windows Recall lives in the Windows Insider Program, where it’ll be tinkered with and tested for quite some time, most likely, before Microsoft dares try to launch it again. Whatever happens, when the feature hits release, Microsoft needs to make sure it gets things right this time around, and that means working on privacy and security as an absolute priority.


TechRadar – All the latest technology news

Read More

Apple is forging a path towards more ethical generative AI – something sorely needed in today’s AI-powered world

Copyright is something of a minefield right now when it comes to AI, and there’s a new report claiming that Apple’s generative AI – specifically its ‘Ajax’ large language model (LLM) – may be one of the only ones to have been both legally and ethically trained. It’s claimed that Apple is trying to uphold privacy and legality standards by adopting innovative training methods. 

Copyright law in the age of generative AI is difficult to navigate, and it’s becoming increasingly important as AI tools become more commonplace. One of the most glaring issues that comes up, again and again, is that many companies train their large language models (LLMs) using copyrighted works, typically not disclosing whether they license that training material. Sometimes, the outputs of these models include entire sections of copyright-protected works. 

The justification some of these companies give for training their LLMs on copyrighted material is that, much like humans, these models need a substantial amount of information (called training data) to learn and generate coherent and convincing responses – and as far as these companies are concerned, copyrighted materials are fair game.

Many critics of generative AI consider it copyright infringement when tech companies use copyrighted works in the training and output of LLMs without explicit agreements with copyright holders or their representatives. Still, this criticism hasn’t put tech companies off from doing exactly that – it’s assumed to be the case for most AI tools – and resentment towards companies in the generative AI space is growing.

OpenAI CEO Sam Altman attends the artificial intelligence Revolution Forum. New York, US - 13 Jan 2023

(Image credit: Shutterstock/photosince)

There has even been a growing number of legal challenges mounted in these tech companies’ direction. OpenAI and Microsoft were sued by the New York Times for copyright infringement back in December 2023, with the publisher accusing the two companies of training their LLMs on millions of New York Times articles. In September 2023, OpenAI and Microsoft were also sued by a number of prominent authors, including George R. R. Martin, Michael Connelly, and Jonathan Franzen. In July 2023, over 15,000 authors signed an open letter directed at companies such as Microsoft, OpenAI, Meta, and Alphabet, calling on leaders of the tech industry to protect writers by properly crediting and compensating authors when their works are used to train generative AI models.

In April of this year, The Register reported that Amazon was hit with a lawsuit by an ex-employee alleging she faced mistreatment, discrimination, and harassment – and in the process, she testified about her experience with issues of copyright infringement. She alleges that she was told to deliberately ignore and violate copyright law to make Amazon’s products more competitive, and that her supervisor told her that “everyone else is doing it” when it came to copyright violations. Apple Insider echoes this claim, stating that this appears to be an accepted industry standard.

As we’ve seen with many other novel technologies, legislation and ethical frameworks tend to arrive only after an initial delay – but copyright looks to be becoming an increasingly problematic aspect of generative AI, and one that the companies responsible for these models will have to respond to.

A man editing a photo on a Mac Mini

(Image credit: Apple)

The Apple approach to ethical AI training (that we know of so far)

It looks like at least one major tech player might be trying to take a more careful and considered route to avoid as many legal (and moral!) challenges as possible – and somewhat surprisingly, it’s Apple. According to Apple Insider, Apple has been diligently pursuing licenses for major news publications’ works when looking for AI training material. Back in December, Apple petitioned to license the archives of several major publishers to use as training material for its own LLM, known internally as Ajax.

It’s speculated that Ajax will power basic on-device functionality for future Apple products, with Apple potentially licensing software like Google’s Gemini for more advanced features, such as those requiring an internet connection. Apple Insider writes that this would allow Apple to sidestep certain copyright infringement liabilities, as Apple wouldn’t be responsible for copyright infringement by, say, Google Gemini.

A paper published in March detailed how Apple intends to train its in-house LLM: on a carefully chosen selection of images, image-text pairs, and text-based input. In its methods, Apple prioritized better image captioning and multi-step reasoning while paying close attention to preserving privacy. That last factor is made all the more achievable by the Ajax LLM being entirely on-device, with no internet connection required. There is a trade-off, though: this means Ajax won’t be able to check for copyrighted content and plagiarism itself, as it can’t connect to the online databases that store copyrighted material.

There is one other caveat that Apple Insider reveals, from sources familiar with Apple’s AI testing environments: there don’t currently seem to be many restrictions, if any, on users themselves inputting copyrighted material into on-device test environments. It’s also worth noting that Apple isn’t technically the only company taking a rights-first approach: Adobe’s AI art tool Firefly is also claimed to be completely copyright-compliant, so hopefully more AI startups will be wise enough to follow Apple and Adobe’s lead.

I personally welcome this approach from Apple, as I think human creativity is one of the most incredible capabilities we have, and it should be rewarded and celebrated – not fed to an AI. We’ll have to wait to learn more about what Apple’s rules around copyright and AI training look like, but I agree with Apple Insider’s assessment that this definitely sounds like an improvement – especially since some AIs have been documented regurgitating copyrighted material word-for-word. We can look forward to learning more about Apple’s generative AI efforts very soon, as they’re expected to be a key focus of its developer-focused software conference, WWDC 2024.


Your WhatsApp voice calls are getting a needed overhaul for iOS and Android

WhatsApp is testing a new in-call interface on both iOS and Android, which uses waveforms to show who’s speaking in a group call, alongside a more modern design.

The company has been working on improvements across the app for the last year, with multi-device support, a desktop app for Windows 11, and more to better rival other messaging apps.

But until now, calling in WhatsApp has been relegated to the standard interface that iOS and Android offer to third-party apps with call features.

However, in a new version currently available to beta testers, the refreshed calling interface looks set to benefit group calls more than one-to-one calls.

Analysis: Making your voice calls look much better

WhatsApp audio wave form call

(Image credit: WABetaInfo)

For years, the in-call interface on iOS and Android has barely improved since its first versions. While iOS 14 brought a compact view for calls, the full-screen view has remained relatively unchanged.

More users are choosing to make calls through apps like WhatsApp and Skype, especially group calls, which is why an update to WhatsApp’s calling interface is welcome.

Here, you get an elegant design that shows who’s speaking thanks to audio waveforms, alongside three options that are available to you at all times: mute, end the call, or switch to loudspeaker.

It's a modern design that only goes to show how much of an update the call screen in iOS and Android needs, especially for group calls.

Via WABetaInfo


Twitter begins to expand its downvote feature – but is it needed?

Twitter has been working on a way for users to downvote tweets – without making those downvotes public – since early 2020, and now the company is expanding the feature to more users across the world, not just in the US.

Downvoting was also previously confined to web users, but this wider test of the feature will include some iOS and Android users, who may start to see a downward-facing arrow on some tweets.

Downvoting won’t hide the tweet or let the tweeter, or your followers, know that you’ve downvoted it. Instead, it’s there to help Twitter refine the algorithm that curates the tweets you see. However, users aren’t convinced.

A 'hide tweet' option instead?

If you use Reddit, you’ll see the downvote button everywhere. It’s a major part of the site’s design, as it shows other users how a post has been received by the community.

But Twitter has gone down another avenue here, where downvotes seemingly exist only to help the company improve its service – which seems over-engineered.


Other companies, such as YouTube, have changed how they display dislikes – the option remains, but the number of dislikes is now hidden – and that move has also proved controversial. YouTube co-founder Jawed Karim spoke of his frustration at how the platform may decline after this change.

But with Twitter, this feels like a feature that doesn’t need to exist. While the company explains that it’s there to improve the content you see, there’s still the bigger problem of the harassment and bullying that many users have been subjected to.

Building a downvote button for the benefit of Twitter’s algorithm feels backward; instead, the company should add other features – and beef up existing ones – to help curb that harassment.

An ‘I don’t like this tweet’ option would be beneficial, alongside more streamlined ways to report abuse on the platform. A downvote button that benefits Twitter rather than the user is reason enough for the company to put the feature on pause, and to consider how it can better serve users instead of the algorithm.
