Microsoft is planning to make Copilot launch when Windows 11 starts – and it could spark the next user backlash

It looks like Microsoft is going to make Copilot, its new AI assistant, start up automatically on PCs with ‘wide screens’ running suitable versions of Windows 11. As it happens, most PC screens are wide, so it seems like Microsoft wants to get Copilot in front of as many users as possible. 

This potential change was spotted in a Windows preview build just released in the Dev Channel of the Windows Insider Program, Microsoft's official community of professionals and Windows enthusiasts who get early access to new Windows features and versions. Copilot's interface opening automatically when a PC boots is being trialed as part of preview build 23616, and it's worth pointing out that the feature is still in testing and may never make it into a finalized Windows 11 update rolled out to all users.

The feature is already being called controversial, which I understand – I get very annoyed when apps and features are sneakily enabled to start up automatically when I turn on my laptop. That said, Microsoft's Windows blog post does emphasize that users can turn the feature off, an option that will presumably remain if it makes it into a final Windows update. Even Windows Insiders in the Dev Channel may not see it at the moment, as the rollout of the preview build is ongoing.

Here’s what Microsoft has to say about this Copilot change: 

We are trying out opening Copilot automatically when Windows starts on widescreen devices with some Windows Insiders in the Dev Channel. This can be managed via Settings > Personalization > Copilot. Note that this is rolling out so not all Insiders in the Dev Channel will see this right away.
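The Settings toggle is the supported control for the auto-launch behavior. Separately, Windows 11 also honors a per-user 'Turn off Windows Copilot' policy that disables the assistant entirely – a much blunter switch than the auto-start toggle. Below is a minimal sketch that sets it with Python's winreg module, assuming the policy key used by current Windows 11 builds; treat it as illustrative rather than an officially supported method:

```python
# Hedged sketch: disable Windows Copilot entirely for the current user via
# the "Turn off Windows Copilot" policy. This is a broader control than the
# auto-start toggle in Settings > Personalization > Copilot, and it assumes
# the policy key used by Windows 11 23H2-era builds. Windows only.
import winreg

KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsCopilot"

def disable_copilot_for_current_user() -> None:
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, KEY_PATH)
    try:
        # 1 = policy enabled, i.e. Copilot turned off. Sign out and back in
        # (or restart explorer.exe) for the change to take effect.
        winreg.SetValueEx(key, "TurnOffWindowsCopilot", 0,
                          winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_copilot_for_current_user()
```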

Screenshot of Windows Copilot in use

(Image credit: Microsoft)

A frosty reception so far

Microsoft didn't specify which widescreens will qualify for the automatic launch – specifically, which aspect ratios are eligible. Windows Central asks whether "widescreen" means common 16:9 and 16:10 screens, or also ultrawide monitors with 21:9 ratios.
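An aspect ratio is just a resolution reduced to lowest terms, and a quick sketch (Python, with a few common panel resolutions) shows why the "widescreen" label is fuzzy – the "21:9" of ultrawide marketing isn't even a single exact ratio:

```python
# Reduce a resolution to its exact aspect ratio. The example resolutions
# are common panel sizes, used here to show how fuzzy "widescreen" is.
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    g = gcd(width, height)
    return f"{width // g}:{height // g}"

for w, h in [(1920, 1080), (1920, 1200), (2560, 1080), (3440, 1440)]:
    print(f"{w}x{h} -> {aspect_ratio(w, h)}")
# 1920x1080 -> 16:9
# 1920x1200 -> 16:10
# 2560x1080 -> 64:27  (marketed as "21:9")
# 3440x1440 -> 43:18  (also marketed as "21:9")
```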

So far, the change is being received as unnecessary and potentially annoying, especially as Copilot is currently pretty limited in what it can do. Windows Central speculates that this could be laying the groundwork for a more substantial Copilot upgrade, suspected to be in development for the next iteration of Windows (unofficially known as "Windows 12").

When Microsoft first presented its vision for Copilot, it pitched the assistant as an AI that would work across a multitude of apps and enhance users' productivity. If Copilot becomes as familiar (and popular) as Microsoft hopes, maybe there's a case for it opening as soon as your PC turns on.

At present, Copilot isn't there yet – and this move will probably just rub users the wrong way, especially if it ends up slowing down how quickly their PCs boot into Windows 11.


The AI backlash begins: artists could protect their work against plagiarism with this powerful tool

A team of researchers at the University of Chicago has created a tool aimed at helping online artists "fight back against AI companies" by inserting, in essence, poison pills into their original work.

Called Nightshade, after the family of toxic plants, the software is said to introduce poisonous pixels into digital art that mess with the way generative AIs interpret it. Models like Stable Diffusion work by scouring the internet, picking up as many images as they can to use as training data, and Nightshade exploits this "security vulnerability". As the MIT Technology Review explains, these "poisoned data samples can manipulate models into learning" the wrong thing – for example, to see a picture of a dog as a cat, or a car as a cow.
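To make the mechanism concrete, here's a heavily simplified toy sketch in Python. It is not Nightshade's actual algorithm – the real tool computes optimized, near-invisible perturbations – it only illustrates the shape of a poisoned training pair: an image that still reads as its true subject to a human, bundled with perturbed pixels and a caption that teach a scraper-fed model the wrong association:

```python
# Toy illustration of data poisoning, NOT Nightshade's actual algorithm.
# Nightshade optimizes its perturbations carefully; here we just add
# low-amplitude noise and attach a mismatched caption to show the idea.
import numpy as np
from PIL import Image

def poison_sample(image_path: str, true_label: str, decoy_label: str,
                  strength: float = 4.0) -> tuple[Image.Image, str]:
    """Return a subtly perturbed image paired with a misleading caption."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)

    # Small pseudo-random perturbation, clipped so the image still looks
    # like `true_label` to a human viewer.
    rng = np.random.default_rng(seed=0)
    noise = rng.uniform(-strength, strength, size=img.shape)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)

    # The caption a scraper would ingest now claims the decoy subject.
    caption = f"a photo of a {decoy_label}"
    return Image.fromarray(poisoned), caption

# Usage: a "dog" photo that a scraped dataset would record as a cat.
# poisoned_img, caption = poison_sample("dog.jpg", "dog", "cat")
```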

Poison tactics

As part of the testing phase, the team fed Stable Diffusion infected content and "then prompted it to create images of dogs". After 50 poisoned samples, the AI generated pictures of misshapen dogs with six legs; after 100, its output began to resemble cats; and once it was given 300, dogs became full-fledged cats. Below, you'll see the other trials.

Nightshade tests

(Image credit: University of Chicago/MIT Technology Review)

The report goes on to say that Nightshade also affects "tangentially related" ideas, because generative AIs are good "at making connections between words". Poisoning the word "dog" jumbles similar concepts such as puppy, husky, or wolf – and this extends to art styles as well.
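The "connections between words" point comes down to embedding spaces: models map words to vectors in which related concepts sit close together, so corrupting one concept drags its neighbors along. A toy illustration with made-up three-dimensional vectors (real embeddings are learned and have hundreds of dimensions):

```python
# Toy sketch: why poisoning "dog" can bleed into "puppy" but not "car".
# The vectors are invented for illustration; real text-to-image models use
# learned embeddings with hundreds of dimensions.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

embeddings = {
    "dog":   np.array([0.90, 0.80, 0.10]),
    "puppy": np.array([0.85, 0.82, 0.15]),  # near "dog" -> inherits damage
    "car":   np.array([0.10, 0.20, 0.95]),  # far from "dog" -> unaffected
}

print(cosine(embeddings["dog"], embeddings["puppy"]))  # ~0.99, very close
print(cosine(embeddings["dog"], embeddings["car"]))    # ~0.29, distant
```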

Nightshade's tangentially related samples

(Image credit: University of Chicago/MIT Technology Review)

It is possible for AI companies to remove the toxic pixels; however, as the MIT post points out, it is "very difficult to remove them". Developers would have to "find and delete each corrupted sample." To give you an idea of how tough that would be, a 1080p image has over two million pixels – and these models "are trained on billions of data samples." So imagine looking through a sea of pixels to find the handful messing with the AI engine.
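Some back-of-the-envelope arithmetic shows the scale of that search. The per-image figure follows from 1080p geometry; the dataset size is an assumption, roughly in line with large public scrapes such as LAION-5B:

```python
# Back-of-the-envelope: how many pixels would a full cleanup sweep touch?
pixels_per_1080p_image = 1920 * 1080      # = 2,073,600 pixels per image
assumed_dataset_size = 5_000_000_000      # assumption: ~LAION-5B-scale scrape

total_pixels = pixels_per_1080p_image * assumed_dataset_size
print(f"{pixels_per_1080p_image:,} pixels per image")
print(f"~{total_pixels:.1e} pixels across the dataset")  # ~1.0e+16
```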

At least, that’s the idea. Nightshade is still in the early stages. Currently, the tech “has been submitted for peer review at [the] computer security conference Usenix.” MIT Technology Review managed to get a sneak peek.

Future endeavors

We reached out to team lead Professor Ben Y. Zhao at the University of Chicago with several questions.

He told us the team does plan to "implement and release Nightshade for public use." It'll be part of Glaze as an "optional feature". Glaze, if you're not familiar, is another tool Zhao's team created that gives artists the ability to "mask their own personal style" and stop it from being adopted by artificial intelligence. He also hopes to make Nightshade open source, allowing others to make their own venom.

Additionally, we asked Professor Zhao if there are plans to create a Nightshade for video and literature. Right now, multiple literary authors are suing OpenAI, claiming the program is "using their copyrighted works without permission." He says developing toxic software for other kinds of work will be a big endeavor "since those domains are quite different from static images." The team has "no plans to tackle those, yet." Hopefully someday soon.

So far, initial reactions to Nightshade are positive. Junfeng Yang, a computer science professor at Columbia University, told Technology Review that the tool could make AI developers "respect artists' rights more" – and maybe even be willing to pay out royalties.

If you're interested in picking up illustration as a hobby, be sure to check out TechRadar's list of the best digital art and drawing software in 2023.


After backlash, Zoom ditches snooping Facebook code from iOS app

Following the revelation by Motherboard on Friday (March 27) that Zoom was sharing user information with Facebook via its iOS app, the popular video conferencing service has rolled out an update for iOS users.

Zoom has removed the data-sharing code from the app, telling Motherboard in a statement that the 'Login with Facebook' feature was implemented "in order to provide our users with another convenient way to access our platform". 

That login feature – found in several apps – is implemented using a Facebook SDK (software development kit) that connects the app to Facebook's Graph API (Application Programming Interface) when the app is launched. The SDK can then share information with Facebook, even if the user doesn't have a Facebook account.

Facebook requires app makers to disclose this data sharing to users in their privacy policies; however, Zoom's policy made no explicit mention that the social media company would have access to user data even without a linked account.

Stay updated

Zoom says it was "recently made aware that the Facebook SDK was collecting unnecessary device data"; it has since removed the code, and an updated version of the iOS app is now available on the App Store.

According to Zoom's statement to Motherboard, the app did not share any sensitive information, like user names, emails and phone numbers, but "included data about users’ devices such as the mobile OS type and version, the device time zone, device OS, device model and carrier, screen size, processor cores, and disk space". This coincides with Motherboard's findings from last week.

Motherboard has since tried out the updated iOS app and found that Zoom has, indeed, stopped sending data to Facebook when the app is launched. 

In the 'What's New' section of the app, Zoom says that, despite the removal of the Facebook SDK, users will still be able to log in with their Facebook accounts. Users are advised to update the app for the changes to take effect.

Zoom has issued an apology for the "oversight" and the company says it "takes its users’ privacy extremely seriously".
