Copilot AI’s mission to infiltrate the Windows 11 desktop appears to have advanced another step

Copilot is creeping into another corner of the Windows 11 interface, it seems, with the AI assistant now spotted in File Explorer’s right-click context menu.

This is still in test builds of Windows 11, mind, and not officially enabled yet. Windows Latest flagged up the change, which was first noticed by PhantomOfEarth, a well-known leaker on X (formerly Twitter) who previously picked up on clues that File Explorer integration was inbound for Copilot back in January 2024.


Now we can see how the context menu option will work: you right-click on a file and choose either to send it to Copilot – opening the AI’s panel with the file active, as if you’d dragged it in there – or to ‘summarize’ the file. The latter is the standard option for having Copilot summarize a document or PDF, for example.
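For the curious, classic right-click verbs in Windows live in the registry, under a file type’s shell key. The sketch below is purely illustrative – Microsoft wires Copilot’s entry up internally (modern Windows 11 menu items use the IExplorerCommand route), and the verb name and command here are our own hypothetical stand-ins – but it shows the basic mechanism of adding a context menu entry for all file types:

```python
# Minimal illustrative sketch (Windows-only): registering a classic
# right-click verb for all file types via the registry. "SummarizeDemo"
# and the echo command are hypothetical stand-ins; this is NOT how
# Microsoft ships Copilot's File Explorer entry.
import winreg

VERB_KEY = r"Software\Classes\*\shell\SummarizeDemo"

# The default value of the verb key is the label shown in the context menu
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, VERB_KEY) as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ, "Summarize (demo)")

# The 'command' subkey holds what runs on click; %1 is the clicked file's path
with winreg.CreateKey(winreg.HKEY_CURRENT_USER, VERB_KEY + r"\command") as key:
    winreg.SetValueEx(key, "", 0, winreg.REG_SZ,
                      r'cmd /c echo Would summarize: "%1" & pause')
```

Since this only touches HKEY_CURRENT_USER, deleting that key removes the menu entry again.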

Even though we’ve caught a glimpse of the menu now, it still doesn’t work (which is why it isn’t officially running in Windows 11 previews – yet). As Windows Latest makes clear, if you click to summarize, a summary isn’t provided.

Other options may be added down the line, too. In fact, it’s very likely we’ll see a ‘rewrite’ choice, for example, since rewriting a document is another task Copilot is currently capable of.


Analysis: Copilot’s future flight path

We can expect to see Copilot’s tendrils snaking into all parts of the Windows 11 interface eventually, which may not be to everyone’s tastes.

Those who don’t want to use the AI, or even see it in Windows at all, can ignore it, or turn off the functionality for the time being (one way or another) – but there will come a point where Copilot is the beating heart of Microsoft’s OS, and you’ll have to use AI, like it or not – although by that stage, the functionality provided will probably be advanced enough to be undeniably useful (or indeed indispensable).

This particular move is not a big intrusion into the desktop, though. We’re talking about an extra line in the right-click menu, and perhaps Microsoft will incorporate an option to turn it off as well. In the same way you can remove the Copilot icon from the taskbar if you wish, maybe there’ll be a way to switch all the AI’s functions off with an easy flick of a toggle. (Or an instruction, perhaps: “Copilot, remove yourself from all parts of my Windows 11 interface” – though we wouldn’t bank on it.)

As long as users have a choice, that’s a good thing, but as we’ve already said, in the future we feel there likely won’t be a choice as such because Copilot will pretty much become Windows, or the central pillar of the OS. Windows 2030 might just be called Copilot 2030.


Windows 11 users can’t get enough of Copilot, apparently – that’s why Microsoft supersized the AI’s panel

The Copilot panel in Windows 11 has been tinkered with a good deal in recent times, and one newer change switches up the way the panel appears by default – a change now accompanied by a prompt from Microsoft explaining why.

This was spotted by regular leaker Leopeva64 on X (formerly Twitter), who noted that the Copilot pane is now wider than it used to be, and opens as an overlay rather than in side-by-side view (a more compact form, where it nestles next to your active window).


Leopeva64 explains that the Copilot interface has been opening this way for a short while now; what’s new is the prompt Microsoft has added to explain why.

The ‘What’s new’ pop-up tells us that the recent change to make the panel wider – so there’s “more space to chat” – was down to Windows 11 user feedback requesting that additional real estate. It also notes, however, that there’s a button at the top of the panel you can click to switch back to the more compact side-by-side layout, if you wish.


Analysis: Copilot expansion

It’s useful to have an explanation of the recent move to change the default settings for how the AI opens, and by all accounts, this points to Windows 11 users favoring a larger Copilot panel. (Or at least some of them – presumably the majority of those who’ve given Microsoft feedback on Copilot’s interface.)

Certainly those who use Copilot quite a lot in Windows 11, engaging in longer sessions of queries, may welcome the AI assistant getting more screen space by default.

The truth is we can expect to see a lot more of Copilot, one way or another, going forward. By which we mean Microsoft is already testing the waters for having the AI assistant appear when Windows 11 first boots (in a limited fashion thus far, mind). Furthermore, there are clues that Copilot may be integrated into other parts of the Windows 11 interface (such as File Explorer). We can envisage further possibilities like being able to dock Copilot elsewhere (it sits on the right-hand side of the screen currently).

What we definitely don’t want to see are nudges or adverts to use Copilot, but sadly – yet somewhat predictably – this has been spotted in testing too (promoting Copilot Pro, the supercharged paid version of the AI, we should clarify).


Google Bard AI’s addition to Messages could change the way we text forever

Google’s experimental AI chatbot Bard may be coming to the Google Messages app in the near future – and it promises to bring some major upgrades to your phone-based chats. 

Tipster Assembler Debug uncovered the feature in the beta code of the Google Messages app. The AI-enhanced features are not yet available, and Assembler Debug states that the functionality doesn’t seem to work yet. However, according to leaked images, you’ll be able to use Bard to help you write text messages – anything from arranging a date to crafting a message calling in sick to your boss, alongside other difficult conversations.

Bard in Google Messages could also help to translate conversations and identify images, as well as explore interests. The code suggests it could provide book recommendations and recipe ideas, too.

Based on an examination of its code, the app is believed to use your location data and past chat information to help generate accurate replies. You can also give feedback on Bard’s responses with a thumbs up or down by long-pressing, as well as copy, forward, and favorite its answers, helping the AI learn whether its reply was appropriate.


The project codename “Penpal” was noted in a beta version (20240111_04_RC00) of the Google Messages app. According to 9to5Google’s analysis of the beta code, Bard can be accessed via the “New conversation” option, which lets you start a stand-alone chat with the AI.

You must be at least 18 years old to use it, and conversations with Bard in the Messages app are not end-to-end encrypted or treated as private, unlike messages exchanged with your contacts. So you might want to avoid sending personal or sensitive messages through the app when Bard is enabled.

Google states that chat histories are kept for 18 months to help enhance Bard and could be reviewed by a human, though no information remains associated with your account beyond three years. Google recommends not saying anything to Bard you wouldn’t want others to see. Conversations with Bard may be reviewed by Google, but are not accessible to other users. You can delete your chat history with Bard at any time, although it takes up to 72 hours for the data to be removed.

Echoes of Allo

Bard AI’s inclusion in the Messages app seems slightly reminiscent of Google’s past project Allo, which incorporated Google Assistant into both stand-alone requests and chats. That service was shut down in 2019, but it could live on in some way through this Bard integration.

When asked directly, Bard said: “While I can't say for certain right now, there are strong indications that I might become available with Google RCS messages in the future.”

Bard went on to say that integration with Google Messages was being tested as of March 2023, and that the functionality aligns with its capabilities to process language, generate text, answer questions, and summarize information, making it a natural fit for enhancing messaging.

The integration of AI into messaging apps reflects many companies’ eagerness to infuse AI technologies into their upcoming smartphones, with Samsung’s Galaxy AI features being a recent example. Google, however, is no stranger to AI tools on its phones, with features like Magic Eraser, Photo Unblur, and Live Translate all being staples of Pixel devices.

The implications of AI being added to messages are also intriguing: you may never know whether that thoughtful reply or fantastic date idea was thought up by a human or their AI assistant.

Bard’s inclusion in Google’s messaging app isn’t yet available and no release date has been announced – indeed, Google could still decide not to continue with the project, or it could go the Samsung route and make the functionality a subscription-based feature. All of this is speculation right now, though, and we’ll have to wait to see exactly how much Bard changes the Messages app in the future.


Microsoft reins in Bing AI’s Image Creator – and the results don’t make much sense

You may have noticed that Bing AI got a big upgrade for its image creation tool last week (among other recent improvements), but it appears that after having taken this sizeable step forward, Microsoft has now taken a step back.

In case you missed it, Bing’s image creation system was upgraded to a whole new version – DALL-E 3 – which is much more powerful. So much so that Microsoft noted the supercharged DALL-E 3 was generating a lot of interest and traffic, and so might be sluggish initially.

There’s another issue with DALL-E 3, though: as Windows Central observed, Microsoft has considerably reined in the tool since its recent revamp.

Now, we were already made aware that the image creation tool would employ a ‘content moderation system’ to stop inappropriate pics being generated, but it seems the censorship imposed is harsher than expected. This might be a reaction to the kind of content Bing AI users have been trying to get the system to create.

As Windows Central points out, there has been a lot of controversy about a generated image of Mickey Mouse carrying out the 9/11 attack (unsurprisingly).

The problem, though, is that beyond those kinds of extreme asks, as the article makes clear, some users are finding innocuous image creation requests being denied. Windows Central tried to get the chatbot to make an image of a man breaking a server rack with a sledgehammer, but was told this violated Microsoft’s terms of using Bing AI.

Last week, by contrast, the article’s author noted that they could create violent zombie apocalypse scenarios featuring popular (copyrighted) characters without Bing AI raising a complaint.


Analysis: Random censorship

The point here is that the censorship appears to be an overreaction – or at least that’s how it seems going by reports, we should add. Microsoft left the rules too slack in the initial implementation, it appears, but has now gone ahead and tightened things too much.

What really illustrates this is that Bing AI is even censoring itself, as highlighted by someone on Reddit. Bing Image Creator has a ‘surprise me’ button that generates a random image (the equivalent of Google’s ‘I’m feeling lucky’ button, if you will, that produces a random search). But here’s the kicker – the AI is going ahead, creating an image, and then censoring it immediately.

Well, we suppose that is a surprise, to be fair – and one that would seem to aptly demonstrate that Microsoft’s censorship of the Image Creator has maybe gone too far, limiting its usefulness at least to some extent. As we said at the outset, it’s a case of a step forward, then a quick step back.

Windows Central observes that it was able to replicate this scenario of Bing’s self-censorship, and that it’s not even a rare occurrence – it reportedly happens around a third of the time. It sounds like it’s time for Microsoft to do some more fine-tuning around this area, although in fairness, when new capabilities are rolled out, there are likely to be adjustments applied for some time – so perhaps that work could already be underway.
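As for why an image can be generated and then blocked at all, one plausible design – and we should stress this is our assumption about the general shape of such systems, not anything Microsoft has detailed – is a two-stage moderation pipeline: one filter checks the prompt before generation, and a separate classifier checks the finished image afterwards. A minimal sketch:

```python
# Toy sketch of a two-stage moderation pipeline - our assumption, not
# Microsoft's actual implementation. Because the image check runs *after*
# generation, a randomly prompted image can be created and then blocked
# immediately, matching the 'surprise me' behavior described above.
def prompt_is_allowed(prompt: str) -> bool:
    banned_terms = {"gore", "violence"}  # stand-in for a real prompt filter
    return not any(term in prompt.lower() for term in banned_terms)

def image_is_allowed(image) -> bool:
    # Stand-in for a post-generation safety classifier scoring the output
    return image["safety_score"] > 0.5

def generate(prompt: str):
    if not prompt_is_allowed(prompt):
        return None  # rejected before any image is made
    image = {"pixels": "...", "safety_score": 0.4}  # pretend model output
    if not image_is_allowed(image):
        return None  # generated first, censored second
    return image

print(generate("surprise me"))  # None: an image was made, then blocked
```

If that’s roughly the design, the self-censorship looks less like a bug and more like an overly strict threshold on the second-stage classifier.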

The danger of Microsoft erring too strongly on the ‘better safe than sorry’ side of the equation is that this will limit the usefulness of a tool that, after all, is supposed to be about exploring creativity.

We’ve reached out to Microsoft to check what’s going on with Bing AI in this respect, and will update this story if we hear back.


Stability AI’s new text-to-audio tool is like a Midjourney for music samples

Stability AI is taking its generative AI tech into the world of music as the developer has launched a new text-to-audio engine called Stable Audio.

Similar to the Stable Diffusion model, Stable Audio can create short sound bites based on a simple text prompt. The company explains in its announcement post that the AI was trained on content from the online music library AudioSparx. It even claims the model is capable of creating “high-quality, 44.1 kHz music for commercial use”. To put that number into perspective, 44.1 kHz is the sample rate used for CD-quality audio – so it’s good, but not the greatest.

Stable Audio user interface (Image credit: Stability AI)

A free version of Stable Audio is currently available to the public, letting you generate and download 20 individual tracks a month. Each sound bite has a 45-second runtime, so they won’t be very long.
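For a rough sense of scale – this is our own back-of-the-envelope arithmetic, assuming uncompressed 16-bit stereo PCM, not a figure from Stability AI – here’s what one of those 45-second, 44.1 kHz clips amounts to:

```python
# Back-of-the-envelope size of one free-tier clip, assuming 16-bit stereo PCM.
SAMPLE_RATE = 44_100   # samples per second per channel (CD quality)
DURATION_S = 45        # free-tier clip length in seconds
CHANNELS = 2           # stereo
BYTES_PER_SAMPLE = 2   # 16-bit PCM

samples_per_channel = SAMPLE_RATE * DURATION_S
raw_bytes = samples_per_channel * CHANNELS * BYTES_PER_SAMPLE
print(f"{samples_per_channel:,} samples per channel")   # 1,984,500
print(f"{raw_bytes / 1_048_576:.1f} MiB uncompressed")  # ~7.6 MiB
```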

Prompting music

The text prompts you enter can be simple. Listening to the samples provided by Stability AI, “Car Passing By” sounds exactly as the title suggests – a car driving by in the distance, although it’s a little muffled. Conversely, you can also stack on details: one sample’s prompt involves Ambient Techno, an 808 drum machine, claps, a synthesizer, the word “ethereal”, 122 BPM, and a “Scandinavian Forest” (whatever that means). The result of this word combination is an ambient lo-fi hip-hop beat.

We took Stable Audio out for a quick spin, entering one prompt asking the AI to create a fast-paced garage rock song from the early 2000s, and it sort of accomplished the goal. The generated track matched the style, although it sounded really messy.

Personal Stable Audio input (Image credit: Future)

Unfortunately, we couldn’t get any further than that single input. At the time of writing, Stable Audio is seeing a huge influx of traffic from people rushing in to try out the model. The developer recommends trying again later, or the next day, if you’re met with nothing but a blank screen.

There is a catch with the free version – it’s for non-commercial use only. If you want to use the content commercially, you’ll have to purchase the $12-a-month Stable Audio Professional plan, which also offers 500 track generations a month, each with a duration of up to 90 seconds. There’s an Enterprise plan too, with custom audio durations and monthly generation counts, though you’ll have to contact Stability AI to set one up.

Imperfect tool

Do be aware that the technology isn’t perfect. The content sounds fine for the most part, but certain aspects can seem off. The mix in that Ambient Techno song mentioned earlier isn’t very good, in our opinion – it’s as if the bass and synthesizer are fighting over which will be the dominant sound, resulting in noise. Additionally, the AI doesn’t appear to be able to do vocals; it only does instrumentals.

Stable Audio is interesting for sure, but not something to rely on totally. We should note the company is asking users for feedback on how to improve the AI; a contact email can be found on the official announcement page.

If you plan on utilizing this tech for your own purposes, we recommend checking TechRadar’s list of the best audio editors for 2023 to fix any flaws you might come across.


ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future

Is ChatGPT old news already? It seems impossible, with the explosion of AI popularity seeping into every aspect of our lives – whether it’s digital masterpieces forged with the best AI art generators or helping us with our online shopping.

But despite being the leader in the AI arms race – and powering Microsoft’s Bing AI – it looks like ChatGPT might be losing momentum. According to SimilarWeb, traffic to OpenAI’s ChatGPT site dropped by almost 10% compared to the previous month, while metrics from Sensor Tower demonstrated that downloads of the iOS app are in decline too.

As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining on social media and OpenAI’s own forums about a dip in output quality from the chatbot.

The general consensus was that GPT-4 was able to generate outputs faster, but at a lower level of quality. Peter Yang, a product lead for Roblox, took to Twitter to decry the bot’s recent work, claiming that “the quality seems worse”. One forum user said the recent GPT-4 experience felt “like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup”.


Why is GPT-4 suddenly struggling?

Some users were even harsher, calling the bot “dumber” and “lazier” than before, with a lengthy thread on OpenAI’s forums filled with all manner of complaints. One user, ‘bitbytebit’, described it as “totally horrible now” and “braindead vs. before”.

According to users, there was a point a few weeks ago where GPT-4 became massively faster – but at the cost of performance. The AI community has speculated that this could be due to a shift in OpenAI’s design ethos behind the more powerful machine learning model – namely, breaking it up into multiple smaller models trained in specific areas, which can act in tandem to provide the same end result while being cheaper for OpenAI to run.
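That multi-model idea is essentially a mixture-of-experts-style arrangement: a lightweight router decides which smaller specialist model handles each query. The toy sketch below – with a keyword matcher standing in for what would really be a learned router, and canned strings standing in for real models – is only meant to illustrate the shape of the speculated design:

```python
# Toy mixture-of-experts-style routing: several small specialist 'models'
# plus a router that picks one per query. Illustrative only - the experts
# and routing rules here are stand-ins, not anything OpenAI has described.
EXPERTS = {
    "code": lambda q: f"[code model] answer to: {q}",
    "math": lambda q: f"[math model] answer to: {q}",
    "general": lambda q: f"[general model] answer to: {q}",
}

def route(query: str) -> str:
    # A real router would itself be a learned model; keywords stand in here
    lowered = query.lower()
    if any(k in lowered for k in ("def ", "bug", "function")):
        return "code"
    if any(k in lowered for k in ("integral", "sum", "prove")):
        return "math"
    return "general"

def answer(query: str) -> str:
    return EXPERTS[route(query)](query)

print(answer("Why does this function raise a TypeError?"))
```

The appeal, if the speculation is right, is cost: several small specialists are cheaper to run than one giant generalist – though quality can dip when the router sends a query to the wrong expert.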

OpenAI has yet to officially confirm this is the case – there has been no mention of such a major change to the way GPT-4 works. Still, it’s a credible explanation according to industry experts like Sharon Zhou, CEO of AI-building company Lamini, who described the multi-model idea as the “natural next step” in developing GPT-4.

AIs eating AIs

However, there’s another pressing problem with ChatGPT that some users suspect could be the cause of the recent drop in performance – an issue that the AI industry seems largely unprepared to tackle.

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online – including an unwanted torrent of AI-authored novels on Kindle Unlimited – means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.

ChatGPT app downloads have slowed, indicating a decrease in overall public interest. (Image credit: Future)

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent – especially since there’s yet to be any meaningful demonstration of how AI models might accurately differentiate between ‘real’ information and AI-generated content.
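The feedback loop is easy to demonstrate in miniature. In the sketch below – a deliberately crude analogue, not a claim about how LLMs actually train – a ‘model’ is just a Gaussian fitted to data, and each generation trains only on samples from the previous generation’s model. The diversity of its output steadily collapses:

```python
# Toy analogue of AI cannibalism / model collapse: each generation fits a
# Gaussian to samples drawn from the previous generation's fit. With no
# fresh human data entering the loop, the spread of the output tends to
# shrink toward nothing over successive generations.
import random
import statistics

random.seed(42)
data = [random.gauss(0, 1) for _ in range(25)]  # generation 0: 'human' data

for generation in range(1, 201):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    data = [random.gauss(mu, sigma) for _ in range(25)]  # train on own output
    if generation % 50 == 0:
        print(f"generation {generation:3d}: spread = {sigma:.4f}")
```

Real-world collapse is subtler than this, of course, but the underlying dynamic – errors and homogeneity compounding when models learn from their own output – is the one researchers worry about.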

Discussions around AI have largely focused on the risks it poses to society – for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed ‘too dangerous’ to be released. But content cannibalization is more of a risk to the future of AI itself – something that threatens to ruin the functionality of tools such as ChatGPT, which depend on original human-made materials in order to learn and generate content.

Do you use ChatGPT or GPT-4? If you do, have you felt that there’s been a drop in quality recently, or have you simply lost interest in the chatbot? I’d love to hear from you on Twitter. With so many competitors now springing up, is it possible that OpenAI’s dominance might be coming to an end? 


Google changed its privacy policy to reflect Bard AI’s data collecting, and we’re spooked

Google just changed the wording of its privacy policy, and it’s quite an eye-opening adjustment that has been applied to encompass the AI tech the firm is working with.

As TechSpot reports, there’s a section of the privacy policy where Google discusses how it collects information (about you) from publicly accessible sources, and a clarifying note there now reads: “For example, we may collect information that’s publicly available online or from other public sources to help train Google’s AI models and build products and features, like Google Translate, Bard and Cloud AI capabilities.”

Previously, that paragraph said the publicly available info would be used to train “language models”, and only mentioned Google Translate.

So, this section has been expanded to make it clear that training is happening with AI models and Bard.

It’s a telling change, and basically points out that anything you post online publicly may be picked up and used by Google’s Bard AI.


Analysis: So what about privacy, plagiarism, and other concerns?

We already knew that Google’s Bard, and indeed Microsoft’s Bing AI for that matter, are essentially giant data hoovers, extracting and crunching online content from all over the web to refine conclusions on every topic under the sun that they might be questioned on.

This change to Google’s privacy policy makes it crystal clear that its AI is operating in this manner, and seeing that laid out in cold, hard text on the screen may make some folks step back and question it a bit more.

After all, Google has had Bard out for a while now, so has been working in this manner for some time, and has only just decided to update its policy? That in itself seems pretty sly.

Don’t want stuff you’ve posted publicly online to be used to train Google’s big AI machinery? Well, tough. If it’s out there, it’s fair game, and if you want to argue with Google, good luck with that. And that’s before we get to the obvious concerns around not just basic privacy, but plagiarism – if an AI reply draws on content written by others, scooped up in Bard’s training, where do the boundaries lie? Of course, it’d be impractical (or indeed impossible) to police that anyway.

There are broader issues around accuracy and misinformation when data is scraped from the web in a major-scale fashion, too, of course.

On top of this, there are worries recently expressed by platforms like Reddit and Twitter, with Elon Musk apparently taking a stand against “scraping people’s public Twitter data to build AI models” via those frustrating rate limits that have just been brought in (which could be a big win for Zuckerberg and Threads, ultimately).

All of this is a huge minefield, really, but the big tech outfits making big strides with their LLM (large language model) data-scraping AIs are simply forging ahead, eyes fixed on their rivals and the race to establish themselves at the forefront, seemingly with barely a thought for how the practical side of this equation will play out.
