Microsoft continues to frustrate users with ads in the operating system – this time plaguing the MSN Weather app

Microsoft has made another move to push more advertising into Windows 11, with fresh ads arriving in the stock Weather app installed by default. So, alongside the likes of the Start menu and the Settings app, the MSN Weather app will now also have ads – more intrusive ones, too, once again pointing towards a system-wide ‘adpocalypse’, as it were.

According to Windows Latest, a new server-side update now places two ads in the default Weather app as soon as you open it, and the situation is more dire than normal because the advertisements in question are pinned. In other words, even as you scroll down, looking at the forecasts and other details in the app, the ads will scroll, too, remaining constantly visible.

This is a pretty aggressive approach, similar to the Game Pass ad in the Settings app – and as I said in that instance, it seems like Microsoft is trying to usher in a whole new era of over-advertising. I fear that as time progresses, not only will we see more of these ads, but they might become more aggressive in terms of being unskippable and generally unavoidable.

Ads pinned to the Microsoft Weather app. (Image credit: Windows Latest)

Okay, so it could be argued that these are just small ads in the corner, and we all have to deal with ignoring or skipping advertisements in so much of our lives these days – but why should I do that on my PC, too? You’re telling me now that the new normal is just advertising everywhere I look – and not a single bit of technology is my own? 

I paid for my PC and its operating system, and I don’t expect to have to suffer through ads (which might be expected on a free OS, granted – but not one that’s charged for).

Also, while at the moment they’re only relatively little ads, the fear is that Microsoft might push boundaries in the future. If – or when, perhaps – these advertisements become more and more accepted, we could see personalized, bigger, unavoidable, and maybe one day even unskippable ads in Windows 11 (or a future version of the desktop OS). 

It’s not like these ads are placed in some obscure part of Windows 11; you’re often going to find yourself opening up the Settings app, Start menu, or perhaps perusing the weather forecast, and so on. If more advertisements are placed in more prominent places, at what point will that make using your computer infuriating? It’s a dangerous path to tread with Windows 11, but one Microsoft seems intent on exploring, sadly.

You might also like…

TechRadar – All the latest technology news

Read More

YouTube may be planning to give us new AI song generators this year – and this time the music labels could let it happen

The battle between the music industry and the rampant, often copyright-infringing use of copyrighted recordings to train AI models has been heating up for quite some time. But now YouTube is reportedly negotiating with record labels to pay for that privilege instead.

It seems that Sony Music Entertainment, Universal Music Group, and Warner Records are in talks with the Google-owned platform about paying to license their songs for AI training, according to an article from the Financial Times (and reported on by Engadget). However, if this deal goes through, the individual artists, not the record companies, will most likely have the last word on their participation.

It’s no coincidence that these giants have been the focus of YouTube, either. Artificial intelligence music makers Suno and Udio have recently been hit with major lawsuits filed by the Recording Industry Association of America (RIAA) and major music labels for copyright infringement. The RIAA has also been backed by the likes of Sony Music Entertainment, UMG Recordings, Inc., and Warner Records, Inc.

Furthermore, this isn’t even the first time YouTube has reportedly explored ways to properly compensate music artists for generative AI use. In August 2023, the video platform announced its partnership with Universal Music Group to create YouTube’s Music AI Incubator program. This program would partner with music industry talent like artists, songwriters, and producers to decide how to proceed with the advent of AI music.

Artists have been quite outspoken about generative AI use and music 

Judging from artists' past responses on the subject of AI, many of them have been very outspoken about its dangers and how it devalues their music. In April 2023, over 200 artists signed an open letter calling for protections against the predatory use of AI.

In a statement by the Artist Rights Alliance, those artists wrote: “This assault on human creativity must be stopped. We must protect against the predatory use of AI to steal professional artists' voices and likenesses, violate creators' rights, and destroy the music ecosystem.”

Even artists who are more open to generative AI, or have even benefited from its use in music, ask to be properly included in any decision-making around it, as asserted in an open letter from Creative Commons released in September 2023. 

According to said letter: “Sen. Schumer and Members of Congress, we appreciate…that your goal is to be inclusive, pulling from a range of ‘scientists, advocates, and community leaders’ who are actively engaged with the field. Ultimately, that must mean including artists like us.”

The general consensus from creatives in the music industry is that, whether for or against generative AI use, artists must be included in conversations and policy-making and that their works must be properly protected. And considering that artists are the ones with the most to lose, this is by far the best and most ethical way to approach this issue.

This app can add AI narration to any site or text – here’s how to make it work

Artificial intelligence-powered audio creator ElevenLabs has brought its synthetic voices to the iPhone with a new iOS app. The ElevenLabs Reader App will read out any uploaded text or website using ElevenLabs' library of synthetic and cloned voices, even your own if you want. 

The new app essentially turns books, website content, and any other text into a kind of podcast hosted by whichever voice you want to hear. Users can listen to content by pasting a link, copying text, uploading a file, or selecting one of the preloaded stories, which are then read in the chosen voice from the library. The stories are public domain and come from Project Gutenberg, including “Cinderella,” “The Tale of Peter Rabbit,” and “The Adventures of Sherlock Holmes.” 

As for the voices, users can pick based on accent, style, and tone to match the text. That might mean switching from a warm, friendly voice reading a bedtime story to a child to a brisk, authoritative voice reading a scientific study. The app can run in the background like an audiobook or podcast and is clearly aimed at those who are multitasking, at least based on the promotional video. 

Narrate Your Life

The ElevenLabs Reader App only narrates in English for now and only in the U.S., Canada, and the UK. The company said it is “working on widening access, adding content download and audio sharing features, and adding all of the 29 languages available in ElevenLabs' wider library” thanks to its multilingual AI model. The app is included with a subscription to ElevenLabs' platform, though you can get three months of free access without an account. An Android version is also coming soon, with an early access waitlist available to sign up for.

“It's our mission to make content accessible in any language and voice, and everything we do is oriented around achieving that mission,” ElevenLabs head of growth Sam Sklar explained in a blog post about the new app. “Creating best-in-class AI audio models is not enough. Creators need tools through which they can create. And consumers need interfaces through which they can consume audio.”

macOS Sequoia has yet another cool feature to look forward to, this time adding a way to customize your AirPods audio experience

It seems like every day, there is a new macOS Sequoia feature to look forward to, or some kind of improvement in Apple’s incoming OS, with a freshly spotted one opening up the doors to improved accessibility on the audio front.

MacRumors has been busy playing with the macOS 15 developer beta and discovered this new functionality in System Settings. Under Headphone Accommodations (in Accessibility > Audio), you can tweak the sound for your AirPods and some Beats headphones. 

The settings therein let you amplify softer sounds – to make them more easily heard – and change the audio output frequencies to make your music, phone calls, and more clearer sounding (or at least that’s the idea). From what we can tell, the new settings you run with will carry over when using your AirPods on devices other than your Mac. 

This could be a really useful feature for those who are hard of hearing to some degree, and it’s an ability that has been on iOS devices for some time. So, while it’s undoubtedly a very commendable step forward for accessibility with macOS, some folks out there are wondering why it took so long to bring this functionality across to the Mac.

Still, we’re glad to see it’s arriving, and in the run-up to the release of macOS Sequoia, we’re seeing a lot of new and interesting features and tweaks pop up that seem to be popular. 

A recent example is the fix for the annoying storage issue Mac users have to deal with when it comes to downloading apps, as well as the more anticipated changes like iPhone mirroring and a plethora of AI features powered by Apple Intelligence.

Windows 11 loses keyboard shortcut for Copilot, making us wonder if this is a cynical move by Microsoft to stoke Copilot+ PC sales

What’s going to drive Copilot+ PC sales, do you think? Superb AI acceleration chops? Windows on Arm getting emulation nailed for fast app and gaming performance (on Snapdragon X models)? No – it’s the Copilot key on the keyboard, dummy.

Surprised? Well, we certainly are, but apparently one of Microsoft’s selling points for Copilot+ PCs is the dedicated key to summon the AI on the keyboard.

We can draw that unexpected conclusion from a move Microsoft just made which seems pretty mystifying otherwise: namely the removal of the keyboard shortcut for Copilot from Windows 11.

As flagged up by Tom’s Hardware, the new Windows 11 preview (build 22635) in the Beta channel has dumped the keyboard shortcut (Windows key + C) that brings up the Copilot panel. This is an update that just happened (on June 19), after the preview build initially emerged on June 14.

Microsoft explains very vaguely that: “As part of the Copilot experience’s evolution on Windows to become an app that is pinned to the taskbar, we are retiring the WIN + C keyboard shortcut.”


An Acer Swift Go 14 on a desk

(Image credit: Future / James Holland)

Analysis: A cynical move by Microsoft?

What now? How is removing a useful keyboard shortcut part of the ‘evolution’ of Copilot? Surely it’s a step backwards to drop one of the ways to invoke the AI assistant on the desktop?

Now, if Microsoft had big plans for the Windows + C shortcut elsewhere – say, another piece of functionality that required this particular combo – the reasoning might at least be a little clearer. But by all accounts, there’s no replacement function here: Windows + C now does nothing.

As for the reason somehow being tied to Copilot shifting to become an app window, rather than a locked side panel in Windows 11, we don’t see how that has any relevance at all to whether you can open the AI with a keyboard shortcut or not.

As Tom’s Guide points out, seemingly the driver for this change is to make the Copilot key on the keyboard a more pivotal function, replacing the shortcut, but guess what – you only get that key on new Copilot+ PCs (right now anyway). So, the logical conclusion for the skeptical is that this is simply a fresh angle on helping to stoke sales for Copilot+ PCs.

It’s not like you can’t just click on the Copilot icon, of course, so you’re not lost at sea with no AI assistance all of a sudden – but that’s not the point. It is clearly a lost convenience, though, and it feels like a cynical move by Microsoft.

Tom’s Guide points out that you could use third-party key mapping software to restore the functionality of this particular shortcut, but the point is, you really shouldn’t have to bother jumping through such hoops. Come on, Microsoft – don’t pull stunts like this, or, if there is a good reason behind the change, share it, not some waffling soundbites about evolving Copilot.

AI-generated movies will be here sooner than you think – and this new Google DeepMind tool proves it

AI video generators like OpenAI's Sora, Luma AI's Dream Machine, and Runway Gen-3 Alpha have been stealing the headlines lately, but a new Google DeepMind tool could fix the one weakness they all share – a lack of accompanying audio.

A new Google DeepMind post has revealed a new video-to-audio (or 'V2A') tool that uses a combination of pixels and text prompts to automatically generate soundtracks and soundscapes for AI-generated videos. In short, it's another big step toward the creation of fully-automated movie scenes.

As you can see in the videos below, this V2A tech can combine with AI video generators (including Google's Veo) to create an atmospheric score, timely sound effects, or even dialogue that Google DeepMind says “matches the characters and tone of a video”.

Creators aren't just stuck with one audio option either – DeepMind's new V2A tool can apparently generate an “unlimited number of soundtracks for any video input” for any scene, which means you can nudge it towards your desired outcome with a few simple text prompts.

Google says its tool stands out from rival tech thanks to its ability to generate audio purely based on pixels – giving it a guiding text prompt is apparently optional. But DeepMind is also very aware of the major potential for misuse and deepfakes, which is why this V2A tool is being ringfenced as a research project – for now.

DeepMind says that “before we consider opening access to it to the wider public, our V2A technology will undergo rigorous safety assessments and testing”. It will certainly need to be rigorous, because the ten short video examples show that the tech has explosive potential, for both good and bad.

The potential for amateur filmmaking and animation is huge, as shown by the 'horror' clip below and one for a cartoon baby dinosaur. A Blade Runner-esque scene (below) showing cars skidding through a city with an electronic music soundtrack also shows how it could drastically reduce budgets for sci-fi movies. 

Concerned creators will at least take some comfort from the obvious dialogue limitations shown in the 'Claymation family' video. But if the last year has taught us anything, it's that DeepMind's V2A tech will only improve drastically from here.

Where we're going, we won't need voice actors

The combination of AI-generated videos with AI-created soundtracks and sound effects is a game-changer on many levels – and adds another dimension to an arms race that was already white hot.

OpenAI has already said that it has plans to add audio to its Sora video generator, which is due to launch later this year. But DeepMind's new V2A tool shows that the tech is already at an advanced stage and can create audio based purely on videos alone, rather than needing endless prompting.

DeepMind's tool works using a diffusion model that combines information from the video's pixels with the user's text prompts, then outputs compressed audio that is decoded into an audio waveform. It was apparently trained on a combination of video, audio, and AI-generated annotations.
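DeepMind hasn't published the architecture in any detail, so purely as an illustration of the data flow described above – conditioning signals from pixels and text steering an iterative denoising loop over a compressed audio latent, which is then decoded into a waveform – here's a toy sketch where every 'model' is a random stand-in. None of this reflects the real system; it only mirrors the pipeline's shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_v2a_pipeline(video_frames, text_embedding, steps=20):
    """Illustrative shape of a video-to-audio diffusion pipeline.

    The encoders, denoiser, and decoder here are trivial stand-ins;
    only the overall flow (pixels + text -> latent denoising ->
    decoded waveform) mirrors what DeepMind describes.
    """
    # 1. Encode the conditioning signals (stand-ins for learned encoders).
    video_features = video_frames.mean(axis=(1, 2))  # per-frame channel means
    conditioning = np.concatenate([video_features.ravel(), text_embedding])

    # 2. Start from noise in a compressed-audio latent space and
    #    iteratively "denoise" it, guided by the conditioning vector.
    latent = rng.standard_normal(256)
    guide = np.resize(conditioning, 256)
    for _ in range(steps):
        noise_estimate = 0.1 * (latent - guide)  # toy denoising step
        latent = latent - noise_estimate

    # 3. Decode the latent into a bounded waveform (stand-in decoder).
    return np.tanh(np.resize(latent, 16000))

frames = rng.random((8, 4, 4, 3))    # 8 tiny RGB frames as dummy video
text_emb = rng.standard_normal(32)   # stand-in text-prompt embedding
audio = toy_v2a_pipeline(frames, text_emb)
```

The interesting design point is step 2: because generation starts from fresh noise each time while the conditioning stays fixed, the same video can yield many different soundtracks, which is how DeepMind's tool can offer an "unlimited number" of audio options per clip.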

Exactly what content this V2A tool was trained on isn't clear, but Google clearly has a potentially huge advantage in owning the world's biggest video-sharing platform, YouTube. Neither YouTube nor its terms of service are completely clear on how its videos might be used to train AI, but YouTube's CEO Neal Mohan recently told Bloomberg that some creators have contracts that allow their content to be used for training AI models.

Clearly, the tech still has some limitations with dialogue and it's still a long way from producing a Hollywood-ready finished article. But it's already a potentially powerful tool for storyboarding and amateur filmmakers, and hot competition with the likes of OpenAI means it's only going to improve rapidly from here.

Your ChatGPT data automatically trains its AI models – unless you turn off this setting

If you rely on ChatGPT to run aspects of your life, and often pass fairly sensitive data to the AI, you might want to make sure you’ve opted out of its ‘Improve model for everyone’ setting. Otherwise OpenAI’s model will be training itself on what you tell it.

Before you panic, you should know that not all data is automatically passed over to ChatGPT’s training pool. Temporary Chats and Business plans will have this feature turned off by default. What’s more, OpenAI makes clear that the data is kept private and is purely used to improve the AI’s understanding of how language is used, rather than being used to create individualized profiles of users for advertising or other nefarious purposes.

Still, if you’re on a free or even a premium ChatGPT Plus account, anything you say will be helping to train ChatGPT by default. So how do you turn it off?

Three simple steps

Looking to opt out of contributing your data to the training of OpenAI's AI models? Here's how to do it:

A laptop screen on a pink and purple background showing ChatGPT's settings page

(Image credit: OpenAI / Future)
  • Start by clicking your profile picture in the top right corner of the ChatGPT screen.
  • You’ll then want to go into Settings, and the third option down will be Data Controls. Click it.
  • Once in this submenu, toggle ‘Improve the model for everyone’ off and close out of settings.

A more private AI era?

Apple Intelligence presentation

(Image credit: Apple)

Privacy in AI has always been an important topic, but it has been thrust firmly into the spotlight recently thanks to Apple’s WWDC 2024 keynote. 

This is where the company finally unveiled its Apple Intelligence model, and one of its core features is its top-tier data handling and privacy methods – which Apple has boasted are verified by independent third-parties.

It also follows Microsoft’s botched rollout of Recall, a Windows 11 feature for Copilot+ PCs where an AI takes screenshots of your display very frequently and logs everything you do on your PC so it can remind you of your actions later. Useful, sure, but also a potential privacy nightmare.

We expect privacy will only continue to be an important conversation, with users more and more wary of auto-on data sharing settings like ChatGPT’s 'Improve model for everyone', but we’ll have to wait and see how AI creators react.

Your old photos are getting a 3D makeover thanks to this huge Vision Pro update

With the unveiling of visionOS 2.0 for the Vision Pro at WWDC 24, Apple introduced many new features but left my wish to open up environments ungranted. Even so, aside from new display options for Mac Virtual Display and more control gestures, there is one feature that stands out from the rest.

When I reviewed the Vision Pro, I noted how emotional an experience it could be, especially viewing photos back on it. Looking at photos of loved ones who have since passed or even reliving moments that I frequently call up on my iPhone or iPad, there was something more about life-size or larger-than-life representations of the content. When shot properly, the most compelling spatial videos and photos give off a real feeling of intimacy and engagement.

The catch is that, currently, the only photos and videos that can be viewed in this way are those shot in Apple's spatial format, and that's something you can only do on an iPhone 15 Pro or 15 Pro Max.

However, in the case of photos, that's set to change with visionOS 2.

Make any photo more immersive

Apple Vision Pro – spatial photos visionos 2.0

(Image credit: Apple)

Photos that you view on a Vision Pro running the second generation of visionOS can be displayed as spatial photos thanks to the power of machine learning. This will add a left and right side to the 2D image to create the impression of depth and let the image effectively 'pop.' I cannot wait to give this a go, and I think it’ll give folks a more impactful experience with Apple's 'spatial computer.'

I also really like Apple’s approach here, as it won’t automatically present every photo as a spatial image – that could lead to some strange-looking shots, and there will also be photos that you’d rather leave in their original 2D form.

According to the visionOS 2.0 portion of Apple's keynote, the process is as simple as swiping through pictures within Photos and tapping a button to watch as machine learning kicks in, analyzes your photo, and adds depth elements. The resulting images really pop, and when viewed on a screen that could be as large as you want on the Vision Pro, the effect is striking.

I’ve already enjoyed looking at standard photos of key memories of my life with friends and family who are still here and some who have passed. Viewing them back on that grand stage is emotional, makes you think, and can be powerful. I’m hopeful that the option of engaging this 3D effect will make that impact even stronger.

It has the potential to greatly expand how much a Vision Pro owner actually uses the Photos app, considering that it’s a great way to view images on a large scale, be it a standard shot, ultra-wide, portrait, or even a panorama.

Mac Virtual Display expands, and improved gestures

Apple Vision Pro, Mac Virtual Display VisionOs 2.0

(Image credit: Apple)

While 'spatial photos' was the new feature that most caught my eye, it’s joined by two other new features in visionOS 2.0. For starters, Mac Virtual Display is set to get a big enhancement – you’ll be able to make the screen sizes much larger, almost like a curved display that wraps around, and one that will benefit from improved resolutions. That means more applications will run even better here.

Additionally, you can do more with hand gestures. Rather than hitting the Digital Crown to pull up the home screen, you can make a gesture similar to double-tapping to pull up that interface, while another gesture will let you easily access Control Center.

These new ways of interacting work whether the interface is overlaid on your real surroundings, in one of Apple’s immersive environments, or on Tatooine if you’re watching Disney Plus.

Microsoft has yanked Windows 11 24H2 update from testing – is this a bad sign?

Windows 11 24H2 has apparently been pulled back from testing for the time being, with Microsoft hitting the pause button presumably due to issues with the major update due to land later this year.

If you recall, the 24H2 update was sent out to the Release Preview channel back on May 22, but Windows Latest noticed that on a PC in that testing channel, the update was no longer being offered.

After further investigation into why that might be, the tech site stumbled across an update (from the end of last week) to the original blog post introducing the preview build, where Microsoft states: “We are temporarily pausing the rollout of Windows 11, version 24H2 to the Release Preview Channel. We will resume the rollout in the coming weeks.”

That’s all Microsoft has said on the matter, leaving the question of why the update has been yanked open to debate. Well, we say that, but there’s a fairly obvious reason you can discern from examining the posts in Microsoft’s Feedback Hub about the 24H2 update, and it’s seemingly had quite a few problems.

Windows Latest observes that there’s a notable bug with a ‘RunDLL’ error box that keeps popping up and annoying testers, plus much more in terms of general stability issues, with apps and games freezing, stuttering, or crashing. Nasty.


Analysis: Time to fret about a delay? We don’t think so

This all sounds a bit worrying, and might make you wonder whether the Windows 11 24H2 update might even be delayed – if there are gremlins crawling around the inner workings serious enough to get the upgrade pulled from testing for the time being. Microsoft’s timeframe of the “coming weeks” for the return of the final test version (Release Preview) of 24H2 doesn’t sound too comforting either – hinting at a lengthier pause, perhaps.

Then again, we shouldn’t read too much into that statement – it’s standard language commonly used in these kinds of situations. Also, remember that the 24H2 update is still a good way off. It’s not expected to arrive until September or October 2024, or thereabouts, so there’s still a lot of time to iron out any issues.

Rather than expecting that things are delayed, what’s more likely the case here is Microsoft was a bit too early in deploying Windows 11 24H2 to Release Preview. After all, we were a bit surprised when it emerged last month, and Microsoft did note that it was a very limited rollout initially (in an update to the blog post at the end of May). In other words, the company was being cautious here, and we can see why now.

Granted, there is a slight concern due to the issues present sounding pretty bad here, but for now, this feels like a misstep with an early release, rather than the alarm bells sounding for Windows 11 24H2 not being ready for its roughly rumored launch timeframe later this year.

Can your PC or Mac run on-device AI? This handy new Opera tool lets you find out

Opera wants to make it easy for everyday users to find out whether their PC or Mac can run AI locally, and to that end, has incorporated a tool into its browser.

When we talk about running AI locally, we mean on the device itself, using your system and its resources for the entire AI workload – in contrast to having your PC tap the cloud for the computing power to achieve the task at hand.

Running AI locally can be a demanding affair – particularly if you don’t have a modern CPU with a built-in NPU to accelerate AI workloads happening on your device – and so it’s pretty handy to have a benchmarking tool that tells you how capable your hardware is in terms of completing these on-device AI tasks effectively.

There is a catch though, namely that the ‘Is your computer AI ready?’ test is only available in the developer version of the Opera browser right now. So, if you want to give it a spin, you’ll need to download that developer (test) version of the browser.

Once that’s done, you can get Opera to download an LLM (large language model) with which to run tests, and it checks the performance of your PC in various ways (tokens per second, first token latency, model load time and more).
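Opera hasn't detailed exactly how its test computes these figures, but they're standard metrics for benchmarking local LLMs, and measuring them around any streaming model is straightforward: time the gap to the first token, then divide the token count by the total elapsed time. Here's a hypothetical sketch of that approach, with the model replaced by a simulated token stream:

```python
import time

def benchmark_generation(token_stream):
    """Measure first-token latency and throughput for a token stream.

    `token_stream` is any iterable that yields tokens as the model
    produces them (a real LLM client would stream them; here we fake it).
    """
    start = time.perf_counter()
    first_token_latency = None
    count = 0
    for _ in token_stream:
        if first_token_latency is None:
            # Time from request to the first token the model emits.
            first_token_latency = time.perf_counter() - start
        count += 1
    elapsed = time.perf_counter() - start
    return {
        "first_token_latency_s": first_token_latency,
        "tokens_per_second": count / elapsed if elapsed > 0 else 0.0,
        "total_tokens": count,
    }

def fake_model_stream(n_tokens=50, delay=0.001):
    # Stand-in for a local LLM: emits one token every `delay` seconds.
    for i in range(n_tokens):
        time.sleep(delay)
        yield f"token{i}"

stats = benchmark_generation(fake_model_stream())
print(stats)
```

Model load time, the third metric Opera lists, would simply be the wall-clock time around the call that loads the LLM's weights into memory, measured the same way with `time.perf_counter()`.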

If all that sounds like gobbledegook, it doesn’t really matter, as after running all these tests – which might take anything from just a few minutes to more like 20 – the tool will deliver a simple and clear assessment of whether your machine is ready for AI or not.

There’s an added nuance, mind: if you get the ‘ready for AI’ result then local performance is good, and ‘not AI ready’ is self-explanatory – you can forget running local AI tasks – but there’s a middle result of ‘AI functional.’ This means your device is capable of running AI tasks locally, but it might be rather slow, depending on what you’re doing.

Opera AI Benchmark Result

(Image credit: Opera)

There’s more depth to these results for experts, which you can explore if you wish, but it’s great to get an at-a-glance estimation of your PC’s on-device AI chops. It’s also possible to download different (increasingly large) AI models to test with, with heftier versions catering for cutting-edge PCs with the latest hardware and NPUs.


Analysis: Why local AI processing is important

It’s great to have an easily accessible test that anyone can use to get a good idea of their PC’s processing chops for local AI work. Doing AI tasks locally, kept within the confines of the device, is obviously important for privacy – as you’re not sending any data off your machine into the cloud.

Furthermore, some AI features will use local processing partly, or indeed exclusively, and we’ve already seen the latter: Windows 11’s new cornerstone AI functionality for Copilot+ PCs, Recall, is a case in point, as it works totally on-device for security and privacy reasons. (Even so, it’s been causing a storm of controversy since it was announced by Microsoft, but that’s another story).

So, to be able to easily discern your PC’s AI grunt is a useful capability to have, though right now, downloading the Opera developer version is probably not a hassle you’ll want to go through. Still, the feature will be inbound for the full version of Opera soon enough we’d guess, so you likely won’t have to wait long for it to arrive.

Opera is certainly getting serious about climbing the rankings of the best web browsers by leveraging AI, with one of the latest moves being drafting in Google Gemini to help supercharge its Aria AI assistant.
