Windows 11’s AI-powered feature to make games run more smoothly is for Copilot+ PCs only, we’re afraid

Windows 11 is getting a trick to help the best PC games run more smoothly, although this previously rumored feature comes with a catch – namely that it will only be available to those who have a Copilot+ PC with a Snapdragon X Elite processor.

The feature in question, which was leaked in preview builds of Windows 11 earlier this year, is called Auto Super Resolution (or Auto SR), and the idea is that it automatically upscales the resolution of a game (or indeed app) in real-time.

An upscaling feature like this effectively means the game – and it seems gaming is very much the focus (we’ll come back to that) – is run at a certain (lower) resolution, with the image upscaled to a higher resolution.

This means that something running at, say, 720p, can be upscaled to 1080p or Full HD resolution, and look nearly as good as native 1080p – but it can be rendered faster (because it’s really still 720p). If this sounds familiar, it’s because there are similar solutions already out there, such as Nvidia DLSS, AMD FSR, and Intel XeSS to name a few.
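
To make the principle concrete, here’s a minimal Python sketch (using the Pillow library) of the render-low, upscale-high idea. It’s purely illustrative – Auto SR itself relies on an AI model accelerated by the NPU rather than a simple resize, and the file name and resolutions below are just assumptions for the example.

```python
# Conceptual sketch only: Auto SR's real upscaler is an AI model running on the
# NPU, not a simple filter. This just illustrates the render-low, upscale-high idea.
from PIL import Image

def upscale_frame(frame_path: str, target_size=(1920, 1080)) -> Image.Image:
    """Load a frame rendered at a lower resolution (say, 1280x720) and
    stretch it to the target resolution for display."""
    low_res = Image.open(frame_path)                    # the cheap-to-render frame
    return low_res.resize(target_size, Image.LANCZOS)   # upscale for output

# Hypothetical usage: render at 720p for speed, display at 1080p.
# upscale_frame("frame_720p.png").save("frame_1080p.png")
```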

As outlined by Microsoft in its fresh details about Copilot+ PCs (highlighted by VideoCardz), the catch is that Auto SR is exclusive to these laptops. In fact, you need to be running the Qualcomm Snapdragon X Elite, so the lesser Plus version of this CPU is ruled out (for now anyway).

The other caveat to bear in mind here is that to begin with this is just for a “curated set of games,” so it’ll have a rather limited scope initially.


Analysis: The start of a long upscaling journey

When it was just a leak, there was some debate about whether Auto SR might be a feature for upscaling anything – games or apps – but Microsoft specifically talks about PC games here, and so that’s the intended use in the main. We also expected it to be some kind of all-encompassing tech in terms of game support, and that clearly isn’t the case.

Eventually, though, we’d think Auto SR will have a much broader rollout, and maybe that’ll happen before too long. After all, AI is being pushed heavily as helping gamers too – as a kind of gaming Copilot – so this is another string to that bow, and an important one we can imagine Microsoft working hard on.

Of course, the real fly in the ointment is the requirement for a Snapdragon X Elite chip, which rules out most PCs. This is likely due to the demanding nature of the task, and the feature being built around the presence of a beefy NPU (Neural Processing Unit) to accelerate the AI workloads involved. Only Qualcomm’s new Snapdragon X has a peppy enough NPU to deal with this, or so we can assume – but that won’t be the case for long.

Newer laptop chips from Intel, such as Lunar Lake (and Arrow Lake), and AMD’s Strix Point are inbound for later this year, and will deliver the goods in terms of the NPU, qualifying as the engine for a Copilot+ PC – and therefore being able to run Auto SR.

Naturally, we still need to see how well Microsoft implements this feature, and how upscaling games leveraging a powerful NPU works out. But as mentioned, the company has so much riding on AI, and the gaming side of the equation appears to be important enough, that we’d expect Microsoft will be trying its best to impress.


OpenAI’s Sora just made another brain-melting music video and we’re starting to see a theme

OpenAI's text-to-video tool has been a busy bee recently, helping to make a short film about a man with a balloon for a head and giving us a glimpse of the future of TED Talks – and now it's rustled up its first official music video for the synth-pop artist Washed Out (below).

This isn't the first music video we've seen from Sora – earlier this month we saw this one for independent musician August Kamp – but it is the first official commissioned example from an established music video director and artist.

That director is Paul Trillo, an artist who's previously made videos for the likes of The Shins and shared this new one on X (formerly Twitter). He said the video, which flies through a tunnel-like collage of high school scenes, was “an idea I had almost 10 years ago and then abandoned”, but that he was “finally able to bring it to life” with Sora.

It isn't clear exactly why Sora was an essential component for executing a fairly simple concept, but it helped make the process much simpler and quicker. Trillo points to one of his earlier music videos, The Great Divide for The Shins, which uses a similar effect but was “entirely 3D animated”.

As for how this new Washed Out video was made, it required less non-Sora help than the Shy Kids' Air Head video, which involved some lengthy post-production to create the necessary camera effects and consistency. For this one, Trillo said he used text-to-video prompts in Sora, then cut the resulting 55 clips together in Premiere Pro with only “very minor touch-ups”.

The result is a video that, like Sora's TED Talks creation (which was also created by Trillo), hints at the tool's strengths and weaknesses. While it does show that digital special effects are going to be democratized for visual projects with tight budgets, it also reveals Sora's issues with coherency across frames (as characters morph and change) and its persistent sense of uncanny valley.

As in the TED Talks video, a common way to get around these limitations is the dreamy fly-through technique, which ensures that characters are only on-screen fleetingly and that any weird morphing is part of the look rather than a jarring mistake. While it works for this video, it could quickly become a trope if it's overused.

A music video tradition

Two people sitting on the top deck of a bus (Image credit: OpenAI / Washed Out)

Music videos have long been pioneers of new digital technology – the Dire Straits video for Money For Nothing in 1985, for example, gave us an early taste of 3D animation, while Michael Jackson's Black Or White showed off the digital morphing trick that quickly became ubiquitous in the early 90s (see Terminator 2: Judgment Day).

While music videos lack the cultural influence they once did, it looks like they'll again be a playground for AI-powered effects like the ones in this Washed Out creation. That makes sense because Sora, which OpenAI expects to release to the public “later this year”, is still well short of being good enough to be used in full-blown movies.

We can expect to see these kinds of effects everywhere by the end of the year, from adverts to TikTok promos. But like those landmark effects in earlier music videos, they will also likely date pretty quickly and become visual cliches that go out of fashion.

If Sora can develop at the same rate as OpenAI's flagship tool, ChatGPT, it could evolve into something more reliable, flexible, and mainstream – with Adobe recently hinting that the tool could soon be a plug-in for Adobe Premiere Pro. Until then, expect to see a lot more psychedelic Sora videos that look like a mashup of your dreams (or nightmares) from last night.


Turns out the viral ‘Air Head’ Sora video wasn't purely the work of AI we were led to believe it was

A new interview with the director behind the viral Sora clip Air Head has revealed that AI played a smaller part in its production than was originally claimed. 

In an interview with Fxguide, Patrick Cederberg (who did the post-production for the viral video) confirmed that OpenAI's text-to-video program was far from the only force involved in its production. The 1-minute and 21-second clip was made with a combination of traditional filmmaking techniques and post-production editing to achieve the look of the final picture.

Air Head was made by Shy Kids and tells the short story of a man with a literal balloon for a head. While there is a human voiceover, the way OpenAI pushed the clip on social channels such as YouTube certainly left the impression that the visuals were purely powered by AI – but that's not entirely true.

As revealed in the behind-the-scenes clip, a ton of work was done by Shy Kids, who took the raw output from Sora and cleaned it up into the finished product. This included manually rotoscoping the backgrounds, removing the faces that would occasionally appear on the balloons, and color correcting.

Then there's the fact that Sora takes a ton of time to actually get things right. Cederberg explains that there were “hundreds of generations at 10 to 20 seconds apiece”, which were then tightly edited in what the team described as a “300:1” ratio of what was generated versus what was primed for further touch-ups.

Such manual work also included editing out the head, which would appear and reappear, and even changing the color of the balloon itself, which would appear red instead of yellow. While Sora was used to generate the initial imagery with good results, there was clearly a lot more happening behind the scenes to make the finished product look as good as it does – so we're still a long way out from instantly generated movie-quality productions.

Sora remains tightly under wraps, save for a handful of carefully curated projects that have been allowed to surface, with Air Head among the most popular. The clip has over 120,000 views at the time of writing, with OpenAI touting it as “experimentation” with the program, downplaying the obvious work that went into the final product.

Sora is impressive but we're not convinced

While OpenAI has done a decent job of showcasing what its text-to-video model can do, the lack of transparency is worrying.

Air Head is an impressive clip by a talented team, but it was subject to a ton of editing to get the final product to where it is in the short. 

It's not quite the one-click-and-you're-done approach that many of the tech's boosters have represented it as. It turns out to be a tool that can be used to enhance imagery rather than create it from scratch – something that's already common enough in video production – which makes Sora seem less revolutionary than it first appeared.


Google IO 2024 lineup confirmed – 5 new things we’re expecting to see, from Wear OS 5 to Gemini wizardry

Google IO 2024 is approaching fast, with the big G's festival for Android 15, Wear OS 5, Android TV and more kicking off on May 14. And we now have an official schedule to give us some hints of the software (and maybe hardware) announcements in the pipeline.

The Google IO 2024 schedule (spotted by @MishaalRahman on X, formerly Twitter) naturally doesn't reveal any specifics, but it does confirm where we'll see some big new software upgrades.

The keynote, which will be well worth tuning into, will kick off at 10am PT / 5pm GMT on May 14 (which works out as 3am AEST in Australia). But the details of the follow-up sessions give us a taster of what will be shown off, including our first proper look at Wear OS 5.

That's confirmed in the 'Building for the future of Wear OS' session, which will help developers “discover the new features of Wear OS 5”. Considering the smartwatch platform appeared to be flirting with the Google Graveyard not long ago, that's good news. We'll presumably hear more about a release date at the event, and maybe even a Pixel Watch 3.

What else does the schedule reveal? Android 15 was always a shoo-in for this year's show, so it's no surprise that the OS will be covered alongside “generative AI, form factors” and more at Google IO 2024. 

Thirdly, AI will naturally be a huge general theme, with Google Gemini a consistent thread across the event. Developers will discover “new ways to build immersive 3D maps” and how to make “next-gen AI apps with Gemini models”. Gemini will also power new apps for Google Chat and create new content from images and video, thanks to Google's multi-modal Gemini Pro Vision model. 

Fans of Android Auto will also be pleased to hear that it'll likely get some upgrades, too, with one developer session titled “Android for Cars: new in-car experiences”. Likewise, Google TV and Android TV OS will get a mention, at the very least, with one session promising to show off “new user experience enhancements in Google TV and the latest additions to the next Android TV OS platform”. 

Lastly, ChromeOS will get some upgrades, with a session promising “new” features and some new “world-class experiences” for Chromebooks. Surprisingly, even Google Pay gets a mention in the schedule, even though it will officially be discontinued a few weeks after Google IO 2024 on June 4, in favor of Google Wallet. Who knows, perhaps we'll even be treated to a tour of the Google Graveyard, including its latest inhabitant, Google Podcasts.

Will there be hardware at Google IO 2024?

A phone on an orange background showing the Google IO 2024 homepage (Image credit: Google)

Because Google IO 2024 is a developer conference, its sessions are all themed around software – but we'll almost certainly see lots of new hardware treats announced during the keynote, too.

On the phones front, the Google Pixel 8a has now almost fully leaked, pointing to an imminent announcement for the mid-ranger. Similarly, we've also seen leaked photos of the Google Pixel 9 alongside rumors of a Pixel 9 Pro (both of which could deliver iPhone-style satellite connectivity).

This week, rumors about a refreshed Pixel Tablet (rather than a Pixel Tablet 2) suggested it could also make its bow at Google's conference. A Google Pixel Fold 2 is also on the cards, though we have also heard whispers of a Pixel 9 Pro Fold instead.

As always, we can expect some surprises too, like when Google teased its live-translation glasses at Google IO 2022, which then sadly disappeared in a cloud of vaporware. Let's hope its new ideas for this year's conference stick around a little longer.


Microsoft targets another corner of Windows 11 with – you guessed it – adverts, and we’re getting a bit fed up with this

Microsoft is testing adding a fresh batch of ads to the Windows 11 interface, this time in the Start menu.

Recent digging in preview builds had suggested this move was in the cards, and now those cards have been dealt to testers in the Windows 11 preview Beta channel with a new build (version 22635).

The ads are being placed in the ‘Recommended’ panel of the Start menu, and consist of highlighted apps from the Microsoft Store that you might want to try.

These promoted pieces of software appear with a brief description in the Recommended section, alongside the other content such as your commonly-used (already installed) apps.

As Microsoft makes clear in the blog post introducing the build, this is only rolling out in the Beta channel, and just in the US. Also, you can turn off the app promotions if you wish.

Testers who want to do so need to open the Settings app, head to Personalization > Start, and switch off the slider for ‘Show recommendations for tips, app promotions, and more.’
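
For the curious, here’s a rough Python sketch of flipping that same preference programmatically via the registry. Treat it as an assumption-heavy illustration: the Start_IrisRecommendations value name comes from community reports of this toggle’s backing setting rather than any official documentation, and it may change or behave differently in this Beta build – the Settings route above is the supported way to do it.

```python
# Assumption-heavy sketch: flips the 'Show recommendations for tips, app
# promotions, and more' preference via the registry rather than the Settings app.
# The Start_IrisRecommendations value name is based on community reports for this
# toggle, not official documentation, so treat it as an assumption.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

def set_start_recommendations(enabled: bool) -> None:
    """Set the Start menu recommendations toggle for the current user."""
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start_IrisRecommendations", 0,
                          winreg.REG_DWORD, 1 if enabled else 0)

# set_start_recommendations(False)  # turn app promotions off
```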


Analysis: Just trying stuff out…

As mentioned, this idea was already flagged up as hidden in test builds, but now it’s a reality – at least for a limited set of testers in the US. In fact, Microsoft clarifies that it is “beginning to roll this out to a small set of Insiders [testers]” so it sounds like the firm is really being tentative. On top of that, Microsoft writes: “We regularly try out new experiences and concepts that may never get released with Windows Insiders to get feedback.”

In other words – don’t panic – we’re just trying out this concept a little bit. It probably won’t ever happen – move along, there’s nothing to see here. Anyway, you get the idea: Microsoft is very aware it needs to tread carefully here, and rightly so.

Advertising like this, wrapped up as suggestions or recommendations, is becoming all too common a theme with Windows 11. Prompting of one kind or another has been floating around in the recent past, whether it’s to encourage folks to sign up for a Microsoft Account or to use OneDrive as part of a backup strategy; slipping ads into Outlook is another recent example, as are recommendations for websites to visit, in much the same vein as the app recommendations in this Beta build.

In this case, the idea appears to be driving traffic towards the Microsoft Store – which Microsoft has been making a lot of efforts with lately to improve performance (and the store has come on leaps and bounds in that regard, to be fair).

We don’t want to sound like a broken record, but sadly, we’re going to. We’re of the firm belief that you can monetize a free product with advertising – no one can argue with that – but when a product is already paid for, shoving in ads on top – particularly with an OS, where you’re cluttering the interface – is just not on.

Microsoft may argue that these recommendations could prove useful, especially if they’re targeted to the user – though there could be privacy issues therein, if that’s the way this ends up working – but still, we don’t think it’s right to be inserting these adverts into the UI, no doubt turned on by default. Yes, you can turn them off – thankfully – but you shouldn’t have to in a paid OS.

It’s up to testers to feed back on this one, and let Microsoft know how they feel.


Meta teases its next big hardware release: its first AR glasses, and we’re excited

Meta’s Reality Labs division – the team behind its VR hardware and software efforts – has turned 10 years old, and to celebrate the company has released a blog post outlining its decade-long history. However, while a trip down memory lane is fun, the most interesting part came right at the end, as Meta teased its next major new hardware release: its first-ever pair of AR glasses.

According to the blog post, these specs would merge the currently distinct product pathways Meta’s Reality Labs has developed – specifically, melding its AR and VR hardware (such as the Meta Quest 3) with the form factor and AI capabilities of its Ray-Ban Meta Smart Glasses to, as Meta puts it, “deliver the best of both worlds.”

Importantly for all you Quest fans out there, Meta adds that its AR glasses wouldn’t replace its mixed-reality headsets. Instead, it sees them being the smartphones to the headsets’ laptop/desktop computers – suggesting that the glasses will offer solid performance in a sleek form factor, but with less oomph than you’d get from a headset.

Before we get too excited, though, Meta hasn’t said when these AR specs will be released – and unfortunately they might still be a few years away.

When might we see Meta’s AR glasses?

A report from The Verge back in March 2023 shared an apparent Meta Reality Labs roadmap that suggested the company wanted to release a pair of smart glasses with a display in 2025, followed by a pair of 'proper' AR smart glasses in 2027.

We’re ready for Meta’s next big hardware release (Image credit: Meta)

However, while we may have to wait some time to put these things on our heads, we might get a look at them in the next year or so.

A later report that dropped in February this year, this time via Business Insider, cited unnamed sources who said a pair of true AR glasses would be demoed at this year’s Meta Connect conference. Dubbed 'Orion' by those who claim to be in the know, the specs would combine Meta’s XR (a catchall for VR, AR, and MR) and AI efforts – which is exactly what Meta described in its recent blog post.

As always, we should take rumors with a pinch of salt, but given that this latest teaser came via Meta itself it’s somewhat safe to assume that Meta AR glasses are a matter of when, not if. And boy are we excited.

We want Meta AR glasses, and we want ‘em now 

Currently Meta has two main hardware lines: its VR headsets and its smart glasses. And while it’s rumored to be working on new entries to both – such as a budget Meta Quest 3 Lite, a high-end Meta Quest Pro 2, and the aforementioned third-generation Ray-Ban glasses with a screen – these AR glasses would be its first big new hardware line since it launched the Ray-Ban Stories in 2021.

And the picture Meta has painted of its AR glasses is sublime.

Firstly, while Meta’s current Ray-Ban smart glasses aren’t yet the smartest, a lot of major AI upgrades are currently in beta – and should be launching properly soon.

The Ray-Ban Meta Smart Glasses are set to get way better with AI (Image credit: Future / Philip Berne)

Its Look and Ask feature combines the intelligence of ChatGPT – or in this instance its in-house Meta AI – with the image-analysis abilities of an app like Google Lens. This apparently lets you identify animals, discover facts about landmarks, and plan a meal based on the ingredients you have – it all sounds very sci-fi, and actually useful, unlike some AI applications.

Then take those AI abilities and combine them with Meta’s first-class Quest platform, which is home to the best software and developers working in the XR space.

While many apps likely couldn’t be ported to the new system due to hardware restrictions – as the glasses might not offer controllers, will probably be AR-only, and might be too small to offer as powerful a chipset or as much RAM as its Quest hardware – we hope that plenty will make their way over. And Meta’s existing partners would plausibly develop all-new AR software to take advantage of the new system.

Based on the many Quest 3 games and apps we’ve tried, even if just a few of the best make their way to the specs, they’d help make Meta’s new product feel instantly useful – a factor that’s a must for any new gadget.

Lastly, we’d hopefully see Meta’s glasses adopt the single-best Ray-Ban Meta Smart Glasses feature: their design. These things are gorgeous, comfortable, and their charging case is the perfect combination of fashion and function. 

We couldn’t ask for better-looking smart specs than these (Image credit: Meta)

Give us everything we have already design-wise, and throw in interchangeable lenses so we aren’t stuck with sunglasses all year round – which in the UK where I'm based are only usable for about two weeks a year – and the AR glasses could be perfect.

We’ll just have to wait and see what Meta shows off, either at this year’s Meta Connect or in the future – and as soon as they're ready for prime time, we’ll certainly be ready to test them.


Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Move over, eye-tracking and handset controls for VR headsets and AR glasses – according to Meta CEO Mark Zuckerberg, the company is “close” to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast (shown below), Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started to discuss the Ray-Ban Meta Smart Glasses that use AI and their camera to answer questions about what you see (though annoyingly this is still only available to some lucky users in beta form).

He then went on to discuss “one of the wilder things we’re working on,” a neural interface in the form of a wristband – Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature, unlike the first human subject to be implanted with the tech.

Meta’s EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would allow you to merely think about how you want to move your hand, and that movement would happen in the virtual world without requiring big real-world motions.

Zuckerberg has shown off Meta’s prototype EMG wristband before in a video (shown below) – though not the headset it works with – but what’s interesting about his podcast statement is he goes on to say that he feels Meta is close to having a “product in the next few years” that people can buy and use.

Understandably he gives a rather vague release date and, unfortunately, there’s no mention of how much something like this would cost – though we’re ready for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility and accessibility in Meta’s AR and VR tech.

The next next-gen XR advancement?

Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via their Look and Ask feature, or respond to a text message you’ve been sent without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you’d rather keep private.

The EMG wristband allows you to type out these messages using subtle hand gestures so you can maintain a higher level of privacy – though as the podcast hosts note this has issues of its own, not least of which is schools having a harder time trying to stop students from cheating in tests. Gone are the days of sneaking in notes, it’s all about secretly bringing AI into your exam.

Then there are utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success, at least for the Ray-Ban Meta Smart Glasses, is that they’re sleek and lightweight – if you glance at them they’re not noticeably different to a regular pair of Ray-Bans.

Adding cameras, sensors, and a chipset for managing hand gestures may affect this slim design. That is unless you put some of this functionality and processing power into a separate device like the wristband. 

The Xreal Air 2 Pro’s displays (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they’ll need to have in-built displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll just have to wait to see what the next Meta smart glasses have in store for us.

Lastly, there’s accessibility. By their very nature, AR and VR are very physical things – you have to physically move your arms around, make hand gestures, and push buttons – which can make them very inaccessible for folks with disabilities that affect mobility and dexterity.

These kinds of brain signal sensors start to address this issue. Rather than having to physically act, someone could think about doing it and the virtual interface would interpret these thoughts accordingly.

Based on demos shown so far, some movement is still required to use Meta’s neural interface, so it’s far from the perfect solution – but it’s the first step to making this tech more accessible, and we’re excited to see where it goes next.


Elon Musk’s Neuralink has performed its first human brain implant, and we’re a step closer to having phones inside our heads

Neuralink, Elon Musk's brain interface company, achieved a significant milestone this week, with Musk declaring on X (formerly Twitter), “The first human received an implant from Neuralink yesterday and is recovering well.”

Driven by concerns that AI might soon outpace (or outthink) humans, Musk first proposed the idea of a brain-to-computer interface, then called Neural Lace, back in 2016, envisioning an implant that could overcome limitations inherent in human-to-computer interactions. Musk claimed that an interface that could read brain signals and deliver them directly to digital systems would massively outpace our typical keyboard and mouse interactions.

Four years later, Musk demonstrated early clinical trials with an uncooperative pig, and in 2021 the company installed the device in a monkey that used the interface to control a game of Pong.

It was, in a sense, all fun and games – until this week, and Musk's claim of a human trial and the introduction of some new branding.

Neuralink's first product is now called 'Telepathy' which, according to another Musk tweet, “Enables control of your phone or computer, and through them almost any device, just by thinking.”

As expected, these brain implants are not, at least for now, intended for everyone. Back in 2020, Musk explained that the intention is “to solve important spine and brain problems with a seamlessly implanted device.” Musk noted this week that “Initial users will be those who have lost the use of their limbs. Imagine if Stephen Hawking could communicate faster than a speed typist or auctioneer. That is the goal.”

Neuralink devices like Telepathy are bio-safe implants comprising small disk-like devices (roughly the thickness of four coins stuck together) with ultra-fine wires trailing out of them that connect to various parts of the brain. The filaments read neural spikes, and a computer interface interprets them to understand the subject's intentions and translate them into action on, say, a phone or a desktop computer. In this first trial, Musk noted that “Initial results show promising neuron spike detection,” but he didn't elaborate on whether the patient was able to control anything with his mind.
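
As a rough illustration of what 'spike detection' means in principle – and emphatically not Neuralink's proprietary pipeline – here's a toy Python example that injects artificial spikes into a noisy synthetic signal and flags threshold crossings. The sampling length, spike amplitude, and threshold are arbitrary assumptions.

```python
# Toy illustration of spike detection on synthetic data - not Neuralink's pipeline.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 5000)                  # background "neural" noise
spike_times = rng.choice(5000, size=20, replace=False)
signal[spike_times] += 8.0                           # inject artificial spikes

threshold = 5.0 * np.std(signal)                     # simple amplitude threshold
detected = np.flatnonzero(signal > threshold)        # samples flagged as spikes

print(f"Injected {len(spike_times)} spikes, flagged {len(detected)} threshold crossings")
```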

Musk didn't describe the surgical implantation process. Back in 2020, though, Neuralink introduced its Link surgery robot, which it promised would implant the Neuralink devices with minimal pain, blood, and, we're guessing, trauma. Considering that the implant is under the skin and skull, and sits on the brain, we're not sure how that's possible. It's also unclear if Neuralink used Link to install 'Telepathy.'

The new branding is not that far-fetched. While most people think of telepathy as people transmitting thoughts to one another, the definition is “the communication of thoughts or ideas by means other than the known senses.”

A phone in your head

Still, Musk has a habit of using hyperbole when describing Neuralink. During one early demonstration, he only half-jokingly said “It’s sort of like if your phone went in your brain.” He also later added that, “In the future, you will be able to save and replay memories.”

With the first Neuralink Telepathy device successfully installed, however, Musk appears to be somewhat more circumspect. There was no press conference, or parading of the patient before the reporters. All we have are these few tweets, and scant details about a brain implant that Musk hopes will help humans stay ahead of rapidly advancing AIs.

It's worth noting that for all of Musk's bluster and sometimes objectionable rhetoric, he was more right than he knew about where the state of AI would be by 2024. Back in 2016, there was no ChatGPT, Google Bard, or Microsoft Copilot. We didn't have AI built into Windows or Photoshop's Firefly-powered tools, realistic AI images and videos, or convincing AI deepfakes. Concerns about AIs taking jobs are now real, and the idea of humans falling behind artificial intelligence sounds less like a sci-fi fantasy and more like our future.

Do those fears mean we're now more likely to sign up for our brain implants? Musk is betting on it.


Ray-Ban Meta smart glasses finally get the AI camera feature we were promised, but there’s a catch

When the Ray-Ban Meta smart glasses launched they did so without many of the impressive AI features we were promised. Now Meta is finally rolling out these capabilities to users, but they’re still in the testing phase and only available in the US.

During their Meta Connect 2023 announcement, we were told the follow-up to the Ray-Ban Stories smart glasses would get some improvements we expected – namely a slightly better camera and speakers – but also some unexpected AI integration.

Unfortunately, when we actually got to test the specs out, their AI features boiled down to very basic commands. You can instruct them to take a picture, record a video, or contact someone through Messenger or WhatsApp. In the US you could also chat to a basic conversational AI – like ChatGPT – though this was still nothing to write home about.

While the glasses’ design is near-perfect, the speakers and camera weren’t impressive enough to make up for the lack of AI smarts. So overall, in our Ray-Ban Meta Smart Glasses review, we didn’t look too favorably on the specs.

Press the button or ask the AI to take a picture (Image credit: Meta)

Our perception could soon be about to change drastically, however, as two major promised features are on their way: Look and Ask, and Bing integration.

Look and Ask is essentially a wearable, voice-controlled Google Lens with a few AI-powered upgrades. While wearing the smart glasses you can say “Hey Meta, look and…” followed by a question about what you can see. The AI will then use the camera to scan your environment so it can provide a detailed answer to your query. On the official FAQ, possible questions include “What can I make with these ingredients?”, “How much water do these flowers need?” and “Translate this sign into English.”

To help the Meta glasses provide better information when you’re using the conversational and Look and Ask features, the specs can also now access the internet via Bing. This should mean they can source more up-to-date data, letting them answer questions about sports matches that are currently happening, or provide real-time info on which nearby restaurants are the best rated, among other things.

Still not perfect

Orange Ray-Ban Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink (Image credit: Meta)

It all sounds very science fiction, but unfortunately these almost magical capabilities come with a catch. For now, the new features – just like the existing conversational AI – are in beta testing. 

So the glasses might have trouble with some of your queries and provide inaccurate answers, or not be able to find an answer at all. What’s more, as Meta explains in its FAQ, any AI-processed pictures you take while part of the beta will be stored by Meta and used to train its AI – so your Look and Ask snaps aren’t private.

Lastly, the Meta Ray-Ban smart glasses beta is only available in the US. So if you live somewhere else, like me, you won’t be able to try these features out – and probably won’t until 2024.

If you are in the US and happy with the terms of Meta’s Privacy Policy, you can sign up for the Early Access program and start testing these new tools. For everyone else, hopefully these features won’t be in beta for long, or at least won’t stay US-exclusive – otherwise we’ll be left continuing to wonder why we spent $299 / £299 / AU$449 on smart specs that aren’t all that much better than dumb Ray-Ban Wayfarers at half the cost.


YouTube reveals powerful new AI tools for content creators – and we’re scared, frankly

YouTube has announced a whole bunch of AI-powered tools (on top of its existing bits and pieces) that are designed to make life easier for content creators on the platform.

As The Verge spotted, at the ‘Made on YouTube’ event which just took place, one of the big AI revelations made was something called ‘Dream Screen’, an image and video generation facility for YouTube Shorts.


This lets a video creator simply type in something they’d like for a background – such as, say, a panda drinking a cup of coffee – and given that request, the AI will take the reins and produce such a video (or image) background for the clip.

This is how the process will be implemented to begin with – you prompt the AI, and it makes something for you – but eventually, creators will be able to remix content to produce something new, we’re told.

YouTube Studio is also getting an infusion of AI tools that will suggest content that could be made by individual creators, generating topic ideas for videos that might suit them, based on what’s trending with viewers interested in the kind of content that creator normally deals in.

A system of AI-powered music recommendations will also come into play to furnish audio for any given video.


Analysis: Grab the shovel?

Is it just us, or does this sound rather scary? Okay, so content creators may find it useful and convenient to be able to drop in AI-generated video or image backgrounds really quickly, and have some music layered on top, and so on.

But isn’t this going to just ensure a whole heap of bland – and perhaps homogenous – content flooding onto YouTube? That seems the obvious danger, and maybe one compounded by the broader idea of suggested content that people want to see (according to the great YouTube algorithm) being provided to creators on YouTube.

Is YouTube set to become a video platform groaning under the collective weight of content that gets quickly put together, thanks to AI tools, and shoveled out by the half-ton?

While YouTube seems highly excited about all these new AI utilities and tools, we can’t help but think it’s the beginning of the end for the video site – at least when it comes to meaningful, not generic, content.

We hope we’re wrong, but this whole brave new direction fills us with trepidation more than anything else. A tidal wave of AI-generated this, that, and the other, eclipsing everything else is clearly a prospect that should be heavily guarded against.
