Meta says fewer Quest 3s are gathering dust – is VR’s biggest issue a thing of the past?

During this year’s Game Developers Conference (GDC 2024) Meta has revealed that the Meta Quest 3 has higher retention rates than any of its previous VR headsets – suggesting one of VR’s biggest problems might be a thing of the past.

VR gadgets have become incredibly popular in recent years – just look at the sales success of the Oculus Quest 2, and the massive hype around the Apple Vision Pro – but there’s been a quiet killer for them all: retention. According to an internal report shared by The Verge in March 2023, Meta was concerned about the relatively low engagement of Quest 2 users, and Mark Rabkin, Meta’s vice president of VR, apparently stressed to staff that the company needs to “be better at growth and retention.”

That emphasis seems to have paid off, with Chris Pruett, Meta's Director of Content Ecosystem, now saying that the Quest 3 has a higher retention rate than any previous Meta / Oculus headset.

Why are people using their Meta Quest 3 more? 

Meta hasn’t given any direct explanation of why its latest headset is proving better at retaining owners’ attention than its predecessors, but we have more than a few theories.

Meta Quest 3 missing its faceplate showing its insides

The Quest 3’s better specs and software are a big win (Image credit: iFixit)

The first, and perhaps most important, is the Quest 3's simplicity. If it’s charged up, you can just slip it on and start playing a VR game instantly – unlike older PCVR models. This is likely also why the original Oculus Quest had the highest retention of any Oculus headset ever, according to John Carmack in 2019 (via UploadVR).

Another likely reason the Quest 3 has been able to take things up a notch in terms of retention is software. The Quest store has been up and running for roughly five years, and in that time developers have created a superb VR catalog of cross-platform and exclusive software.

The Quest 3 has also raised the bar with good specs, and solid mixed reality passthrough, adding even more opportunities for app creators to develop meaningful software that owners want to use regularly. 

This, and the headset’s less bulky, more comfortable design, are, as we see it, the two biggest reasons why we’ve started using the device more regularly than the Quest 2.

Lastly, there’s a belief that the Quest 3’s higher cost could be helping its retention levels. At $299 / £299 / AU$479, the Quest 2 was almost a tech impulse buy – especially considering it came out not long before the pandemic, a period when people typically had more disposable income.

At $499.99 / £479.99 / AU$799.99, meanwhile – and launching at a time when disposable income is typically a lot lower – the Quest 3 is much more of a considered purchase. So if you aren’t planning to use the new Meta device fairly often, you’re more likely to talk yourself out of buying it.

A Meta Quest 3 owner playing tennis in VR while in their dorm room with their desk behind them.

(Image credit: Meta)

Why does higher retention matter? 

Beyond making it easier to get a VR squad together to play a multiplayer game, why does a higher retention rate matter to you or us?

From a hardware perspective, it suggests that the Quest 3 is doing something right – whether that's the mixed reality focus, its newfound balance of specs and cost, or a mixture of factors. This could clue us in to what future devices might look like: they could follow the Quest 3’s lead by leaning further into mixed reality, or the mainline Quest headset could maintain a similar price point (in exchange for better specs) – which could pave the way for the rumored cheaper Meta Quest 3 Lite.

It may also encourage more VR software development, as it shows developers that there is a reliable market for meaningful VR software. So if you have a Quest headset already, you might see more and better apps launch in the future.

Given Meta made the announcement at GDC 2024, it's likely hoping that this latter point proves true. However, given the speed of hardware and software development, we'll likely have a little while to wait and see what the Quest 3’s newfound popularity means in practical terms.

You might also like

TechRadar – All the latest technology news

Read More

ChatGPT is broken again and it’s being even creepier than usual – but OpenAI says there’s nothing to worry about

OpenAI has been enjoying the limelight this week with its incredibly impressive Sora text-to-video tool, but it looks like the allure of AI-generated video might’ve led to its popular chatbot getting sidelined, and now the bot is acting out.

Yes, ChatGPT has gone insane – or, more accurately, briefly went insane sometime in the past 48 hours. Users have reported a wild array of confusing and even threatening responses from the bot; some saw it get stuck in a loop of repeating nonsensical text, while others were subjected to invented words and weird monologues in broken Spanish. One user even stated that when asked about a coding problem, ChatGPT replied with an enigmatic statement that ended with a claim that it was ‘in the room’ with them.

Naturally, I checked the free version of ChatGPT straight away, and it seems to be behaving itself again now. It’s unclear at this point whether the problem was only with the paid GPT-4 model or also the free version, but OpenAI has acknowledged the problem, saying that the “issue has been identified” and that its team is “continuing to monitor the situation”. It did not, however, provide an explanation for ChatGPT’s latest tantrum.

This isn’t the first time – and it won’t be the last

ChatGPT has had plenty of blips in the past – when I set out to break it last year, it said some fairly hilarious things – but this one seems to have been a bit more widespread and problematic than past chatbot tomfoolery.

It’s a pertinent reminder that AI tools in general aren’t infallible. We recently saw Air Canada forced to honor a refund after its AI-powered chatbot invented its own policies, and it seems likely that we’re only going to see more of these odd glitches as AI continues to be implemented across the different facets of our society. While these current ChatGPT troubles are relatively harmless, there’s potential for real problems to arise – that Air Canada case feels worryingly like an omen of things to come, and may set a real precedent for human moderation requirements when AI is deployed in business settings.

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event

OpenAI CEO Sam Altman doesn’t want you (or his shareholders) to worry about ChatGPT. (Image credit: JASON REDMOND/AFP via Getty Images)

As for exactly why ChatGPT had this little episode, speculation is currently rife. This is a wholly different issue to user complaints of a ‘dumber’ chatbot late last year, and some paying users of GPT-4 have suggested it might be related to the bot’s ‘temperature’.

That’s not a literal term, to be clear: when discussing chatbots, temperature refers to the degree of focus and creative control the AI exerts over the text it produces. A low temperature gives you direct, factual answers with little to no character behind them; a high temperature lets the bot out of the box and can result in more creative – and potentially weirder – responses.
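In sampling terms, temperature scales a model’s raw scores (logits) before they’re turned into probabilities. A minimal sketch of the idea – using made-up logits for three candidate tokens, not anything from OpenAI’s actual models – shows the effect:

```python
import math
import random

def temperature_probs(logits, temperature):
    """Convert raw model scores (logits) into a probability
    distribution, scaled by temperature before the softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature):
    """Pick a token index at random, weighted by the
    temperature-scaled probabilities."""
    probs = temperature_probs(logits, temperature)
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Made-up logits for three candidate next tokens
logits = [2.0, 1.0, 0.1]

cold = temperature_probs(logits, temperature=0.2)  # low temp: focused
hot = temperature_probs(logits, temperature=2.0)   # high temp: flatter

print(round(cold[0], 3))  # top token dominates at low temperature
print(round(hot[0], 3))   # probability spreads out at high temperature
```

At a low temperature nearly all of the probability lands on the top-scoring token, so responses are predictable; at a high temperature the distribution flattens, and the model is more likely to pick unlikely – and sometimes nonsensical – tokens.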

Whatever the cause, it’s good to see that OpenAI appears to have a handle on ChatGPT again. This sort of ‘chatbot hallucination’ is a bad look for the company, considering its status as the spearpoint of AI research, and threatens to undermine users’ trust in the product. After all, who would want to use a chatbot that claims to be living in your walls?


Mark Zuckerberg says we’re ‘close’ to controlling our AR glasses with brain signals

Move over, eye-tracking and handset controls for VR headsets and AR glasses: according to Meta CEO Mark Zuckerberg, the company is “close” to selling a device that can be controlled by your brain signals.

Speaking on the Morning Brew Daily podcast (shown below), Zuckerberg was asked to give examples of AI’s most impressive use cases. Ever keen to hype up the products Meta makes – he also recently took to Instagram to explain why the Meta Quest 3 is better than the Apple Vision Pro – he started to discuss the Ray-Ban Meta Smart Glasses that use AI and their camera to answer questions about what you see (though annoyingly this is still only available to some lucky users in beta form).

He then went on to discuss “one of the wilder things we’re working on,” a neural interface in the form of a wristband – Zuckerberg also took a moment to poke fun at Elon Musk’s Neuralink, saying he wouldn’t want to put a chip in his brain until the tech is mature, unlike the first human subject to be implanted with the tech.

Meta’s EMG wristband can read the nervous system signals your brain sends to your hands and arms. According to Zuckerberg, this tech would allow you to merely think about how you want to move your hand, and that movement would happen in the virtual world without requiring big real-world motions.

Zuckerberg has shown off Meta’s prototype EMG wristband before in a video (shown below) – though not the headset it works with – but what’s interesting about his podcast statement is he goes on to say that he feels Meta is close to having a “product in the next few years” that people can buy and use.

Understandably he gives a rather vague release date and, unfortunately, there’s no mention of how much something like this would cost – though we’re ready for it to cost as much as one of the best smartwatches – but this system could be a major leap forward for privacy, utility and accessibility in Meta’s AR and VR tech.

The next next-gen XR advancement?

Currently, if you want to communicate with the Ray-Ban Meta Smart Glasses via their Look and Ask feature, or respond to a text message you’ve been sent without getting your phone out, you have to talk to them. This is fine most of the time, but there might be questions you want to ask or replies you want to send that you’d rather keep private.

The EMG wristband would allow you to type out these messages using subtle hand gestures, so you can maintain a higher level of privacy – though as the podcast hosts note, this has issues of its own, not least of which is schools having a harder time trying to stop students from cheating in tests. Gone are the days of sneaking in notes; it’s all about secretly bringing AI into your exam.

Then there are the utility advantages. While this kind of wristband would also be useful in VR, Zuckerberg has mostly talked about it being used with AR smart glasses. The big success, at least for the Ray-Ban Meta Smart Glasses, is that they’re sleek and lightweight – if you glance at them, they’re not noticeably different to a regular pair of Ray-Bans.

Adding cameras, sensors, and a chipset for managing hand gestures may affect this slim design. That is unless you put some of this functionality and processing power into a separate device like the wristband. 

The photo shows the inside displays, which sit behind the Xreal Air 2 Pro AR glasses’ shades

The Xreal Air 2 Pro’s displays (Image credit: Future)

Some changes would still need to be made to the specs themselves – chiefly, they’ll need built-in displays, perhaps like the Xreal Air 2 Pro’s screens – but we’ll just have to wait to see what the next Meta smart glasses have in store for us.

Lastly, there’s accessibility. By their very nature, AR and VR are very physical things – you have to physically move your arms around, make hand gestures, and push buttons – which can make them very inaccessible for folks with disabilities that affect mobility and dexterity.

These kinds of brain signal sensors start to address this issue. Rather than having to physically act, someone could simply think about the action, and the virtual interface would interpret those thoughts accordingly.

Based on the demos shown so far, some movement is still required to use Meta’s neural interface, so it’s far from a perfect solution – but it’s a first step toward making this tech more accessible, and we're excited to see where it goes next.


Qualcomm exec says next-gen Windows coming mid-2024 – but will it be Windows 12?

Microsoft’s next-gen version of Windows, whatever that might be called, is set to pitch up in the middle of 2024.

The Register reports that Qualcomm’s CEO, Cristiano Amon, made the revelation in an earnings call for the company. On the call, Amon mentioned the next incarnation of Windows when talking about the incoming Snapdragon X Elite chip, which is going to be the engine of some of the AI-powered laptops Microsoft keeps banging on about (this is the year of AI PCs, remember?).

Amon said: “We’re tracking to the launch of products with this chipset [Snapdragon X Elite] tied with the next version of Microsoft Windows that has a lot of the Windows AI capabilities … We’re still maintaining the same date, which is driven by Windows, which is mid-2024, getting ready for back-to-school.”

So, the release date of the middle of 2024 for the laptops driven by Qualcomm’s chip is pitched there because that’s when next-gen Windows will come out.

This echoes previous chatter from the grapevine that the middle of 2024 should be the release date for the next iteration of Windows, including a specific mention of June from one source (add salt, naturally, as even if this is Microsoft’s plan right now, it may not pan out).


Analysis: Navigating the nuances

There are a lot of nuances to all these rumors and official declarations about the launch of next-gen Windows (we’ve also heard from Intel, as well as Qualcomm). Firstly, let’s clarify: will the next desktop OS from Microsoft be Windows 12, or Windows 11 24H2?

The simple answer to this is we don’t know, but all the current evidence is stacking up to indicate that the next release will be Windows 11 24H2 – although that doesn’t completely rule out the possibility of Windows 12. On balance, Windows 12 is probably more likely to arrive in 2025 though (if that’s what it ends up being called – the point is, this will be an all-new Windows, not just an update to Windows 11).

However, there will be a different kind of all-new Windows arriving in 2024, even if we get Windows 11 24H2 this year, and not Windows 12, as seems likely. Confused? Well, don’t be: what Microsoft is ushering in – for the middle of this year – is a new platform Windows is built on. This new take on the underpinnings of the desktop OS is called Germanium and it brings a whole lot of work under the hood for better performance and security. The kind of things you won’t see, but will still benefit from.

Germanium is the platform that AI PCs will be built on, and when Qualcomm’s CEO mentions Snapdragon X Elite-powered laptops arriving in the middle of 2024 with the next version of Windows, that’s what Amon is really talking about: Germanium.

In short, this doesn’t mean we’ll get next-gen Windows 12 in mid-2024, but that if it’s the Windows 11 24H2 update – which as mentioned is most likely the case, going by the rumors flying around – it’ll still be a new Windows (the underlying platform, not the actual OS you interact with).

The other twist is that Windows 11 24H2 (or indeed Windows 12, if that slim chance pans out) won’t be coming to everyone in the middle of the year. The plan is to bring out the new Germanium-powered Windows, whatever it’s called, on new laptops (AI PCs) first – perhaps in July, going by previous buzz from the grapevine – but it’ll be a while before existing Windows 11 PCs get the upgrade. That rollout to all users is rumored to be happening in September, but whatever the case, it’ll be later in the year before everyone using Windows 11 gets the upgrade.


Apple says AI features are coming to your iPhone ‘later this year’ – here’s what to expect

For the past year or two, the world has watched as a string of incredible artificial intelligence (AI) tools have appeared, and everyone has been wondering one thing: when will Apple join the party? Now, we finally have an answer.

On a recent earnings call (via The Verge), Apple CEO Tim Cook revealed that AI tools are coming to the company’s devices as soon as “later this year.” Cook then added that “I think there’s a huge opportunity for Apple with generative AI.” While the Apple chief didn’t reveal any specifics, the small amount he did discuss has already been enough to get tongues wagging and for speculation to run riot.

It’s no surprise that Apple is working on generative AI tools – Cook admitted as much back in August 2023, when he explained that Apple has been developing its own generative AI “for years.” But the latest admission is the first time we’ve seen anyone put a launch date on things, even if it is a very rough date.

Given that this is a software update (and a big one at that), it seems likely that Apple has its Worldwide Developers Conference (WWDC) in mind. The company will use this June event to unveil its upcoming operating systems and software upgrades (like iOS 18). And with its audience mostly made up of developers, it makes sense for Apple to tease something like generative AI that could give devs a new tool in their iOS arsenal.

As well as that, industry analyst Jeff Pu has previously claimed that iOS 18 will be one of Apple’s biggest software updates ever precisely because of its inclusion of generative AI, so Cook’s statements seem to confirm Pu’s claim. That means there could be a lot to look forward to at WWDC – and some big new features coming to your iPhone.

What's en route?

The most likely upgrade that Cook is referring to is a rebooted version of Apple's Siri voice assistant. Bloomberg's reliable Apple commentator Mark Gurman recently predicted that iOS 18 will be “one of the biggest iOS updates – if not the biggest – in the company's history” and that this will be largely tied to a “big upgrade to Siri”.

According to another respected leaker, Revegnus, Apple is building a proprietary LLM (large language model) to “completely revamp Siri into the ultimate virtual assistant”. It's about time – while Siri was impressive when it landed over a decade ago, it has since plateaued. So we can expect a much more conversational, and more powerful, voice assistant by the end of 2024.

Close-up of the Siri interface

(Image credit: Shutterstock / Tada Images)

But what else might benefit from the generative AI that Apple's been working on? Messages, Apple Music and Pages are all expected to receive significant AI-based improvements later this year, with some of Apple's rivals recently giving us hints of what to expect. Google Messages will soon get added Bard powers for texting help, while Spotify has already shown that the future of streaming is AI-powered DJs.  

Lastly, there's photography and video, but it seems likely that Apple will tread more carefully than Samsung and Google here. The Galaxy S24 cameras are all about AI skills, which are something of a mixed bag. While Instant Slow-Mo (which generates extra frames of video to turn standard 4K/60p video into slow motion clips) is very clever and useful, Generative Edit opens the floodgates to digital fakery (even with its watermarks).

It'll be fascinating to see how Apple treads this line across all aspects of the iPhone. But one other key iPhone feature, privacy, could also put the brakes on Apple getting too carried away with generative AI… 

Why Apple is taking its time

Siri

(Image credit: Unsplash [Omid Armin])

Apple has been consistently criticized for not launching its own generative AI, especially as arch-rival Microsoft has been so decisive in spreading its Copilot AI to almost every aspect of Windows and its own apps.

But there’s a likely reason for Apple’s sluggishness, and it comes down to user privacy. Apple takes a strong stance on this, and often touts its devices’ privacy-protecting capabilities as one of their main benefits. AI tools are known to sweep up user data and have a track record of privacy compromises, so it’s no surprise that Apple has taken its time here, presumably to ensure its AI is as pro-privacy as possible.

As well as that, Apple doesn’t usually rush into a new market before it is ready, instead preferring to wait a little longer before blowing its rivals away with something it thinks is far superior. We saw that with the original iPhone, for example, and also with the Apple Vision Pro, and it seems that generative AI is just the latest thing to get this treatment from Apple.

Whether Apple’s own AI actually is better than the likes of ChatGPT and Copilot remains to be seen, but it looks like we’ll find out sooner rather than later.


Netflix says the Apple Vision Pro is way too niche for it to make an app for the headset

Apple describes the Vision Pro as a “spatial computing” device, but right now it's arguably as much a pair of cinema goggles that can play 3D movies. That billing has been undermined by the absence of a few apps from streaming's big players – and Netflix has just explained why it's steering clear of the headset, for now.

In an interview with Stratechery (via The Verge), Netflix's co-CEO Greg Peters revealed why Netflix hasn't made a native app (or even made its iPad app available) for the Vision Pro. In a completely fair yet somehow slightly withering observation, he states that the Vision Pro “is so subscale that it's not particularly relevant to most of our members”.

Peters adds that Netflix has to make sure “that we’re not investing in places that are not really yielding a return”, but that “we’re always in discussions with Apple to try and figure that out”. In other words, Netflix isn't ruling out making an app for the Vision Pro in the future, but only when Apple's headset becomes a lot more mainstream.

That could be some way off. Early estimates suggest the Vision Pro's first weekend sales were around 180,000 units, with demand likely to taper off significantly. When you consider that Netflix now has 260 million subscribers worldwide – helpfully bolstered by the success of its ad-supported tier – you can see why it might be taking a watch-and-wait approach.

Yet Netflix's conservative approach to the Vision Pro also reflects some historically frosty relations with Apple. Netflix hasn't let you sign up to its app through Apple TV for many years to avoid Apple taking a cut of the revenue. And Netflix also still hasn't fully integrated with the TV app on Apple's streaming box, which lets you see content from all of your streaming services in a single carousel.

Whether it'll be a similar story for Netflix on the Apple Vision Pro remains to be seen, but for now, the mixed-reality headset will be missing the world's biggest TV streaming app, alongside Spotify and YouTube. 

A sensible move or a snub?

The Disney app running on the Apple Vision Pro

(Image credit: Apple)

Right now, the Vision Pro is arguably a very expensive developer kit that's also available to buy in limited quantities – so Netflix's stance is completely understandable.

Greg Peters does add that Netflix and Apple are in regular contact, stating that “we’re always in discussions with Apple” and that “we’ll see where things go with Vision Pro”. 

That's far from a closed door – and yet Netflix hasn't even allowed its iPad app to run on Apple's headset. You can watch Netflix on a web browser on the Vision Pro, but that's hardly a premium experience.

Daring Fireball's John Gruber even recently suggested that a Netflix iPad app for Vision Pro did exist, but that the streaming giant had a change of heart – and that the decision was made out of “pure corporate spite”, rather than anything technical.

Whatever the reality behind Netflix not even offering its iPad app on the Vision Pro, Apple certainly has its work cut out to convince some of the world's biggest apps to join its $3,499 “spatial computing” party. It's rubbing many developers the wrong way with its potential approach to sideloading on the iPhone, and we'll likely need to wait until at least the Vision Pro 2 before it gets close to being mainstream.


Elon Musk says xAI is launching its first model and it could be a ChatGPT rival

Elon Musk’s artificial intelligence startup company, xAI, will debut its first long-awaited AI model on Saturday, November 4.

The billionaire made the announcement on X (the platform formerly known as Twitter), stating the tech will be released to a “select group” of people. He even boasted that “in some important respects, it is the best that currently exists.”

It’s been a while since we last heard anything from xAI. The startup hit the scene back in July, revealing it’s run by a team of former engineers from Microsoft, Google, and even OpenAI. Shortly after the debut on July 14, Musk held a 90-minute-long Twitter Spaces chat where he talked about his vision for the company. During the chat, Musk stated that his startup will seek to create “a good AGI with the overarching purpose of just trying to understand the universe”. He wants it to run contrary to what he believes is problematic tech from the likes of Microsoft and Google.

Yet another chatbot

AGI stands for artificial general intelligence: the concept of an AI having “intelligence” comparable to, or beyond, that of a normal human being. The problem is that it's more of an idea of what AI could be than a literal piece of technology. Even Wired, in its coverage of AGI, states there’s “no concrete definition of the term”.

So does this mean xAI will reveal some kind of super-smart model that will help humanity as well as being able to hold conversations like something out of a sci-fi movie? No, but that could be the lofty end goal for Elon Musk and his team. We believe all we’ll see on November 4 is a simple chatbot like ChatGPT. Let’s call it “ChatX”, since the billionaire has an obsession with the letter “X”.

Does “ChatX” even stand a chance against the likes of Google Bard or ChatGPT? The latter has been around for almost a year now and has seen multiple updates, becoming more refined each time. Maybe xAI has solved the hallucination problem – that would be great to see. Unfortunately, it's possible ChatX could just be another vehicle for Musk to spread his ideas and beliefs.

Analysis: A personal truth spinner

Musk has talked about wanting an alternative to ChatGPT that focuses on providing the “truth”, whatever that means. Musk has been a vocal critic of how fast companies have been developing their own generative AI models with seemingly reckless abandon. He even called for a six-month pause on AI training in March. Obviously, that didn’t happen, and the technology has advanced by leaps and bounds since then.

It's worth mentioning that Twitter, under Musk's management, has been known to comply with censorship requests by governments from around the world, so Musk's definition of truth seems dubious at best. Either way, we’ll know soon enough what the team's intentions are. Just don’t get your hopes up.

While we have you, be sure to check out TechRadar's list of the best AI writers for 2023.


Amazon says you might have to pay for Alexa’s AI features in the future

Amazon might be mulling a subscription charge for Alexa’s AI features at some point down the road – though that may not be for some time yet, by the sound of things.

This nugget of info emerged from a Bloomberg interview with Dave Limp, who is SVP of Amazon Devices & Services currently, though he is leaving the company later this year. (Whispers on the grapevine are that Panos Panay, a big-hitting exec who just left Microsoft, will replace Limp).

Bloomberg’s Dave Lee broadly observed that the future of Alexa could involve a more sophisticated AI assistant, but one that device owners would need to fork out to subscribe to.

This would be an avenue of monetization, given that the previous hope for spinning out some extra cash – having folks order more stuff online using Alexa, bolstering revenue that way – just hasn’t worked out for Amazon (not in any meaningful fashion, at least).

After Limp talked about Amazon pushing forward using generative AI to build out Alexa’s features, Lee fired out a question about whether there’ll come a time when those Alexa AI capabilities won’t be free – and are offered via a subscription instead.

Limp replied in no uncertain terms: “Yes, we absolutely think that,” noting the costs of training the AI model (properly), and then adding: “But before we would start charging customers for this – and I believe we will – it has to be remarkable.”


Analysis: Superhuman assistance?

Amazon Alexa new

Dave Limp (above) is currently SVP of Amazon Devices & Services, but is leaving the company later this year. (Image credit: Future / Lance Ulanoff)

So, there’s your weighty caveat. Limp makes it clear, in fact, that expectations would be built around the realization of a ‘superhuman’ assistant if Amazon was to charge for Alexa’s AI chops as outlined.

Limp clarifies that Alexa, as it is now, almost certainly won’t be charged for, and that the contemporary Alexa will remain free. He also suggested that Amazon has no idea of a pricing scheme yet for any future AI-powered Alexa that is super-smart.

This means the paid-for Alexa AI skills we’re talking about would be highly prized, and a long way down the development road for Amazon’s assistant. This isn’t anything that will happen remotely soon, but it is, nonetheless, a clear enough signal that this path of monetization is one Amazon is seriously considering traveling down. Eventually.

As to exactly what timeframe we might be talking about, Limp couldn’t be drawn beyond saying it isn’t “decades” or “years” away – the latter perhaps hinting that this could happen sooner than we might imagine.

We think it’ll be a difficult sell for Amazon in the nearer-term, though. Especially as plans are currently being pushed through to shove adverts into Prime Video early next year, and you’ll have to pay to avoid watching those ads. (As a subscriber to Prime, even though you’re paying for the video streaming service – and other benefits – you’ll still get adverts unless you stump up an extra fee).

If Amazon is seen to be watering down the value proposition of its services too much, or trying to force a burden of monetization in too many different ways, that’ll always run the risk of provoking a negative reaction from customers. In short, if the future of a super-sophisticated Alexa is indeed paying for AI skills, we’re betting this won’t be anytime soon – and the results better be darn impressive.

We must admit, we have trouble visualizing a ‘remarkable’ Alexa, too, especially when, as things currently stand, we can’t get the assistant to understand half the internet radio stations we want to listen to – a pretty basic duty.

Okay, so Amazon did have some interesting new stuff to show off with Alexa’s development last week, but we remain skeptical on how that’ll pan out in the real-world, and obviously more so on how this new ‘superhuman’ assistant will be in the further future. In other words, we’ll keep heaping the salt on for the time being…


Snapchat AI shenanigans caused by glitch not sentience, says company

The introduction of Snapchat’s AI – called ‘My AI’ – has already been met with resistance and controversy, and new reports from multiple users about the chatbot taking videos without permission could add fuel to the fire.

Several users took to social media to share stories of the Snapchat AI taking videos of their ceilings and walls without any input from them, and then posting the clip as a live Story to all their followers. These are actions that should only be available to human users, not AI, which is what set alarm bells ringing.

Some users on X, formerly known as Twitter, shared their thoughts on the issue; some seemed worried, while others made light of the turn of events. CNN reported on other users sharing their concerns as well. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling,” said another.

A spokesperson from Snap Inc. responded, confirming that the issue, which they said was quickly addressed, was just a glitch. “My AI experienced a temporary outage that’s now resolved,” according to the statement Snap gave to TechCrunch.

Snapchat announced its ChatGPT-powered AI chatbot service back in February 2023, and it proved to be an unpopular move once it launched on April 20 of the same year. For instance, Google searches for how to delete Snapchat increased by a staggering 488% worldwide by April 26. And the company itself has warned users not to trust the feature with private and sensitive information, nor to expect it to provide accurate information in return, according to its own Snapchat Support page.

The AI wall 

While AI chat can be a fun and sometimes useful tool when used correctly, incidents like this are a reminder that the tech behind it is still in its early stages. And even more troublesome is the fact that companies are releasing this tech into the wild, knowing that it still has plenty of kinks to work out.

This isn’t the first time that a company has rushed out AI features that weren't ready for prime time – Google Bard’s launch saw employees mocking it, with even Google CEO Sundar Pichai admitting that it was like a “souped-up Civic” taking on “more powerful cars.”

And this definitely isn’t the first time that suddenly implemented AI features have garnered backlash from users – Discord had to backtrack on its reworded privacy policy regarding AI implementation and data collection.

It seems to be a wall that companies keep running into in their race to integrate AI features into their websites and services. And as long as the AI craze is still going strong, we’ll likely keep seeing this same scenario repeat.
