Early Apple Vision Pro testers complain about the headset’s weight

Apple’s Vision Pro may simply be too heavy: a group of reporters complained about experiencing discomfort while wearing the headset in a recent hands-on demo.

On January 16, the company gave tech news sites Engadget and The Verge an opportunity to try out its upcoming device ahead of its release on February 2. The preview was largely positive, with Engadget’s Cherlynn Low commending the Vision Pro’s ability to create an immersive entertainment experience. But as Low notes, “the best heads-up display in the world will be useless if it can’t be worn for a long time”, and that’s exactly where the headset stumbled. Fifteen minutes into the demo, she began “to feel weighed down by the device”, with mild pain setting in soon after. The Verge’s Victoria Song echoed that sentiment, reporting that the Vision Pro pressed down on her brow, resulting in a mild headache.

This issue has been known for some time, with early testers complaining that the headset “feels too heavy” after a couple of hours of wear. TechRadar’s US Editor-in-Chief Lance Ulanoff, who has worn the Vision Pro a few times, says it “really needs [an] overhead strap” to support its weight. Fortunately, there is such a strap: the Dual Loop Band, which runs one strap over the top of your head and another around the back.

It’s unclear exactly how much of a difference the Dual Loop Band makes, but it presumably worked well enough, as neither review mentioned the weight as a problem again.

International release

Still, the underlying issue isn’t going away. It’s unlikely Apple will address it in time for the American launch in February, but the company could conceivably make changes for the international release.

Notable industry insider Ming-Chi Kuo posted new details on the Vision Pro’s potential global roll-out to his Medium newsletter, claiming it might come out just before WWDC 2024 in June. At the developers' event, Apple will also share information about visionOS with programmers to help promote a spatial computing ecosystem around the world.

There are a couple of things getting in the way of the global release.

First, there aren’t a lot of Vision Pro units to begin with. Apple wants to make sure the US launch and subsequent roll-out goes as smoothly as possible. What's more, the company needs to adjust the headset’s software so it complies with international regulations. Kuo finishes his post by saying the faster these matters are addressed, “the sooner Vision Pro will be available in more countries”.

There’s no word yet on exactly which nations will be part of the initial group to get Apple’s shiny new gadget after the US launch. However, Bloomberg’s Mark Gurman claims Canada, China, and the UK are among the countries the tech giant is considering for the first wave.

While we have you, check out TechRadar's list of the best virtual reality headsets for 2024.


TechRadar – All the latest technology news


ChatGPT use declines as users complain about ‘dumber’ answers, and the reason might be AI’s biggest threat for the future


Is ChatGPT old news already? It seems impossible, with the explosion of AI popularity seeping into every aspect of our lives – whether that’s forging digital masterpieces with the best AI art generators or helping us with our online shopping.

But despite being the leader in the AI arms race – and powering Microsoft’s Bing AI – it looks like ChatGPT might be losing momentum. According to SimilarWeb, traffic to OpenAI’s ChatGPT site dropped by almost 10% compared to last month, while metrics from Sensor Tower also demonstrated that downloads of the iOS app are in decline too.

As reported by Insider, paying users of the more powerful GPT-4 model (access to which is included in ChatGPT Plus) have been complaining on social media and OpenAI’s own forums about a dip in output quality from the chatbot.

The consensus was that GPT-4 could generate outputs faster, but at a lower level of quality. Peter Yang, a product lead for Roblox, took to Twitter to decry the bot’s recent work, claiming that “the quality seems worse”. One forum user said the recent GPT-4 experience felt “like driving a Ferrari for a month then suddenly it turns into a beaten up old pickup”.


Why is GPT-4 suddenly struggling?

Some users were even harsher, calling the bot “dumber” and “lazier” than before, with a lengthy thread on OpenAI’s forums filled with all manner of complaints. One user, ‘bitbytebit’, described it as “totally horrible now” and “braindead vs. before”.

According to users, there was a point a few weeks ago where GPT-4 became massively faster – but at a cost of performance. The AI community has speculated that this could be due to a shift in OpenAI’s design ethos behind the more powerful machine learning model – namely, breaking it up into multiple smaller models trained in specific areas, which can act in tandem to provide the same end result while being cheaper for OpenAI to run.
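The multi-model idea can be sketched in a few lines of code. This is a purely illustrative toy (the function names, keyword routing, and specialist split are all assumptions for demonstration, not OpenAI's actual architecture): a lightweight router dispatches each prompt to one of several smaller "specialist" models, which is generally cheaper to serve than running one monolithic model on every query.

```python
# Toy sketch of query routing across smaller specialist models.
# All names here are hypothetical; a production router would be a
# learned classifier, not keyword matching.

def code_model(prompt: str) -> str:
    return f"[code specialist] answer to: {prompt}"

def math_model(prompt: str) -> str:
    return f"[math specialist] answer to: {prompt}"

def general_model(prompt: str) -> str:
    return f"[general model] answer to: {prompt}"

# Map simple trigger keywords to the specialist that handles them.
SPECIALISTS = {
    "python": code_model,
    "equation": math_model,
}

def route(prompt: str) -> str:
    # Send the prompt to the first matching specialist,
    # falling back to the general-purpose model.
    for keyword, model in SPECIALISTS.items():
        if keyword in prompt.lower():
            return model(prompt)
    return general_model(prompt)

print(route("Write a Python function"))
```

The cost saving comes from only ever invoking one small model per query; the trade-off, as users seem to be noticing, is that the end result may not match the quality of a single large model.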

OpenAI has yet to officially confirm this is the case, as there has been no mention of such a major change to the way GPT-4 works. It’s a credible explanation according to industry experts like Sharon Zhou, CEO of AI-building company Lamini, who described the multi-model idea as the “natural next step” in developing GPT-4.

AIs eating AIs

However, there’s another pressing problem with ChatGPT that some users suspect could be the cause of the recent drop in performance – an issue that the AI industry seems largely unprepared to tackle.

If you’re not familiar with the term ‘AI cannibalism’, let me break it down in brief: large language models (LLMs) like ChatGPT and Google Bard scrape the public internet for data to be used when generating responses. In recent months, a veritable boom in AI-generated content online – including an unwanted torrent of AI-authored novels on Kindle Unlimited – means that LLMs are increasingly likely to scoop up materials that were already produced by an AI when hunting through the web for information.

[Image: an iPhone screen showing the OpenAI ChatGPT download page on the App Store. ChatGPT app downloads have slowed, indicating a decrease in overall public interest. (Image credit: Future)]

This runs the risk of creating a feedback loop, where AI models ‘learn’ from content that was itself AI-generated, resulting in a gradual decline in output coherence and quality. With numerous LLMs now available both to professionals and the wider public, the risk of AI cannibalism is becoming increasingly prevalent – especially since there’s yet to be any meaningful demonstration of how AI models might accurately differentiate between ‘real’ information and AI-generated content.
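That feedback loop can be illustrated with a toy simulation. The numbers here are made up for demonstration (the `fidelity` factor, the fixed AI share of the training pool, and the linear mixing are all simplifying assumptions, not measurements of any real model): each model generation trains on a pool that is part human-written, part AI-generated, and AI output is assumed to preserve only a fraction of its training data's quality.

```python
# Toy simulation of 'AI cannibalism': quality drifts downward when
# each generation trains partly on the previous generation's output.
# Assumptions: human-written text has quality 1.0, and AI output
# retains only `fidelity` of its training pool's quality.

def simulate(generations: int, ai_share: float, fidelity: float = 0.9) -> list[float]:
    quality = 1.0            # baseline: all human-written data
    history = [quality]
    for _ in range(generations):
        ai_quality = quality * fidelity  # AI output is slightly degraded
        # New training pool mixes pristine human data with AI output.
        quality = (1 - ai_share) * 1.0 + ai_share * ai_quality
        history.append(quality)
    return history

print(simulate(5, ai_share=0.5))
```

With any nonzero AI share, quality declines generation over generation; with an AI share of zero, it stays flat, which is why the ability to filter AI-generated content out of training data matters so much.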

Discussions around AI have largely focused on the risks it poses to society – for example, Facebook owner Meta recently declined to open up its new speech-generating AI to the public after it was deemed ‘too dangerous’ to be released. But content cannibalization is more of a risk to the future of AI itself; something that threatens to ruin the functionality of tools such as ChatGPT, which depend upon original human-made materials in order to learn and generate content.

Do you use ChatGPT or GPT-4? If you do, have you felt that there’s been a drop in quality recently, or have you simply lost interest in the chatbot? I’d love to hear from you on Twitter. With so many competitors now springing up, is it possible that OpenAI’s dominance might be coming to an end? 
