Early Apple Vision Pro testers complain about the headset’s weight

Apple’s Vision Pro may be too heavy for comfort, as a group of reporters complained of discomfort while wearing the headset during a recent hands-on demo.

On January 16, the company gave tech news sites Engadget and The Verge an opportunity to try out its upcoming device ahead of its release on February 2. The preview was largely positive, with Engadget’s Cherlynn Low commending the Vision Pro’s ability to create an immersive entertainment experience. But as Low states, “the best heads-up display in the world will be useless if it can’t be worn for a long time” – and that’s exactly the problem she ran into. Fifteen minutes into the demo, she began “to feel weighed down by the device”, with mild pain arriving soon after. This sentiment was echoed by The Verge’s Victoria Song, who felt the Vision Pro pushing down on her brow, resulting in a mild headache.

This issue has been known for some time now, with early testers complaining that the headset “feels too heavy” after wearing it for a couple of hours. TechRadar’s US Editor-in-Chief Lance Ulanoff, who has worn the Vision Pro a few times, admits that it “really needs [an] overhead strap” to support its weight. Fortunately, there is such a strap. It’s called the Dual Loop Band, and it adds a strap over the top of your head alongside the one around the back.

It’s unknown how much of a difference the Dual Loop Band makes, but it presumably worked well enough, as neither report went on to mention the weight as a problem again.

International release

Still, the underlying issue isn’t going away. It’s unlikely Apple will address it in time for the American launch in February, but it’s conceivable the company could make changes for the international release.

Notable industry insider Ming-Chi Kuo posted new details on the Vision Pro’s potential global roll-out to his Medium newsletter, claiming it might come out just before WWDC 2024 in June. At the developers’ event, Apple will also share information about visionOS with programmers to help promote a spatial computing ecosystem around the world.

There are a couple of things getting in the way of the global release.

First, there aren’t a lot of Vision Pro units to begin with, and Apple wants to make sure the US launch and subsequent roll-out go as smoothly as possible. Second, the company needs to adjust the headset’s software so it complies with international regulations. Kuo finishes his post by saying the faster these matters are addressed, “the sooner Vision Pro will be available in more countries”.

There’s no word on exactly which nations will be part of the initial group to get Apple’s shiny new gadget after the US launch. However, Bloomberg’s Mark Gurman claims the tech giant is considering Canada, China, and the UK as candidates to be among the first.

While we have you, check out TechRadar’s list of the best virtual reality headsets for 2024.

Apple tells developers NOT to use “virtual reality” when talking about Vision Pro

The Vision Pro will go on sale next month, and we’ve just learned that Apple has requested that app developers for visionOS (the operating system that runs on the headset) don’t refer to visionOS apps as “AR” or “VR”.

We first heard about Apple’s newest innovation in June 2023, when it was marketed as a spatial computer that combines digital content with the user’s physical surroundings. It’s also equipped with some serious Apple graphics hardware and visionOS, which Apple calls the “world’s first spatial computing system”.

At first glance, the Vision Pro certainly appears to be similar to existing Virtual Reality (VR) and Augmented Reality (AR) headsets, so it’s interesting that Apple is at pains to ensure that it isn’t mistaken for one. The de facto ban on AR and VR references (as well as Extended Reality (XR) and Mixed Reality (MR)) was spotted in the guidelines of the new Xcode (Apple’s suite of developer tools) update that came after the announcement that Vision Pro devices will be in stores in early February.

Vision Pro (Image credit: Apple)

Apple lays down the law

This recommendation is pretty explicitly laid out on a new Apple Developer page, which goes through what a developer needs to do to prepare their app for submission to the App Store.

Apple insists that developers will also have to use the “visionOS” branding beginning with a lowercase “v” (similar to how it brands its flagship operating system for desktop and laptop devices, macOS), and to use the device’s full name, “Apple Vision Pro,” when referring to it. These aren’t as unexpected as Apple’s more notable instructions to avoid VR and AR, however. According to Apple, visionOS apps should not be referred to as VR, XR, or MR apps, but as “spatial computing apps”.

It’s an interesting move for a number of reasons. Coining a new term can be confusing, and users will have to build familiarity with it and actually use it for it to stick – but it also means that Apple can differentiate itself from the pack of AR/VR devices out there.

It’s also a pivot from messaging that until now has relied on existing terms like augmented reality and virtual reality. Most of Apple’s current marketing refers to the Vision Pro as a “spatial computing” platform, but at the Worldwide Developers Conference (WWDC) in 2023, Apple’s annual event for Apple platform developers, CEO Tim Cook introduced the Vision Pro as an “entirely new AR platform.” Materially, this is mainly a marketing and branding move as Apple becomes more confident in its customers’ understanding of what the Vision Pro actually is. 9to5Mac reports that Apple engineers referred to visionOS as xrOS in the lead-up to the device’s official announcement.

Apple Vision Pro VR headset (Image credit: Future / Lance Ulanoff)

Apple charts its own course

This pointed effort to stand apart from its competitors is an understandable move from Apple, considering that some other tech giants have already attempted to dominate this space.

Meta, Facebook and Instagram’s parent company, is one of the most notable examples – you might have a not-so-distant memory of a certain “metaverse”. The metaverse saw a reception most would call lukewarm, even at its peak, and Apple is making a bold attempt to build its own association in people’s minds, with Apple’s VP of global marketing Greg Joswiak dismissing the word “metaverse” as one he’ll “never use”, according to 9to5Mac.

I enjoy watching Apple make bolder moves into existing markets because it’s often when we’ve seen new industry standards emerge, which is always exciting – no matter whether you want to call it AR, VR, or spatial computing. 

6 new things we’ve learned about the Apple Vision Pro as its first video ad lands

We’ve had quite the wait for the Apple Vision Pro, considering it was unveiled back in June at Apple’s annual WWDC event. Yesterday we finally got the news that the Vision Pro will go on sale on Friday, February 2, with preorders opening on Friday, January 19 – and some other new bits of information have now emerged, alongside its first video ad.

As Apple goes into full sales mode for this pricey mixed reality headset, it’s answering some of the remaining questions we had about the device, and giving us a better idea of what it’s capable of. Considering one of these will cost you $3,499 (about £2,750 / AU$5,225) and up, you’re no doubt going to want all the details you can get.

Here at TechRadar we've already had some hands-on time with the Vision Pro, and checked out how 3D spatial videos will look on it (which got a firm thumbs up). Here's what else we've found out about the Vision Pro over the last 24 hours.

1. Apple thinks it deserves to be in a sci-fi movie

Take a look at this brand-new advert for the Apple Vision Pro and see how many famous movies you can name. There’s a definite sci-fi angle here, with films like Back to the Future and Star Wars included, and Apple clearly wants to emphasize the futuristic nature of the device (and make strapping something to your face seem cool rather than nerdy).

If you've got a good memory then you might remember that one of the first adverts for the iPhone also made use of short clips cut from a multitude of films, featuring stars such as Marilyn Monroe, Michael Douglas, and Steve McQueen. Some 16 years on, Apple is once again using the power of the movies to push a next-gen piece of hardware.

2. The battery won't last for the whole of Oppenheimer

Apple Vision Pro (Image credit: Apple)

Speaking of movies, you're going to need a recharge if you want to watch all of Oppenheimer on the Apple Vision Pro. Christopher Nolan's epic film runs for three hours and one minute, whereas the Vision Pro product page (via MacRumors) puts battery life at 2.5 hours for watching 2D videos.

That's when you're watching a video in the Apple TV app, and in one of the virtual environments that the Vision Pro is able to conjure up. Interestingly, the product page text saying that the device could run indefinitely as long as it was plugged into a power source has now been quietly removed.

3. The software is still a work in progress

Preorders for the Vision Pro open this month (Image credit: Apple)

Considering the high price of the Apple Vision Pro, and talk of limited availability, this doesn't really feel like a mainstream device that Apple is expecting everyone to go out and buy. It's certainly no iPhone or Apple Watch – though a cheaper Vision Pro, rumored to be in the pipeline, could certainly change that dynamic somewhat.

With that in mind, the software still seems to be a work in progress. As 9to5Mac spotted in the official Vision Pro press release, the Persona feature is going to have a beta label attached for the time being – that's where you're represented in video calls by a 3D digital avatar that doesn't have a bulky mixed reality headset strapped on.

4. Here's what you'll be getting in the box

Apple Vision Pro (Image credit: Apple)

As per the official press release from Apple, if you put down the money for a Vision Pro then you’ll get two different bands to choose from and wrap around your head: the Solo Knit Band and the Dual Loop Band, though it’s not immediately clear how the two differ.

Also in the box we’ve got a light seal, two light seal cushions, what’s described as an “Apple Vision Pro Cover” for the front of the headset, an external battery pack, a USB-C charging cable, a USB-C power adapter, and the accessory that we’ve all been wanting to see included – an official Apple polishing cloth.

5. Apple could release an app to help you fit the headset

Two hands holding the Apple Vision Pro headset (Image credit: Apple)

When it comes to fitting the Apple Vision Pro snugly to your head, we think that Apple might encourage buyers to head to a physical store so that they can be helped out by an expert. However, it would seem that Apple also has plans for making sure you get the best possible fit at home.

As spotted by Patently Apple, a new patent filed by Apple mentions a “fit guidance” system inside an iPhone app. It will apparently work with “head-mountable devices” – very much like the Vision Pro – and looks to be designed to ensure that the user experience isn’t spoiled by a badly fitted headset.

6. There'll be plenty of content to watch

A person views an image on a virtual screen while wearing an Apple Vision Pro headset (Image credit: Apple)

Another little nugget from the Apple Vision Pro press release is that users will be able to access “more than 150 3D titles with incredible depth”, all through the Apple TV app. Apple is also introducing a new Immersive Video format, which promises 180-degree, three-dimensional videos in 8K quality.

This 3D video could end up being one of the most compelling reasons to buy an Apple Vision Pro – we were certainly impressed when we got to try it out for ourselves, and you can even record your own spatial video for playing back on the headset if you've got an iPhone 15 Pro or an iPhone 15 Pro Max.

Two Windows 11 apps are being ditched – one you might miss, and another you’ve probably forgotten about

Microsoft is dropping two of the core apps which are installed with Windows 11 by default.

As of Windows 11 preview build 26020 (which has just been unleashed in the Canary channel), the WordPad and People apps have been given the elbow.

Technically, while the People app itself is being dispensed with, that’s because its functionality (or at least much of it) is being transferred to Outlook for Windows, the new default mailbox app for Windows 11 devices (as of the start of 2024).

In short, you’ll still get the People app (contacts) in that mailbox client, but there’ll no longer be an actual People application that can be fired up separately.

WordPad, on the other hand, is being removed entirely – or rather it will be, when the changes made in this preview build come to the release version of Windows 11.

Going forward from then, any clean installation of Windows 11 won’t have WordPad, and eventually, this app will be removed when users upgrade to a new version of Microsoft’s OS.

You won’t be able to reinstall WordPad once it has gone, either, so this will be a final farewell to the application, which was marked as a deprecated feature back in September 2023.

Also in build 26020, a raft of additions for Voice Access has strengthened Windows 11 on the accessibility front (as seen elsewhere in testing last month). On top of that, Narrator now has natural voices for 10 new locales (in preview), including English (UK) and English (India), as well as the following: Chinese, Spanish (Spain), Spanish (Mexico), Japanese, French, Portuguese, German and Korean.

Furthermore, when the energy saver feature is enabled on a desktop PC (a machine that’s plugged in, rather than running on battery), a new icon is present in the system tray (far-right of the taskbar) to indicate it’s running and saving you a bit of power.

For the full list of changes, check out Microsoft’s blog post for the build.


Analysis: Word up

One thing to clarify here is not to confuse WordPad with Notepad, or Microsoft Word for that matter.

Word is the heavyweight word processor in Microsoft 365 (the suite formerly known as Office), and not a default app. Both WordPad and Notepad are currently default apps in Windows 11, but Notepad is staying firmly put – indeed Microsoft is busy improving this piece of software (adding an autosave feature most recently).

Notepad remains a useful, highly streamlined and much-liked app for jotting notes and the like, whereas WordPad is kind of a ‘lite’ version of Word – a bit more complex in nature, but nothing like as full-featured as Word itself.

WordPad falls between two stools in that respect, and another reason Microsoft may have decided to drop the app is potential security risks (or that was a theory floating around last year, when the software was deprecated).

Even so, there are some folks who will miss WordPad, and with no option to reinstall, they’ll just have to look for a different lightweight word processor for Windows 11 – fortunately, we explore some good alternatives right here.

Microsoft is getting desperate for more Bing users – but this annoying Edge pop-up is definitely not the way to go about it

It seems Microsoft is once again up to its old tricks of trying to push people into using its products, and this time the play is to persuade Edge users to switch their search engine to Bing.

As Windows Central spotted, developer Brad Sams (of Stardock fame) brought our attention to Microsoft’s latest bout of “anti-user behavior” in a post on X (formerly Twitter).

Sams uses the Edge browser, but was prompted to switch to Bing as the default search engine rather than Google, as shown in the screenshot he shared.

This is not the first time Microsoft has promoted Bing in such a manner, alongside driving other services including Edge itself and OneDrive. (Search for a new browser in Edge, for example, and you’ll get a banner telling you there’s no need to download a different web browser, along with the various reasons why).

The Bing search engine continues to struggle for market share against the might of Google, with Microsoft’s creation securing only 3.2% of the market as of November 2023, according to Statcounter.


Analysis: Bing headway – or lack of it

Microsoft hoped that Bing Chat, its AI chatbot now renamed Copilot, would help to swell the ranks of Bing search users when it launched early this year – but as we can see, that hasn’t happened. The Bing search engine had a 3% share at the beginning of 2023 going by Statcounter’s figures, so it has notched that up by just 0.2 percentage points over the course of the year – a pretty minuscule uptick.

It’s safe to say, then, that the AI angle has not panned out for Bing search, although Microsoft has now started thinking about what its various products can do for Copilot, rather than what the chatbot can do for those products. (Witness the debut of Copilot in Windows 10, driving user numbers of the AI forward, rather than keeping Copilot as a carrot to drive migration to Windows 11).

At any rate, whatever piece of Microsoft’s vast jigsaw of products and services we’re talking about, we don’t want to see prompts in Edge, or Windows 11, or anywhere else, trying to twist the arms of users to switch to another Microsoft creation.

And fair enough, Google does this kind of thing too, pushing Chrome and its own search – but not as often as Microsoft, in our experience. Can we please lay off the various prompts for 2024, Microsoft? Because if anything, throughout 2023 they seemed to become more prevalent again.

Google working on an AI assistant that could answer ‘impossible’ questions about you

Google is reportedly developing an AI assistant that will analyze personal photos and files, as well as Search results, with the goal of telling “your life story”.

This news comes from CNBC, which saw documents revealing that the tech giant recently held an “internal summit” where company execs and employees presented Project Ellman. According to the piece, the AI will offer a “bird’s-eye view” of someone’s life by grabbing files from your Google Account, and by utilizing written biographies and adjacent content to understand context. This process includes sifting through those files to pinpoint important moments. The Google employees claimed Project Ellman could deduce the day a user was born, who their parents are, and whether they have any siblings.

It doesn’t stop there because, apparently, it’s able to highlight chapters in your life, like the years you spent at college or living in a certain city. Ellman can even learn your eating habits: if, for example, you upload a bunch of photos of pizza and pasta, the AI can infer that perhaps you’re a big fan of Italian food. The tech isn’t restricted to one person either, as it can identify friends and family, plus social events you’ve been to.

Based on the report’s description alone, Project Ellman sounds very reminiscent of Memories on Google Photos, although on a much wider scale. 

Personal chatbot

CNBC states the presentation continued with a demonstration of Ellman Chat, which was described as being like ChatGPT, but with the ability to “answer previously impossible questions”. Judging by the examples given, the questions aren’t necessarily impossible; just tricky, especially if you’re a forgetful person. For instance, you can ask the chatbot the last time your brother visited, or for suggestions on a location you could move to based on the pictures you upload.

Then we get to what may be one of Project Ellman’s secret purposes. By analyzing the screenshots users upload, the tech can make all sorts of predictions – from products you might buy to interests you may have and future travel plans. The presenters also pitched the idea that it could learn what websites you frequent.

Project Ellman may know you better than you know yourself.

Analysis: All about you

We don’t think we have to tell you just how creepy all this sounds. We’re talking about an AI diving deep into your files, scrounging for every bit of data it can grab. Where is all that information going? 

Gemini, Google’s new large language model (LLM), is implied to be the model that’ll power Project Ellman because it’s multimodal – in other words, it can accept multiple forms of input besides text. Generative AIs need a constant stream of content to stay up to date, and it seems like Google might be pole-vaulting over privacy boundaries, seeking more data to feed Gemini and keep it growing.

Granted, there’s no guarantee Ellman will ever see the light of day. A Google spokesperson told CNBC this is all an “early internal exploration”. If there are plans for a release, developers will take the time to ensure it’s helpful to people while keeping user privacy at the forefront. 

We urge you to take this statement with a grain of salt. Despite its supposed best efforts, the company has a storied history of privacy issues, and it gets into plenty of trouble for them – just look at the Wikipedia page on the topic; it’s huge.

Hopefully, this is all overblown and the tech giant doesn’t launch a digital vacuum cleaner sucking up everything.

If you’re looking for ways to start protecting your data, check out TechRadar’s list of the best ad blockers for 2023.

What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race earlier this year and launched a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician, and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like the model behind ChatGPT, it’s a type of machine learning system called a ‘large language model’ that’s been trained on a vast dataset and is capable of understanding human language as it’s written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen (Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it’s “excited” about, but also risks that explain why the search giant has been so cautious about releasing Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google’s Bard do for you, and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here’s everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models', in this case one called LaMDA. 

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen (Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multimodal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn?”

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds.

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image (Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will support the Content Authenticity Initiative’s open-source Content Credentials technology, which will bring transparency to images that are generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, the chatbot is based on similar technology to ChatGPT, with even more tools and features coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus (Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn’t connected to the internet – unless you use a third-party plugin. That means it has very limited knowledge of facts or events after January 2022.

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn’t indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google’s Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

Users can do the same in the other direction: Bard can work with Google Lens, letting you upload images that Bard will respond to in text. Multimodal functionality was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there’s also Google Bard’s upcoming AI image generator, which will be powered by Adobe Firefly.

Excited for Apple’s Vision Pro? Forget that, rumors have started about how the sequel will be better

Apple is rumored to be considering changes to the next version of the Vision Pro – still some way off, given the first-gen model is yet to launch, of course – centered on slimming down the headset’s size and weight.

In Mark Gurman’s latest Power On newsletter (for Bloomberg), the well-known Apple leaker told us that the company is mulling some notable improvements for the next-gen Vision Pro on the comfort front.

Gurman observes that, with some testers expressing concerns about neck strain due to the weight of the headset, Apple wants to make the next-gen device both lighter and more compact.

This may be a key focus for the next iteration of the Vision Pro, as Apple fears that the weight of the incoming first device “could turn off consumers already wary of mixed-reality headsets,” Gurman asserts. The Vision Pro can feel too heavy for some folks, even during shorter periods of use, we’re told.

Reducing the weight of the next-gen Vision Pro is the priority, by the sounds of it, with any size reduction likely to be much less noticeable (and harder to achieve).

As 9to5Mac, which spotted this, further points out, Apple actually already made the incoming first-gen headset more compact – with a trade-off. Namely, the design doesn’t leave room for people who wear prescription glasses to fit those in.

So, that creates a separate issue in catering to spectacle wearers, and Apple’s solution is to implement a system of prescription lenses that magnetically attach to the headset’s 4K displays.

That’s not ideal, though, for a lot of reasons. It’s a headache for retailers in terms of stocking the huge number of lens prescriptions they’ll have to deal with – having to find the right one for a glasses wearer not just if they’re buying, but also if they’re simply wanting to try out the headset.

Another obvious downside is that the owner’s glasses prescription may well change in the future (ours certainly does, repeatedly), so again, there’s the hassle of having to get new lenses for your Vision Pro too.

It seems Apple is mulling the idea of shipping custom-built headsets directly with the correct prescription lenses preinstalled, but there could be problems with that, as well.

Gurman noted: “First, built-in prescription lenses could make Apple a health provider of sorts. The company may not want to deal with that. Also, that level of customization would make it harder for consumers to share a headset or resell it.”

Whether that whole thorny nest of glasses-related issues can be tackled with the Vision Pro 2, well, we’ll just have to see.

Apple Vision Pro (Image credit: Apple)

Analysis: Long-term vision for success

So, it seems like the weight of the Vision Pro might be an issue, going by early testing feedback. That said, in his try-out session, our editor-in-chief found the headset “relatively comfortable” and so wasn’t critical on that front. But 9to5Mac’s writer observed that while shorter sessions are likely to be fine, they could “absolutely see getting tired of wearing [the headset] after extended sessions.”

This may vary from person to person somewhat, it’s true, but it sounds like if Apple is indeed planning to make the next-gen headset lighter, the firm is recognizing that things in this department are less than ideal.

At any rate, while it’s good to hear this, we’ll only really know how the Vision Pro shapes up on the comfort front when it comes to full review time.

For us, though, the most uncomfortable part of the Vision Pro experience is the price. Even just looking at that price tag makes our hearts heavy, as we won’t ever be able to afford the thing.

At $3,500 in the US (around £2,900, AU$5,500) – and remember, the prescription lenses will add to that bill, especially if you need multiple lenses for different family members – the Vision Pro is just too rich for our blood. We just can’t see that price flying with consumers when Apple’s headset hits the shelves next year in the US (in theory early in 2024).

That’s especially true with mixed reality and VR headsets in general being a niche enough prospect as it is. Indeed, Meta’s Quest 3 is so, so much more affordable in comparison, and for the money represents a great buy.

It’s not like Apple doesn’t realize all this, of course, and we’ve already heard chatter on the grapevine about how a cheaper Vision Pro model might be inbound – which more than any other improvement, would be fantastic to see.

Meta AI is coming to your social media apps – and I’ve already forgotten about ChatGPT

Meta is going all out on artificial intelligence, first developing its own version of ChatGPT as well as implementing Instagram’s AI ‘personas’ to appeal to a younger audience. Now, the company has announced a new AI image generation and editing feature during its Connect event, which will be coming to Instagram soon.

If you’re familiar with OpenAI’s ChatGPT or Google’s Bard, Meta AI will feel very familiar to you. The general-purpose assistant can help with all sorts of planning and organizational tasks, and will now offer the ability to generate images via the prompt ‘/imagine’.

You’ll also be able to show Meta AI on Instagram a photo you wish to post and ask it to apply a watercolour effect, make the image black and white, and so on. Think of the Meta assistant as a more ‘social’ version of ChatGPT, baked right into your social media apps.

Alongside the assistant, the initial roster of 28 AI characters is beginning to roll out across the company’s messaging apps. Most of these characters are based on celebrities like Kendall Jenner, Mr. Beast, Paris Hilton and my personal favourite, Snoop Dogg! You can chat with these ‘personas’ directly and finally ask Paris what lipgloss she uses. As you chat with the characters, their profile image will animate based on the topic of conversation, which is pretty cool considering chatting with most AI chatbots is kind of… boring, at least from a visual standpoint.

ChatGPT may have started it, but Meta could finish it

It’s clear that Meta is taking AI integration very seriously, and I love to see it! By integrating its virtual assistant and AI tools into the apps billions of people use every day, it’s guaranteed an existing user base – and in my opinion, it shows that the company has taken the time to really understand why users would approach its product.

Instead of just unleashing an assistant that will give you recipes and do your homework, it looks like Meta AI is tailored to suit everyday purposes, which feels like a really clever way to implement the tool in people’s lives. The assistant is right there in the app if and when you need it, so you don’t have to leave what you’re doing to engage with it.

Meta’s huge scale of potential users gives it a good chance of being the first AI assistant many people ever use – and the one they end up relying on day-to-day. There’s no extra app to download, no account to make, and no swiping away from your conversation to get to what you need. I think Meta made a smart choice taking its time, and it has now come out of the gate swinging – I really do think ChatGPT creator OpenAI should be a little bit worried.

Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave like a generative AI chatbot, providing real-time information as well as understanding nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There’s a lot to the Alexa update besides the LLM, as the assistant will also be receiving a raft of new features. Below are the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s SoundCloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like next year when the update launches. You can hear the new voice enunciate a lot better, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech, knowing what you’re talking about even if you don’t provide every minute detail.

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in the house. Or you can tell the AI it’s too bright, and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines for specific times of the day, and you won’t need a smartphone to configure them – it can all be done on the fly.

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control (Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, which allows people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” The feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK, to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more than] 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein.

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.
