Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a fast-charge to get you up to speed on everything Google Gemini, plug into our easily digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app in particular is a big deal, because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images – a skill Bard picked up last week, shortly before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative and less accurate, struggles with multi-step questions, can't really code, and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) to make them both more appealing (and, most likely, stickier too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.

Don’t know what’s good about Copilot Pro? Windows 11 users might soon find out, as Microsoft is testing Copilot ads for the OS

Windows 11 might be getting ads for Copilot Pro – or at least, it seems this possibility is being explored in testing right now.

Copilot Pro, for those who missed it, was recently revealed as Microsoft’s powered-up version of the AI assistant that you have to pay for (via a monthly subscription). And if you haven’t heard about it, well, you might do soon via the Settings panel in Windows 11.

PhantomOfEarth on X (formerly Twitter) spotted the new move from Microsoft, with the introduction of a card for Copilot Pro on the Home page of the Settings app. It provides a brief explanation of what the service is alongside links to find out more (or to get a subscription there and then).

Note that the leaker had to dig around to uncover the Copilot Pro advert, and it was only displayed after messing about with a configuration tool (in Dev and Beta builds). However, two other Windows 11 testers in the Beta channel have responded to say that they have this Copilot Pro card present without doing anything.

In other words, taking those reports at face value, it seems this Copilot Pro ad is on some kind of limited rollout to some testers. At any rate, it’s certainly present in the background of Windows 11 (Beta and Dev) and can be enabled.


Analysis: Adding more ads

The theory, then, is that this will be appearing more broadly to testers, before following with a rollout to everyone using Windows 11. Of course, ideas in testing can be abandoned, particularly if they get criticized a lot, so we’ll just have to watch this space (or rather, the space on the Home page of Settings).

Does it seem likely Microsoft will try to push ahead with a Copilot Pro advert? Yes, it does, frankly. Microsoft isn’t shy about promoting its own services within its products, that’s for sure. Furthermore, AI is set to become a huge part of the Windows 11 experience, and other Microsoft products for that matter, so monetizing it is going to be a priority in all likelihood.

So, a nudge to raise the profile of the paid version of Copilot seems likely, if not inevitable. Better that it’s tucked away in Settings, we guess, than somewhere more in-your-face like the Start menu.

If you’re wondering what benefits Copilot Pro confers, they include faster performance and responses, along with more customization and options – but this shouldn’t take anything away from the free version of Copilot (or it doesn’t yet, anyway). What it does mean is that the very latest upgrades will likely be reserved for the Pro AI, as we’ve seen initially with GPT-4 Turbo coming to Copilot Pro and not the basic free Copilot.

Via Neowin

Samsung XR/VR headset – everything we know so far and what we want to see

We know for certain that a new Samsung XR/VR headset is in the works, with the device being made in partnership with Google. But much of the XR product’s details (XR, or extended reality, is a catchall for virtual, augmented, and mixed reality) are still shrouded in mystery. 

This so-called Apple Vision Pro rival (an XR headset from Apple) will likely have impressive specs – Qualcomm has confirmed its new Snapdragon XR2 Plus Gen 2 chip will be in the headset, and Samsung Display-made screens will probably be featured. It'll also likely have an equally premium price tag. Unfortunately, until Samsung says anything officially, we won’t know exactly how much it will cost, or when it will be released.

But using the few tidbits of official info, as well as our industry knowledge and the rumors out there, we can make some educated guesses that can clue you into the Samsung XR/VR headset’s potential price, release date, and specs – and we’ve got them down below. We’ve also highlighted a few of the features we’d like to see when it’s eventually unveiled to the public.

Samsung XR/VR headset: Price

The Samsung Gear VR headset on a red desk

The Samsung Gear VR: you needed a phone to operate it (Image credit: Samsung)

We won’t know how much Samsung and Google’s new VR headset will cost until the device is officially announced, but most rumors point to it boasting premium specs – so expect a premium price.

Some early reports suggested Samsung was looking at something in the $1,000 / £1,000 / AU$1,500 range (just like the Meta Quest Pro), though it may have changed its plans. After the Apple Vision Pro reveal, it’s believed Samsung delayed the device, most likely to make it a better Vision Pro rival in its eyes – the Vision Pro is impressive, as you can find out from our hands-on Apple Vision Pro review.

If that’s the case, the VR gadget might not only match the Vision Pro’s specs more closely, it might adopt the Vision Pro’s $3,499 (about £2,725 / AU$5,230) starting price too, or something close to it.

Samsung XR/VR headset: Release date

Much like its price, we don’t know anything concrete about the incoming Samsung VR headset's release date yet. But a few signs point to a 2024 announcement – if not a 2024 release.

Firstly, there was the teaser Samsung revealed in February 2023, when it said it was partnering with Google to develop an XR headset. It didn’t set a date for when we’d hear more, but Samsung likely wouldn’t have made this teasing announcement if the project was still a long way from finished. Usually, a fuller reveal happens a year or so after the teaser – so around February 2024.

There was a rumor that Samsung’s VR headset project was delayed after the Vision Pro announcement, though the source maintained that the headset would still arrive in 2024 – just mid-to-late 2024, rather than February.

Three people on stage at Samsung Unpacked 2023 teasing Samsung's future of XR

The Samsung Unpacked 2023 XR headset teaser (Image credit: Samsung)

Then there’s the Snapdragon XR2 Plus Gen 2 chipset announcement. Qualcomm was keen to highlight Samsung and Google as partners that would be putting the chipset to use. 

It would be odd to highlight these partners if the headset was still a year or so from launching. Those partners may have preferred to work with a later next-gen chip if the XR/VR headset was due in 2025 or later. So this again points to a 2024 reveal, even if we don’t have a precise date this year.

Lastly, there have also been suggestions that the Samsung VR headset might arrive alongside the Galaxy Z Flip 6 – Samsung's folding phone that's also due to arrive in 2024.

Samsung XR/VR headset: Specs

A lot of the new Samsung VR headset’s specs are still a mystery. We can assume it’ll use Samsung-made displays (it would be wild if Samsung used screens from one of its competitors) but the type of display tech (for example, QLED, OLED or LCD), resolution, and size are still unknown.

We also don’t know what size battery it’ll have, or its storage space, or its RAM. Nor what design it will adopt – will it look like the Vision Pro with an external display, like the Meta Quest 3 or Quest Pro, or something all-new?

Key Snapdragon XR2 Plus Gen 2 specs, including support for 4.3K displays, 8x better AI performance, and 2.5x better GPU performance

(Image credit: Qualcomm)

But we do know one thing. It’ll run (as we predicted) on a brand-new Snapdragon XR2 Plus Gen 2 chip from Qualcomm – an updated version of the chipset used by the Meta Quest Pro, and slightly more powerful than the XR2 Gen 2 found in the Meta Quest 3.

The upshot is that this platform can now support two displays at 4.3K resolution running at up to 90fps. It can also manage over 12 separate camera inputs that VR headsets will rely on for tracking – including controllers, objects in the space, and face movements – and it has more advanced AI capabilities, 2.5x better GPU performance, and Wi-Fi 7 (as well as 6 and 6E).

What we want to see from the new Samsung XR/VR headset

1. Samsung’s XR/VR headset to run on the Quest OS 

Girl wearing Meta Quest 3 headset interacting with a jungle playset

We’d love to see the best Quest apps on Samsung’s VR headset (Image credit: Meta)

This is very much a pipe dream. With Google and Samsung already collaborating on the project it’s unlikely they’d want to bring in a third party – especially if this headset is intended to compete with Apple and Meta hardware.

But the Quest platform is just so good; by far the best we’ve seen on standalone VR headsets. It’s clean, feature-packed, and home to the best library of VR games and apps out there. The only platform that maybe beats it is Steam, but that’s only for people who want to be tethered to a PC rig.

By partnering with Meta, Samsung’s headset would get all of these benefits, and Meta would have the opportunity to establish its OS as the Windows or Android of the spatial computing space – which might help its Reality Labs division to generate some much-needed revenue by licensing the platform to other headset manufacturers.

2. A (relatively) affordable price tag

Oculus Quest 2 on a white background

The Quest 2 is the king of VR headsets, because it’s affordable  (Image credit: Shutterstock / Boumen Japet)

There’s only been one successful mainstream VR headset so far: the Oculus Quest 2. The Meta-made device has accounted for the vast, vast majority of VR headset sales over the past few years (eclipsing the total lifetime sales of all previous Oculus VR headsets combined in just five months) and that’s down to one thing: it’s so darn cheap.

Other factors (like a pandemic that forced everyone inside) probably helped a bit. But fundamentally, getting a solid VR headset for $299 / £299 / AU$479 is a very attractive offer. It could be better specs-wise, but it’s more than good enough and offers significantly more bang for your buck than the PC-VR rigs and alternative standalone headsets that set you back over $1,000 when you factor in everything you need.

Meta’s Quest Pro – the first headset it launched after the Quest 2, with a much more premium $999 / £999 / AU$1,729 price (it launched at $1,500 / £1,500 / AU$2,450) – has seemingly sold significantly worse. We don’t have exact figures, but the Steam Hardware Survey for December 2023 shows that while 37.87% of Steam VR players use a Quest 2 (making it the most popular option, with more than double the share of the next headset), only 0.44% use a Quest Pro – roughly 86 times fewer (37.87 ÷ 0.44 ≈ 86).

The Apple Vision Pro headset on a grey background

The Apple Vision Pro is too pricey (Image credit: Apple)

So by making its headset affordable, Samsung would likely be in a win-win situation. We win because its headset isn’t ridiculously pricey like the $3,499 (around £2,800 / AU$5,300) Apple Vision Pro. Samsung wins because its headset has the best chance of selling super well.

We’ll have to wait and see what’s announced by Samsung, but we suspect we’ll be disappointed on the price front – a factor that could keep this device from becoming one of the best VR headsets out there.

3. Controllers and space for glasses 

We’ve combined two smaller points into one for this last ‘what we want to see’.

Hand tracking is neat, but ideally it’ll just be an optional feature on the upcoming Samsung VR headset rather than the only way to operate it – which is the case with the Vision Pro. 

Most VR apps are designed with controllers in mind, and because most headsets now come with handsets that have similar button layouts it’s a lot easier to port software to different systems. 

Meta Quest 3 controllers floating in a turquoise colored void.

The Meta Quest 3’s controllers are excellent – copy these, Samsung (Image credit: Meta)

There are still challenges, but if your control scheme doesn’t need to be reinvented, developers have told us that’s a massive time-saver. So having controllers with this standard layout could help Samsung get a solid library of games and apps on its system by making it easier for developers to bring their software to it.

We’d also like it to be easy for glasses wearers to use the new Samsung VR headset. The Vision Pro’s prescription lenses solution is needlessly pricey when headsets like the Quest 2 and Quest 3 have a free in-built solution for the problem – an optional spacer, or a way to slightly extend the headset so it’s further from your face, leaving room for specs.

Ideally, Samsung’s VR headset will have a similarly free and easy solution for glasses wearers.

We may finally know when Apple’s Vision Pro will launch – but big questions remain

If you’re an Apple fan and your new year’s resolution is to save your money this January, we’ve got some bad news: a new rumor says Apple’s Vision Pro headset will go on sale in just a few weeks’ time. However – and perhaps fortunately for your finances – there are some serious questions floating around the rumor.

The mooted January launch date comes from Wall Street Insights, a news outlet for Chinese investors (via MacRumors). According to a machine-translated version of the report, “Apple Vision Pro is expected to be launched in the United States on January 27, 2024.”

The report adds that “Supply chain information shows that Sony is currently the first supplier of silicon-based OLEDs for the first-generation Vision Pro, and the second supplier is from a Chinese company, which will be the key to whether Vision Pro can expand its production capacity.”

With the supposed launch date just 25 days away, it might not be long before we see Apple’s most significant new product in years. Yet, despite the apparent certainty in the report, there are reasons to be skeptical about its accuracy.

Date uncertainty

For one thing, January 27 is a Saturday, an unlikely day for an Apple product launch. It could be that Wall Street Insights is referring to January 27 in China which, thanks to time zone differences, aligns with Friday January 26 in the United States. That’s a much more probable release date, as it doesn't coincide with the weekend, when many of the media outlets that would cover the Vision Pro will be providing reduced news coverage. Yet the report specifically mentions the date in the US, meaning that questions remain.

Moving past the specific date, an early 2024 launch date has been put forward by a number of reputable Apple analysts. Ming-Chi Kuo, for example, has suggested a late January or early February timeframe, while Bloomberg reporter Mark Gurman has zeroed in on February as the release month.

Either way, it’s clear that the Vision Pro is almost upon us. Apple has reportedly been training retail staff how to use the device, which implies that the company is almost ready to pull the trigger.

We’ll see how accurate the Wall Street Insights report is in a few weeks’ time. Regardless of whether or not it has the correct date, we’re undoubtedly on the brink of seeing Apple’s most anticipated new product in recent memory.

What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race earlier this year, launching a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like the model behind ChatGPT, it's a type of machine learning model called a 'large language model' that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious to release Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models' – initially one called LaMDA, and now PaLM 2.

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn?”

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds. 

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.
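
As a purely illustrative sketch of that coding ability (the prompt and the code below are our own example, not Bard's actual output), a request like “write a Python function that checks whether a word is a palindrome” might come back with something along these lines:

def is_palindrome(word: str) -> bool:
    """Return True if the word reads the same forwards and backwards."""
    cleaned = word.lower()  # normalize case so 'Level' still counts
    return cleaned == cleaned[::-1]

print(is_palindrome("racecar"))  # True
print(is_palindrome("piano"))    # False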

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will use the Content Authenticity Initiative's open-source Content Credentials technology to bring transparency to images generated through the integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, the chatbot is based on similar technology to ChatGPT, and even more tools and features are coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has a very limited knowledge of facts or events after January 2022. 

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date. 

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google's Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

Users can also do the same: Bard works with Google Lens, so you can upload images into Bard and have it respond in text. Multimodal functionality was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there's also Google Bard's AI image generator, which will be powered by Adobe Firefly.

Amazon announces Alexa AI – 5 things you need to know about the voice assistant

During a recent live event, Amazon revealed Alexa will be getting a major upgrade as the company plans on implementing a new large language model (LLM) into the tech assistant.

The tech giant is seeking to improve Alexa’s capabilities by making it “more intuitive, intelligent, and useful”. The LLM will allow it to behave similarly to a generative AI in order to provide real-time information as well as understand nuances in speech. Amazon says its developers sought to make the user experience less robotic.

There's more to the Alexa update than the LLM, though, as the assistant will also be receiving a host of new features. Below is a list of the five things you absolutely need to know about Alexa’s future.

1. Natural conversations

In what may be the most impactful change, Amazon is making a number of improvements to Alexa’s voice in an effort to make it sound more fluid. It will lack the robotic intonation people are familiar with. 

You can listen to the huge difference in quality on the company’s Soundcloud page. The first sample showcases the voice Alexa has had for the past decade or so since it first launched. The second clip is what it’ll sound like next year when the update launches. You can hear the new voice enunciate a lot better, with more apparent emotion behind it.

2. Understanding context

Having an AI that understands context is important because it makes the process of issuing commands easier. Moving forward, Alexa will be able to better understand nuances in speech. It will know what you’re talking about even if you don’t provide every minute detail. 

Users can issue vague commands – like saying “Alexa, I’m cold” to have the assistant turn up the heat in your house. Or you can tell the AI it’s too bright in the room and it will automatically dim the lights only in that specific room.

3. Improved smart home control

In the same vein of understanding context, “Alexa will be able to process multiple smart home requests.” You can create routines at specific times of the day plus you won’t need a smartphone to configure them. It can all be done on the fly. 

You can command the assistant to turn off the lights, lower the blinds in the house, and tell the kids to get ready for bed at 9 pm. It will perform those steps in that order, on the dot. Users also won’t need to repeat Alexa’s name over and over for every little command.

Amazon Alexa smart home control

(Image credit: Amazon)

4. New accessibility features 

Amazon will be introducing a variety of accessibility features for customers who have “hearing, speech, or mobility disabilities.” The one that caught our interest was Eye Gaze, allowing people to perform a series of pre-set actions just by looking at their device. Actions include playing music or sending messages to contacts. Eye Gaze will, however, be limited to Fire Max 11 tablets in the US, UK, Germany, and Japan at launch.

There is also Call Translation, which, as the name suggests, will translate languages in audio and video calls in real-time. In addition to acting as an interpreter, this tool is said to help deaf people “communicate remotely more easily.” This feature will be available to Echo Show and Alexa app users across eight countries (the US, Mexico, and the UK just to mention a few) in 10 languages, including English, Spanish, and German.

5. Content creation

Since the new Alexa will operate on LLM technology, it will be capable of light content creation via skills. 

Through the Character.AI tool, users can engage in “human-like voice conversations with [more] than 25 unique Characters.” You can chat with specific archetypes, from a fitness coach to famous people like Albert Einstein. 

Music production will be possible, too, via Splash. Through voice commands, Splash can create a track according to your specifications. You can then customize the song further by adding a vocal track or by changing genres.

It’s unknown exactly when the Alexa upgrade will launch. Amazon says everything you see here and more will come out in 2024. We have reached out for clarification and will update this story if we learn anything new.

You know what my Oculus Quest 2 setup needs? More weird controller attachments

When I think of controller attachments, I instantly imagine the crappy Wii remote add-ons I had as a kid. 

At first, I loved them – I wouldn’t touch Wii Sports unless my controller looked like a tennis racket or golf club – but over time, I came to despise them. The cheap plastic constructions would always break after a few uses, and they objectively made playing games harder because they’d block the sensor on the end of the remote.

I’ve recently had the chance to try out the HelloReal Grip-to-putter with my Oculus Quest 2 and Meta Quest Pro VR headsets, and it’s reopened my eyes to the immersion that accessories can bring to virtual reality – whether it’s gaming, working out, or just plain working.

My Grip-to-putter thoughts 

The HelloReal Grip-to-putter is a golf club controller attachment and the perfect companion for Walkabout Mini Golf – one of my favorite VR experiences.

The HelloReal Grip-to-putter with an Oculus Quest 2 controller on one end and a colorful grip at the other

(Image credit: Future)

You slot your Quest 2 or Quest Pro controller into the open end, where it sits snugly – and for additional assurance that your handset won’t fly off when you swing the club, HelloReal has included instructions on securing it using the controller’s wrist straps. Once it’s in place, you can boot up your favorite VR golfing app and enjoy swinging a club that feels much more like the real thing than your controller ever did.

The Grip-to-putter gets its name from the grip-to-putt feature in Walkabout Mini Golf. When this setting is switched on in the app’s menu your club’s end will vanish until you hold down the side grip button on the controller. This lets you get a few practice swings without the risk of accidentally hitting your virtual ball before you’re ready. 

HelloReal’s attachment includes a contraption that will hold down the controller’s grip button when you press the trigger that sits just above the padded end. While playing Walkabout with the putter took a little getting used to – because the mechanics are a bit different with the add-on – I found that it made the whole experience significantly more immersive.

Have to feel to believe 

As you can see from the images included above, the HelloReal putter looks nothing like a golf club beyond the fact it’s vaguely pole-shaped. But it doesn’t matter what the add-on looks like, just what it feels like – and HelloReal has got the golf club feeling down to a tee. The padded grip and weight distribution of the putter are perfect. 

A closeup of the HelloReal Grip-to-putter with an Oculus Quest 2 attached and a black claw pressing down the grip button

(Image credit: Future)

Once I slipped my headset on, I fully believed I was holding a real golf club. And this got me thinking – I need more realistic feeling VR accessories to use at home.

Inspired by the Wii’s heyday, I can already imagine some of the VR gaming accessories I could get, such as attachments that mimic the feel of swords and axes or sporting-inspired add-ons for VR fishing and tennis.

For the VR fitness fans out there, wouldn’t it be great to get a weighted club attachment that makes your Supernatural workout a little tougher? Maybe someday, we could get a boxing glove-inspired accessory that brings Litesport VR and other boxing workouts to life.

While working in the metaverse, perhaps we could use blank slates and styluses that make us believe we’re writing on paper when taking virtual notes. OK, this add-on is a little bleak, but if metaverse working is inevitable, it might make it more enjoyable than I found it before – I much prefer traditional pen and paper to using a keyboard. It would also feel more real than the controller styluses Meta includes in the Quest Pro’s box, which enable you to write in VR, albeit clunkily. If you didn’t realize the styluses were in the box, it might be because they’re tiny and exceptionally easy to lose.

Cost and effect 

These sorts of realism-boosting accessories are already deployed by commercial VR experiences you can find in some malls and theme parks to great effect – but they do admittedly have a downside if you want to bring them home. Cost.

A VR player running around on an Omni One VR treadmill

The Omni One VR treadmill is a next-level VR accessory (Image credit: Virtuix)

Different add-ons have different prices, with gadgets like the Omni One VR treadmill at the ‘ridiculous’ (over $2,500, around £2,000 / AU$3,900) end, and accessories like the Grip-to-putter at a more reasonable $58.99 (around £46 / AU$91). Admittedly, $58.99 still isn’t ‘cheap’, but if you plan to use your VR accessory a lot, you'll likely feel it offers solid bang for your buck. 

So if you are a VR power user – or even just pick it up once a week – and have been weighing up buying a few accessories for it, then I’d say go for it (provided they’re good quality). Burnt by the Wii, I’d been instantly dismissing every add-on as a gimmick, but after trying the HelloReal putter, I’ve been scouring the internet for other weird goodies I could pick up to improve my VR setup.

Meta Quest 3: price, design, and everything we know so far

The Oculus Quest 3, now officially known as Meta Quest 3, has been announced, and we won't have to wait much longer to get our hands on the highly-anticipated VR headset. 

Meta's more budget-friendly Quest 3 headset – separate from the pricier, premium Meta Quest Pro – will be more expensive than the Oculus Quest 2. That's to be expected for a new 2023 headset, and to soften the blow Meta has lowered the price of the Quest 2, so you can at least try VR on a tighter budget, even if it's not with the latest and greatest hardware. And the Quest 3 will be the greatest, with Meta calling it its “most powerful headset” yet.

There's plenty of new and official information on the Meta Quest 3 now, and it's certainly shaping up to be a contender when it comes to topping our list of the best VR headsets you can buy. For now, read on to learn everything we know about the upcoming headset.

Meta Quest 3: What you need to know

  • What is the Meta Quest 3? Meta’s follow-up VR headset to the Quest 2
  • Meta Quest 3 release date: “Fall” 2023, but most likely September or October
  • Meta Quest 3 specs: We don’t have all the details, but Meta says it’s its “most powerful headset” yet
  • Meta Quest 3 design: Similar to the Quest 2 but slimmer and the controllers lack tracking rings

Meta Quest 3 price

So far, Meta has only confirmed the price of the Quest 3's 128GB model, which will retail for $499 / £499. The official announcement page also makes mention of “an additional storage option for those who want some extra space.”

We're not sure exactly how much extra storage this upgraded model will provide, but a total of 256GB seems very likely. No price for this model has been announced either, but expect something in the region of $599 / £599, or potentially up to $699 / £699.

Meta Quest 3 release date

The Quest 3 has been confirmed to launch in “Fall 2023,” which means, barring any delays, we should see it launch somewhere between the months of September and December 2023. Meta's official page for the Quest 3 states that further details will be revealed at the Meta Connect event happening on September 27.

Given the release schedule of the Meta Quest Pro, which was detailed at Meta Connect 2022 and launched soon after, we expect that the Meta Quest 3 will launch shortly after the Connect event, in September or October. We'll have to wait and see what Meta decides though.

Oculus Quest 3 specs and features

Meta Quest 3 floating next to its handsets

(Image credit: Meta )

We now have official information, straight from Meta's mouth, about the specs and features we can expect for Meta Quest 3. It was always a safe bet that the headset would continue to be a standalone device, and that's certainly remained true. You won't need a PC or external device to play the best VR games out there.

The most immediate improvement for Quest 3 is a high-fidelity color passthrough feature, which should allow you to view your immediate surroundings via a high-quality camera. Not only will this help you plan out your playing space, but it should also help augmented and mixed reality experiences become even more immersive.

Quest 3 has also been confirmed to sport a 40% slimmer optic profile over the last-gen Quest 2. That'll reduce the weight of the device and should allow for comfier play sessions overall. Similarly, its Touch Plus controllers have been reworked with a more ergonomic design. Other improvements in this area include enhanced hand tracking and controller haptic feedback, similar to the DualSense wireless controller for PS5.

Meta Quest 3's front exploding outwards, revealing all of its internal parts

(Image credit: Meta )

It's been speculated that the Quest 3 will adopt uOLED displays (micro-OLED, an upgraded take on OLED), though we've also seen conflicting reports hinting at regular OLED and mini-LED displays instead. What analysts seem to agree on is that some kind of visual enhancement will come to the Quest 3 – so expect improved display quality and higher resolutions.

So far, Meta's own details remain vague on this front. We know that Quest 3 will feature a higher resolution display than Quest 2, paired with pancake lenses for greater image clarity and an overall reduction in device weight. These lenses should also improve the display of motion, hopefully reducing motion sickness and the dreaded image ghosting effect that plagues many a VR headset, even the PSVR 2.

Lastly, Meta has confirmed that the Quest 3 will be powered by the latest Snapdragon chipset from Qualcomm. In Meta's own words, the new chipset “delivers more than twice the graphical performance as the previous generation Snapdragon GPU in Quest 2.” We should expect a pretty significant leap in visual quality, then.

Oculus Quest 3 – what we’d like to see

In our Oculus Quest 2 review, it was hard to find fault with a VR headset that proved immersive, comfortable and easy to use. And yet, while it clearly leads the pack in the VR market, it still falls foul of some of the pitfalls that the technology as a whole suffers from. Here’s a list of updates we want to see on the Oculus Quest 3:

Improved motion sickness prevention
One of those technological pitfalls, and perhaps an unavoidable one, is the motion sickness that can often ensue when using any VR headset. Depending on your tolerance for whirring and blurring, the Quest 2 can be one helluva dizziness-inducer. While there isn’t yet a clear path to making any VR headset immune to user dizziness, it’s nonetheless something we’d like to see improved on the Oculus Quest 3.

A better fit
The same goes for the fit of the device. While the Quest 2 is indeed a comfortable weight when on the head, it can still be a little claustrophobic to achieve a good, tight fit. Again, it’s a problem encountered by almost all VR headsets, and a base-level issue that the next generation of hardware should at least attempt to better address. Those aforementioned design rumors suggest the new Oculus device could solve some of these issues.

Improved Oculus Store
Other improvements we’d like to see include a more effective in-VR Oculus Store. While the equivalent store on browser and in the app makes it easy to discover new releases and search for upcoming games, the store inside the headset itself seems to roll the dice on what apps are shown with no way to quickly navigate to new content. This makes it difficult to pre-order games and discover new titles to purchase when using the device, which is a pivotal part of ensuring the headset maintains replayability.

A virtual hand pointing at the Quest store menu using the Quest's hand-tracking

(Image credit: Oculus / Facebook)

A neighborhood-like social space 
While the Quest 2 has a competent party invitation system to get you game-to-game with your friends, there isn’t a social space to engage with others in-between. It would be interesting to see the Quest 3 introduce a virtual social space, in the same vein as NBA 2K’s neighborhood area, to share some downtime with others. What’s with the multi-person furniture in the current home environment if there’s nobody to share it with? Luckily, Meta's new metaverse project – ambitious as it seems – suggests virtual social spaces will be at the forefront of all future Quest headsets.

Improved media sharing
Sharing screenshots and videos on Oculus devices has never been easy, and it’s an issue that the Quest 2 has tried to address with a few video updates. The process could still be more streamlined, so we'd like to see the Quest 3 make the whole deal more accessible. 1080p video, app integration, proper audio syncing – that’d all be golden.

Apple Vision Pro price, release date and everything we know about the VR headset

The Apple Vision Pro is one of the biggest tech announcements of recent years – and with the dust still settling on the tech giant's first AR/VR headset, many questions remain. What's the Vision Pro's actual release date? What do we know about its specs? And how will you use it if you wear glasses?

Apple Vision Pro specs

– Mixed reality headset
– Dual M2 and R1 chip setup
– 4K resolution per eye
– No controllers, uses hand tracking and voice inputs
– External battery pack
– Two-hour battery life
– Starts at $3,499 (around £2,800 / AU$5,300)
– Runs on visionOS

We've rounded up the answers to those questions and more in this guide to everything we know (so far) about the Apple Vision Pro. You can also read our hands-on Apple Vision Pro review for a more experiential sense of what it's like to wear the headset. 

Now that visionOS, the headset's operating system, is in the hands of developers, a bigger picture is forming of exactly how this “spatial computer” (as Apple calls it) will work and fit into our lives.

Still, actually using the Vision Pro as a next-gen Mac, TV, FaceTime companion and more is a long way off. It'll cost $3,499 (around £2,800 / AU$5,300) when it arrives “early next year”, and that'll only be in the US initially.

Clearly, the Vision Pro is a first-generation, long-term platform that is going to take a long time to reach fruition. But the journey there is definitely going to be fun as more of its mysteries are uncovered – so here's everything we know about Apple's AR/VR headset so far.

Apple Vision Pro: what you need to know

Vision Pro release date: Sometime “early next year” according to Apple.

Vision Pro headset price: Starts at $3,499 (around £2,800 / AU$5,300).

Vision Pro headset specs: Apple's headset uses two chipsets, an M2 and a new R1, to handle regular software and its XR capabilities respectively. It also has dual 4K displays.

Vision Pro headset design: The Vision Pro has a similar design to other VR headsets, with a front panel that covers your eyes, and an elastic strap. One change from the norm is that it has an outer display to show the wearer's eyes.

Vision Pro headset battery life: It lasts for up to two hours on a full charge using the official external battery pack.

Vision Pro headset controllers: There are no controllers – instead you'll use your eyes, hands, and voice to control its visionOS software.

Apple Vision Pro: price and release date

Apple says the Vision Pro will “start” at $3,499 (that's around £2,800 / AU$5,300). That wording suggests that more expensive options will be available, but right now we don't know what those higher-priced headsets might offer over the standard model.

As for the Vision Pro's release date, Apple has only given a vague “early next year.” That's later than we'd been expecting, with leaks suggesting it would launch in the next few months – perhaps around the same time as the iPhone 15 – but that isn't the case. As 2024 gets closer we expect Apple will give us an update on when we'll be able to strap a Vision Pro onto our heads.

Interestingly, Apple's website only mentions a US release. Apple has yet to confirm if the Vision Pro will launch in regions outside of the US, and when that'll happen.

Apple Vision Pro: design

The Apple Vision Pro shares a lot of similarities with the current crop of best VR headsets. It has a large face panel that covers your eyes, and is secured to your head with a strap made from elasticated fabric, plastic and padding.

But rather than the similarities, let's focus on the Vision Pro's unique design features.

The biggest difference VR veterans will notice is that the Vision Pro doesn't have a battery; instead, it relies on an external battery pack. This is a sort of evolution of the HTC Vive XR Elite's design, which allowed the headset to go from being a headset with a battery in its strap to a battery-less pair of glasses that relies on external power.

Vision Pro

(Image credit: Apple)

This battery pack will provide roughly two hours of use on a full charge according to Apple, and is small enough to fit in the wearer's pocket. It'll connect to the headset via a cable, which is a tad unseemly by Apple’s usual design standards, but what this choice lacks in style it should make up for in comfort. 

We found the Meta Quest Pro to be really comfy, but wearing it for extended periods of time can put a strain on your neck – just ask our writer who wore the Quest Pro for work for a whole week.

Apple Vision Pro VR headset's battery pack on a table

The Vision Pro’s battery pack (Image credit: Future / Lance Ulanoff)

If you buy a Vision Pro you'll find that your box lacks something needed for other VR headsets: controllers. That's because the Vision Pro relies solely on tracking your hand and eye movements, as well as voice inputs, to control its apps and experiences. It'll pick up these inputs using its array of 12 cameras, five sensors, and six microphones.

The last design detail of note is the Vision Pro's Eyesight display. It looks pretty odd, maybe even a bit creepy, but we're reserving judgment until we've had a chance to try it out.

Apple Vision Pro's Eyesight feature showing you the wearer's eyes.

Eyesight in action (Image credit: Apple)

When a Vision Pro wearer is using AR features and can see the real world, nearby people will see their eyes 'through' the headset's front panel (it's actually a screen showing a camera view of the eyes, but based on Apple's images you might be convinced it's a simple plane of glass). If they're fully immersed in an experience, onlookers will instead see a cloud of color to signify that they're exploring another world.

Apple Vision Pro: specs and features

As the rumors had suggested, the Apple Vision Pro headset will come with some impressive specs to justify its sky-high price.

First, the Vision Pro will use two chipsets to power its experiences. One is an M2 chip, the same one you'll find in the Apple iPad Pro (2022) and some of the best MacBooks and Macs.

This powerful processor will handle the apps and software you're running on the Vision Pro. Meanwhile, the R1 chipset will deal with the mixed reality side of things, processing the immersive elements of the Vision Pro that turn it from a glorified wearable Mac display to an immersive “spatial computer”.

Apple Vision Pro

(Image credit: Apple)

On top of these chips, the Vision Pro has crisp 4K micro-OLED displays – one per eye – that offer roughly 23 million pixels between them. According to Apple, the Vision Pro's display fits 64 pixels into the same space that the iPhone's screen fits a single pixel, and this could eliminate the annoying screen-door effect that affects other VR headsets.

This effect occurs when you're close enough to a screen to see the gaps between the pixels in the array; the higher the pixel density, the closer you can get before the screen-door effect becomes noticeable.
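To put rough numbers on Apple's claim (these are our back-of-the-envelope assumptions, not official specs): 64 pixels in the area of one iPhone pixel implies an 8 x 8 grid, so eight times the linear pixel density of an iPhone screen. Assuming a typical iPhone panel of around 460 pixels per inch, the sums work out like this:

    # Back-of-the-envelope density estimate - our assumptions, not Apple's specs
    import math

    iphone_ppi = 460              # assumed: roughly a recent iPhone's pixel density
    pixels_per_iphone_pixel = 64  # Apple's claim for the Vision Pro's displays

    linear_factor = math.sqrt(pixels_per_iphone_pixel)  # 64 pixels = an 8 x 8 grid
    implied_ppi = iphone_ppi * linear_factor
    print(f"Implied Vision Pro density: ~{implied_ppi:,.0f} ppi")  # -> ~3,680 ppi

At anything like that density, the gaps between pixels should be far too small to pick out at the distance a headset's lenses sit from its screens.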

These components will allow you to run an array of Apple software through Apple's new visionOS platform (not xrOS as was rumored). This includes immersive photos and videos, custom-made Disney Plus experiences, and productivity apps like Keynote.

You'll also be able to play over 100 Apple Arcade titles on a virtual screen that's like your own private movie theater.

Apple visionOS app screen

(Image credit: Apple)

You'll be able to connect your Vision Pro headset to a Mac via Bluetooth. When using this feature you'll be able to access your Mac apps and see your screen on a large immersive display, and it'll sit alongside other Vision Pro apps you're using. Apple says this setup will help you be more productive than you've ever been.

With the power of the M2 chip, Apple's headset should be able to run most Mac apps natively – Final Cut Pro and Logic Pro recently arrived on M2 iPads. For now, however, Apple hasn't revealed if these and other apps will be available natively on the Vision Pro, or if you'll need a Mac to unlock the headset's full potential. We expect these details will be revealed nearer to the headset's 2024 launch.

Apple Vision Pro: your questions answered

We've answered all of the basic questions about the Apple Vision Pro's release date, price, specs and more above, but you may understandably still have more specific – or broader – questions.

To help, we've taken all of the most popular Vision Pro questions from Google and social media and answered them in a nutshell below.

Apple Vision Pro

(Image credit: Apple)

What is the point of Apple Vision Pro?

Apple says that the point of the Vision Pro is to introduce a “new era of spatial computing”. It’s a standalone, wearable computer that aims to deliver new experiences for watching TV, working, reliving digital memories, and remotely collaborating with people in apps like FaceTime.

But it’s still early days, and there arguably isn’t yet a single ‘point’ to the Vision Pro. At launch, it’ll be able to do things like give you a huge, portable monitor for your Apple laptop, or create a cinematic experience at home in apps like Disney Plus. However, like the first Apple Watch, it’ll be up to developers and users to define the big new use cases for the Vision Pro.

Apple Vision Pro

(Image credit: Apple)

How much does an Apple Vision Pro cost?

The Apple Vision Pro will cost $3,499 when it goes on sale in the US “early next year”. It won’t be available in other countries until “later next year”, but that price converts to around £2,815 / AU$5,290.
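For the curious, here’s the arithmetic behind those conversions – the exchange rates below are our rough approximations at the time of writing, not official regional pricing:

    # Rough price conversion - rates are our approximations, not official pricing
    usd_price = 3499
    usd_to_gbp = 0.805  # assumed USD-to-GBP exchange rate
    usd_to_aud = 1.512  # assumed USD-to-AUD exchange rate

    print(f"~£{usd_price * usd_to_gbp:,.0f} / ~AU${usd_price * usd_to_aud:,.0f}")
    # -> ~£2,817 / ~AU$5,290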

This makes the Vision Pro a lot more expensive than rival headsets. The Meta Quest Pro was recently given a price drop to $999 / £999 / AU$1,729, and cheaper, less capable VR-only headsets, like the incoming Meta Quest 3, will sell for $499 / £499 / AU$829. Still, none of those offers a direct comparison to the kind of technology inside the Vision Pro.

Apple Vision Pro

(Image credit: Apple)

Does Apple Vision Pro work with glasses?

The Apple Vision Pro does work for those who wear glasses, although there are some things to be aware of. If you wear glasses you won’t wear them with the headset; instead, you’ll need to buy separate optical inserts that attach magnetically to the Vision Pro’s lenses. Apple hasn’t yet announced pricing for these, stating only that “vision correction accessories are sold separately”.

Apple says it’ll offer a range of vision correction strengths that won’t compromise the display quality or the headset’s eye-tracking performance. But it also warns that “not all prescriptions are supported” and that a “valid prescription is required”. So while the Vision Pro does work well for glasses wearers, there are some potential downsides.

A virtual display hovering above an Apple MacBook

(Image credit: Apple)

Is Apple Vision Pro a standalone device?

The Apple Vision Pro is a standalone device with its own visionOS operating system and doesn’t need an iPhone or MacBook to run. This is why Apple calls the headset a “spatial computer”.

That said, having an iPhone or MacBook alongside a Vision Pro will bring some benefits. For example, to create a personalized spatial audio profile for the headset’s audio pods, you’ll need an iPhone with a TrueDepth camera. 

The Vision Pro will also give MacBook owners a large virtual display that hovers above their real screen, an experience that won’t be available on other laptops. So while you don’t need any other Apple devices to use the Vision Pro, owning other Apple-made tech will help maximize the experience.

The merging of an Apple TV menu and a real room in the Apple Vision Pro

(Image credit: Apple)

Is Apple Vision Pro VR or AR?

The Apple Vision Pro offers both VR and AR experiences, even if Apple doesn’t use those terms to describe them. Instead, Apple says it creates “spatial experiences” that “blend the digital and physical worlds”. You can control how much you see of both using its Digital Crown on the side.

Turning the Digital Crown lets you control how immersed you are in a particular app, either revealing the real world behind an app’s digital overlays or extending what Apple calls ‘environments’ – virtual settings that spread across and beyond your physical room, for example giving you a view over a virtual lake.

While some of the examples shown by Apple look like traditional VR, the majority lean towards augmented reality, combining your real-world environment (captured by the Vision Pro’s full-color passthrough system) with its digital overlays.

Apple Vision Pro

(Image credit: Apple)

Is Apple Vision Pro see-through?

The front of the Apple Vision Pro isn’t see-through or fully transparent, even though a feature called EyeSight creates that impression. The front of the headset is made from laminated glass, but behind that glass is an outward-facing OLED screen.

It’s this screen that will show a real-time view of your eyes (captured by the cameras inside the headset) to the outside world when you’re in augmented reality mode. If you’re enjoying a fully immersive, VR-like experience, such as watching a movie, this screen will instead show a Siri-like graphic.

To let you see out through the headset, the Apple Vision Pro has a passthrough system that uses cameras on the outside of the goggles to give you a real-time, color feed of your environment. So while the headset feels like it’s see-through, your view of the real world is digital.

The Apple Vision Pro headset on a black background

(Image credit: Apple)

How does Vision Pro work?

The Apple Vision Pro uses a combination of cameras, sensors, and microphones to create a controller-free computing experience that you control using your hands, eyes, and voice.

The headset’s forward-facing cameras capture the real world in front of you, so the view can be shown on its two internal displays (Apple says these give you “more pixels than a 4K TV for each eye”). The Vision Pro’s side and downward-facing cameras also track your hand movements, so you can control it with your hands – for example, touching your thumb and forefinger together to click.

But the really unique thing about the Vision Pro is its eye-tracking, which is powered by a group of infrared cameras and LED illuminators inside the headset. This means you can simply look at app icons or even smaller details to highlight them, then use your fingers to select them or your voice to type.


Windows 11 has a hidden ‘emergency restart’ feature you probably don’t know about

Windows 11 has an ‘emergency restart’ feature that’s tucked away, and you’ve likely never seen it, but the function could come in handy if your PC freezes up.

Indeed, this option has been hidden deep in the restart machinery of Microsoft’s OS since Windows Vista, apparently (so yes, it’s in Windows 10 as well as 11, and all the other outdated incarnations of Windows going back to the big ‘V’).

What exactly does this feature do? It reboots your PC when all has gone awry, with the warning: “Click OK to immediately restart. Any unsaved data will be lost. Use this only as a last resort.”

Can’t you just reboot your PC anyway, using the Start menu (Power button)? Indeed you can, and that’s the way to go normally. But the emergency restart option is for situations where the interface has partly fallen over – your system has frozen and the Start menu is unresponsive, or a crashed app is interfering with the reboot process and stalling it.

In such cases, as Betanews discovered, you can press CTRL + ALT + DELETE and then – here’s the clever bit – hold down the CTRL key and click the Power button at the bottom-right of the screen (the icon that’s a little circle with a line at 12 o’clock).

That will put you into the Emergency Restart screen, with the message mentioned above, so you can then click OK and an emergency reboot will be performed.
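And if even that screen won’t appear, but you can still get a command to run (via Task Manager’s ‘Run new task’ option, say), Windows’ built-in shutdown tool can force a similarly abrupt restart. Here’s a minimal sketch – this is our suggested fallback rather than the hidden Emergency Restart screen itself, and note that the /f flag force-closes apps, so unsaved work will be lost:

    # Forces an immediate restart via Windows' built-in shutdown tool.
    # Our suggested fallback - not the hidden Emergency Restart screen itself.
    import subprocess

    # /r = restart, /f = force-close running apps (unsaved work is lost), /t 0 = no delay
    subprocess.run(["shutdown", "/r", "/f", "/t", "0"], check=True)

You can type the same shutdown /r /f /t 0 command directly into the ‘Run new task’ box, too – the Python wrapper is just for illustration.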


Analysis: A useful extra escape route – but not without risks

This is a pretty cool trick to have up your sleeve, because if you can’t trigger a normal reboot (via the Start menu) for whatever reason – including a crashed application messing that option up, as mentioned – you can (hopefully) still access this emergency restart.

Now, Microsoft only advises using it as a last resort (which is maybe why the feature isn’t documented) because it’s a short, sharp reboot that skips the pleasantries a normal restart performs – all the housekeeping that really should be done before the system shuts down. It quickly kills everything and restarts without safeguards, and that comes with some risks (data corruption being the most obvious potential peril).

However, and this is the key bit, it’s still a (somewhat) safer option than physically powering off your PC when it has locked up (by pressing the reset button, if your computer has one, or holding down the power button – or simply yanking out the plug, which is the real last resort).

So, if you can’t reboot any other way, this is a useful last-ditch method to know about. Of course, if your PC has frozen to the extent that even CTRL+ALT+DELETE doesn’t do anything, then you’ll have no choice but to turn to the power switch (or plug).

While we’re on the subject of cool Windows 11 shortcuts you might not know about, here’s another one we were reminded of on Twitter this morning. As Jen Gentleman, Senior Program Manager at Microsoft, points out, any time you want live captions to appear in a game or when watching a video (if the source content doesn’t have its own captions), just press the Windows key + CTRL + L to swiftly turn them on.

Via PC Gamer, PC World
