Apple starts work on macOS 16 – and it sounds like a bigger deal than a MacBook Pro redesign

While we’re eagerly awaiting the public release of new operating systems like iOS 18 and macOS 15 (Sequoia) later this year, it seems like Apple has already begun work on macOS 16 (and iOS 19, for that matter). This fresh rumor, coupled with whispers of a MacBook Pro refresh for later in 2024, has us buzzing about the future of Apple’s best tech.

Reputable leaker and industry commentator Mark Gurman noted in his most recent ‘Power On’ newsletter (for Bloomberg) that Apple has started development of all its major operating systems for 2025, meaning macOS 16, iOS 19, watchOS 12, and visionOS 3.

Mind you, we’d expect Apple to be kicking off work on next year’s big software refreshes at this point, though it’s still exciting to hear that development of macOS 16 is underway. It’s too early even to speculate about what next year’s version of macOS could look like, and Gurman doesn’t drop any hints as to possible features, but if Sequoia has shown us anything, it’s that we’re in for another big, AI-driven refresh.

Indeed, by the time we get to 2025, we wonder whether Apple might be planning to incorporate AI in a much bigger way with macOS 16, maybe bringing in features that will change the way we use our Mac devices entirely! Given the pace of development in the world of artificial intelligence, this can’t be ruled out.

New MacBooks on the horizon?

Software aside, as for the future of Mac hardware, we’re already hearing rumors about the M4 refresh due to happen with Apple’s Mac lineup, with some reports speculating that the MacBook Pro could be the first Mac in line for the new chip (which is only in the iPad Pro right now).

According to Gurman’s earlier reports, we may only see the MacBook Pro 14-inch base model get an M4 refresh this year (with the vanilla M4), with the other models (with M4 Pro and Max) only debuting early in the following year. Furthermore, we’re not likely to get any major hardware changes to Apple’s MacBook ranges for the next couple of years, and it sounds like the big move with the MacBook Pro – when it gets OLED, which is likely to be a good time for a full redesign – may not happen until 2026.

So, Apple might feel the need to make up for only ushering in minor improvements on the hardware front by taking a big leap on the software front – meaning a much-improved macOS 16 (most likely with lots of fresh AI powers, as mentioned). Take all this as the speculation it very much is, mind you.


These new AI smart glasses are like getting a second pair of ChatGPT-powered eyes

The Ray-Ban Meta glasses have a new rival for the title of best smart glasses, with the new Solos AirGo Visions letting you quiz ChatGPT about the objects and people you're looking at.

Unlike previous Solos glasses, the AirGo Visions boast a built-in camera and support for OpenAI's latest GPT-4o model. These let the glasses identify what you're looking at and respond to voice prompts. For example, you could simply ask, “What am I looking at?” or give the AirGo Visions a more specific request like “Give me directions to the Eiffel Tower.”
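
Solos hasn't published its developer integration, but functionally a query like this maps onto OpenAI's public vision API: send a camera frame plus a question to GPT-4o and read back the answer. Here's a minimal, hypothetical Python sketch of that flow (the file name and prompt are placeholders, and the glasses' real pipeline may well differ):

```python
# Hypothetical sketch: send a camera frame to GPT-4o with a question,
# the kind of request a "what am I looking at?" feature would make.
# Assumes OPENAI_API_KEY is set in the environment.
import base64

from openai import OpenAI

client = OpenAI()

# Placeholder for a frame captured by the glasses' camera.
with open("camera_frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What am I looking at?"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```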

Another neat feature of the new Solos glasses is their modular frame design, which means you can change some parts – for example, the camera or lenses – to help them suit different situations. These additional frames start from $89 (around £70 / AU$135).

If talking to a pair of camera-equipped smart glasses is a little too creepy, you can also use the camera to simply take holiday snaps. The AirGo Visions also feature built-in speakers to answer your questions or play music.

While there's no official price or release date for the full version of the AirGo Visions, Solos will release a version without the camera for $249 (around £200 / AU$375) in July. That means we can expect a camera-equipped pair to cost at least as much as the Ray-Ban Meta glasses, which will set you back $299 / £299 / AU$449.

How good are AI-powered smart glasses?

While we haven't yet tried the Solos AirGo Visions, it's fair to say that smart glasses with AI assistants are a work in progress. 

TechRadar's Senior Staff Writer Hamish Hector recently tried Meta AI's 'Look and Ask' feature on his Ray-Ban smart glasses and found the experience to be mixed. He stated that “the AI is – when it works – fairly handy,” but that “it wasn’t 100% perfect, struggling at times due to its camera limitations and an overload of information.”

The smart glasses failed in some tests, like identifying trees, but their ability to quickly summarize a confusing, information-packed sign about the area’s parking restrictions showed how useful they can be in some situations.

As always, with any AI-powered responses, you'll want to corroborate any answers to filter out errors and so-called hallucinations. But there's undoubtedly some potential in the concept, particularly for travelers or anyone who is visually impaired.

The Solos AirGo Visions' support for OpenAI's latest GPT-4o model should make for an interesting comparison with the Ray-Ban Meta smart glasses when the camera-equipped version lands. Until then, you can check out our guide to the best smart glasses you can buy right now.


Windows 11 cleans house as long-standing apps like WordPad and Cortana get the axe in new preview build

Microsoft is gearing up to roll out a pretty substantial update for Windows 11 – version 24H2 – which is currently making its way through the final stages of testing. According to recent reports, it will see the end of several long-standing Microsoft products, including Cortana and WordPad, along with a few of Windows 11’s other old features.

There is a provisional list of Windows features that are in the process of being deprecated on the official Microsoft Learn site, although not all of them have a confirmed date. However, Swedish tech news site SweClockers has shared that the first 24H2 release candidate is now available via the Windows Insider Program. The final version that will roll out to all Windows 11 users is expected to arrive in September or October.

We wrote about the announcements of WordPad’s and Cortana’s deprecation a while back, with Cortana giving way to Microsoft’s new all-purpose digital AI assistant, Copilot. Tips is another app that’s going to be absent in this build, along with Steps Recorder, a built-in Windows assistance tool that records a user’s actions step by step to help with troubleshooting their device.

These are just some of the apps and features being sent to the Microsoft Graveyard, but the preview release candidate build also brings new features, as detailed in an official Windows Blogs post. This includes HDR background support, the ability to create 7-zip and TAR archives directly in File Explorer, and improvements to Bluetooth connectivity for certain devices. 

Copilot is also getting a ramp-up in this update, with the dedicated app rolling out to all Windows 11 users. It will also grant users the ability to move, resize, and snap the Copilot window. 

Screenshot of Windows Copilot features (Image credit: Microsoft)

Reflecting on bygones and Windows 11's future

Cortana wasn’t the biggest hit with Windows users and I doubt many will miss it, but there was a pretty vocal response from users who lamented the news that WordPad was on its way out. WordPad is a basic text editor that’s been a default application on Windows devices since the 90s, and many people have grown fond of it, especially as an increasing number of familiar apps have become more complex and been injected with often-unwanted AI features. 

If enough people continue to voice their thoughts and positive sentiments about WordPad, we might see it return as an optional download from the Microsoft Store – much like what happened with the Paint app, which has since gone on to have a second life. PC Gamer speculates that for most of these apps and features, Cortana excepted, Microsoft simply doesn’t want to keep up their maintenance and would prefer to dedicate those resources elsewhere – a move that might see more users take up Microsoft 365 subscriptions.

Some of these features and apps, like Steps Recorder, won’t be especially missed by me, but I do personally hope that Microsoft reconsiders giving WordPad a permanent chop. It would be an easy win that would remind users that Microsoft doesn’t completely plug its ears when it comes to users’ opinions and that it’s still willing to leave things that aren’t broken – even if they’re not the biggest money makers. 


Hardly any of us are using AI tools like ChatGPT, study says – here’s why

If you're feeling a bit overwhelmed or left behind by ChatGPT and other AI tools, fear not – a big new international study has found that most of us aren't using generative AI tools on a regular basis.

The study from the Reuters Institute and Oxford University (via the BBC), which surveyed over 12,000 people across six countries, seemingly reveals how little of that AI hype has percolated down to real-world use, for now.

Even among the people who have used generative AI tools like ChatGPT, Google Gemini or Microsoft Copilot, a large proportion said they'd only used them “once or twice”. Only a tiny minority (7% in the US, 2% in the UK) said they use the most well-known AI tool, ChatGPT, on a daily basis.

A significant proportion of respondents in all countries (including 47% in the US, and 42% in the UK) hadn't even heard of ChatGPT, a figure that was much higher for other AI apps. But after ChatGPT, the most recognized tools were Google Gemini, Microsoft Copilot, Snapchat My AI, Meta AI, Bing AI and YouChat.

Trailing further behind those in terms of recognition were generative AI imagery tools like Midjourney, plus Claude and xAI's Grok for X (formerly Twitter). But while regular use of generative AI tools is low, the survey does provide some interesting insights into what the early dabblers are using them for.

This table from the survey shows answers to the question: “You said you have used a generative AI chatbot or tool. Which, if any, of the following have you tried to use it for (even if it didn’t work)?” (Image credit: Reuters Institute and Oxford University)

Broadly speaking, the use cases were split into two categories: “creating media” and, more worryingly given the issue of AI hallucinations, “getting information”. In the former, the most popular answer was simply “playing around or experimenting” (11%), followed by “writing an email or letter” (9%) and “making an image” (9%).

The top two answers in the 'getting information' category were “answering factual questions” (11%) and “asking advice” (10%), both of which were hopefully followed by some corroboration from other sources. Most AI chatbots still come with prominent warnings about their propensity for making mistakes – for example, Google says Gemini “could provide inaccurate information or could even make offensive statements”.

AI tools are arguably better for brainstorming and summarizing, and these were the next most popular use cases in the survey – with “generating ideas” mentioned by 9% of respondents and “summarizing text” cited by 8% of people.

But while the average person is still seemingly at the dabbling stage with generative AI tools, most people in the survey are convinced that the tools will ultimately have a big impact on our daily lives. When asked if they thought that “generative AI will have a large impact on ordinary people in the next five years”, 60% of 18-24 year olds thought it would, with that figure only dropping to 41% among those who were 55 and older.

Why are AI tools still so niche?

ChatGPT was easily the most well-known AI tool in the survey, but regular users were still in the minority. (Image credit: Reuters Institute and Oxford University)

All surveys have their limitations, and this one focuses mostly on standalone generative AI tools rather than examples of the technology that's baked into existing products – which means that AI is likely more widely used than the study suggests.

Still, its broad sample size and geographic breadth does give us an interesting snapshot of how the average person views and uses the likes of ChatGPT. The answer is that it remains very niche among consumers, with the report's lead author Dr Richard Fletcher suggesting to the BBC that it shows there's a “mismatch” between the “hype” around AI and the “public interest” in it.

Why might that be the case? The reality is that most AI tools, including ChatGPT, haven't yet convinced us that they're frictionless or reliable enough to become a default part of our tech lives. This is why the focus of OpenAI's new GPT-4o model (branding being another issue) was a new lifelike voice assistant, which was designed to help lure us into using it more regularly.

Yet while even tech enthusiasts still have reservations about AI tools, this appears to be largely irrelevant to the tech giants. We're now seeing generative AI baked into consumer products on a daily basis, from Google Search's new AI summaries to Microsoft's Copilot coming to our messaging apps to iOS 18's rumored AI features for iPhones.

So while this survey's respondents were “generally optimistic about the use of generative AI in science and healthcare, but more wary about it being used in news and journalism, and worried about the effect it might have on job security”, according to Dr Fletcher, it seems that AI tech is going to become a daily part of our lives regardless – just not quite yet.


OpenAI’s ChatGPT might soon be thanking you for gold and asking if the narwhal bacons at midnight like a cringey Redditor after the two companies reach a deal

If you've ever left a post or comment on Reddit, there's a chance it will be used as material for training OpenAI's AI models, after the two companies confirmed that they've reached a deal enabling this exchange.

Reddit will be given access to OpenAI's technology to build AI features, and for that (as well as an undisclosed monetary amount), it's giving OpenAI access to Reddit posts in real-time that can be used by tools like ChatGPT to formulate more human-like responses. 

OpenAI will be able to access real-time information from Reddit's data API – the interface that enables the retrieval of, and interaction with, content on Reddit's platform – providing OpenAI with structured and unique content from the site. This is similar to an agreement Reddit reached with Google at the beginning of the year, allowing Google to train its own AI models on Reddit's data, reported to be worth $60 million.
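
The technical terms of the deal aren't public, but Reddit's data API is the same kind of interface developers already use to pull structured content from the site. As a rough illustration of the real-time, structured data it exposes, here's a small Python snippet against Reddit's public JSON listing (the subreddit and fields are just examples; a partner-level integration like OpenAI's would use authenticated, higher-volume endpoints):

```python
# Illustrative only: pull the newest posts from a subreddit via Reddit's
# public JSON listing. The structured content a partner deal taps into
# looks much like what this endpoint returns.
import requests

url = "https://www.reddit.com/r/technology/new.json"
headers = {"User-Agent": "example-data-api-demo/0.1"}  # Reddit expects a user agent

listing = requests.get(url, headers=headers, params={"limit": 5}, timeout=10).json()
for child in listing["data"]["children"]:
    post = child["data"]
    print(post["created_utc"], post["subreddit"], "-", post["title"])
```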

According to the official Reddit blog post publicizing the deal, the deal will help people discover and engage with Reddit's communities thanks to the Reddit content brought to ChatGPT and other new OpenAI products. Through Reddit's APIs, OpenAI's tools will be able to understand and showcase Reddit's content better, particularly when it comes to recent topics. 

Man sitting at a table working on a laptop (Image credit: Shutterstock/GaudiLab)

Reddit, the company, and Reddit, the community of users

Users and moderators on Reddit will apparently be offered new features thanks to applications powered by OpenAI's large language models (LLMs). OpenAI will also start advertising on Reddit as an ad partner. 

The blog post put out by Reddit also claims that the deal is in the spirit of keeping the internet open, as well as fostering learning and research to keep it that way. It also says Reddit wants to continue building up its community, recognizing its uniqueness and how the site serves as a place for conversation online. Reddit claims that this deal was signed to improve everyone's Reddit experience using AI.

It remains to be seen whether users are convinced of these benefits, but previous changes of this type and scale haven't gone down particularly well. In June 2023, over 7,000 subreddit communities went dark to protest changes to Reddit's API pricing for developers.

It also hasn't explicitly been stated by either company that Reddit data will be used to train OpenAI's models, but I think many people assume this will be the case – or that it’s already happening. In contrast, it was disclosed that Reddit would give Google “more efficient ways to train models,” and then there's the fact that OpenAI founder Sam Altman is himself a Reddit shareholder. This doesn't confirm anything specific and, as reported by The Verge, “This partnership was led by OpenAI’s COO and approved by its independent Board of Directors.”

OpenAI CEO Sam Altman speaking during Microsoft's February 7, 2023 event (Image credit: JASON REDMOND/AFP via Getty Images)

Official statements expressing the benefits of the partnership

Speaking about the partnership and as quoted in the blog post, representatives from both companies said: 

“Reddit has become one of the internet’s largest open archives of authentic, relevant, and always up to date human conversations about anything and everything. Including it in ChatGPT upholds our belief in a connected internet, helps people find more of what they’re looking for, and helps new audiences find community on Reddit.”

– Steve Huffman, Reddit Co-Founder and CEO

“We are thrilled to partner with Reddit to enhance ChatGPT with uniquely timely and relevant information, and to explore the possibilities to enrich the Reddit experience with AI-powered features.”

– Brad Lightcap, OpenAI COO

They're not wrong, and many people make search queries appended with the word “Reddit” as Reddit threads will often provide information directly relevant to what you're searching for. 

It's an interesting development, and OpenAI's sourcing of information – both in terms of accuracy and concerning training data – has been the main topic of discussion around the ethics of its practices for some time. I suppose at least this way, Reddit users are being made aware that their information can be used by OpenAI – even if they don’t really have a choice in the matter. 

The announcement blog post reassures users that Reddit believes that “privacy is a right,” and that it has published a Public Content Policy that gives more detail about Reddit's approach to accessing public content and user protections. We'll have to see if this will be upheld as time goes on, and what the partnership looks like in practice, but I hope both companies will take users' concerns seriously. 


Ads in Windows 11 are becoming the new normal and look like they’re headed for your Settings home page

Microsoft looks like it’s forging ahead with its mission to put more ads in parts of the Windows 11 interface, with the latest move being an advert introduced to the Settings home page.

Windows Latest noticed the ad – which is for Xbox Game Pass – in the latest preview release of the OS in the Dev channel (build 26120). For the uninitiated, Game Pass is Microsoft’s subscription service that grants you access to a host of games for a monthly or yearly fee.

Not every tester will see this advert, though, at least for now, as it’s only rolling out to those who have chosen the option to ‘Get the latest updates as soon as they're available’ (and that’s true of the other features delivered by this preview build). Also, the ad only appears for those signed into a Microsoft account.

Furthermore, Microsoft explains in a blog post introducing the build that the advert for the Xbox Game Pass will only appear to Windows 11 users who “actively play games” on their PC. The other changes provided by this fresh preview release are useful, too, including fixes for multiple known issues, some of which are related to performance hiccups with the Settings app. 

A close up of a keyboard and a woman gaming at a PC in neon lighting (Image credit: Shutterstock/Standret)

Pushing too far is a definite risk for Microsoft

While I can see that this fresh advertising push won’t play well with Windows 11 users, Windows Latest did try the new update and reports that it’s a significant improvement on the previous version of 24H2. So that’s good news at least, and the tech site further observes that there’s a fix for an installation failure bug in here (stop code error ‘0x8007371B’, apparently).

Windows 11 24H2 is yet to roll out officially for all users, but it’s expected to be the pre-installed operating system on the new Snapdragon X Elite PCs that are scheduled to be shipped in June 2024. A rollout to all users on existing Windows 11 devices will happen several months later, perhaps in September or October. 

I’m not the biggest fan of Microsoft’s strategy regarding promoting its own services – and indeed outright ads as is the case here – or the firm’s efforts to push people to upgrade from Windows 10 to Windows 11. Unfortunately, come next year, Windows 10 users will be facing a choice of migrating to Windows 11, or losing out on security updates when support expires for the older OS (in October 2025). That is, if they can upgrade at all – Windows 11’s hardware requirements make this a difficult task for some older PCs.

I hope, both for my own sake and for that of all Windows 11 users, that Microsoft considers showing that it values us all by not letting more and more adverts creep into different parts of the operating system.


Signed, sealed, delivered, summarized: new Gemini-powered AI feature for Gmail looks like it’s close to launch

A summarize feature powered by Gemini, Google’s recently debuted generative AI model and digital assistant, is coming to the Gmail app for Android – and it could make reading and understanding emails much faster and easier. The feature is expected to roll out soon to all users, who’ll be able to provide feedback by rating the quality of the generated email summaries.

The feature has been suspected to be in the works for some time now, as documented by Android Authority, and now it’s apparently close to launch.

One of Android Authority’s code sleuth sources managed to get the feature working on the Gmail app version 2024.04.21.626860299 after a fair amount of tinkering. It’s not disclosed what steps they took, so if you want to replicate this, you’ll have to do some experimenting, but the fact that the summarize feature can be up and running shows that Android Gmail users may not have to wait very long.

A screenshot featured in Android Authority’s report shows a chat window where the user asks Gemini to summarize an email they currently have open, and Gemini obliges. Apparently, this feature will be available via a ‘Summarize this email’ button under an email’s subject line – which I assume triggers the above prompt – and it should return a compact summary of the email. This could prove especially helpful when dealing with a large number of emails, or with particularly long emails packed with details.
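
Google hasn’t said how the button works under the hood, but a plausible reading is that it simply wraps the open email’s text in a summarization prompt and sends it to Gemini. Here’s a rough sketch of that idea using Google’s generative AI Python SDK – the model name, prompt wording, and email text are all assumptions, not Gmail’s actual implementation:

```python
# A rough sketch of what a 'Summarize this email' action might boil down to:
# wrap the open email's text in a summarization prompt and send it to Gemini.
# Model name and prompt wording are assumptions, not Gmail's real code.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-pro")

email_body = (
    "Hi team, quick update: the vendor moved our onboarding call to Thursday "
    "at 3pm, the security review is done, and we still need a volunteer to "
    "draft the rollout notes before Friday."
)

prompt = f"Summarize this email in two or three sentences:\n\n{email_body}"
response = model.generate_content(prompt)
print(response.text)
```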

Once the summary is provided, users will be shown thumbs up and thumbs down buttons under Gemini’s output, similar to OpenAI’s ChatGPT after it gives its reply to a user’s query. This will give Google a better understanding of how helpful the feature is to users and how it could be improved. There will also be a button that allows you to copy the email summary to your clipboard, according to the screenshot. 

A man working at an office and looking at his screen while using Gmail (Image credit: Shutterstock/fizkes)

When to expect the new feature

The speculation is that the feature could be rolled out during Google’s I/O 2024 event, its annual developer conference, which is scheduled for May 14, 2024. Google is also expected to show off the next iteration of its Pixel A series, the Pixel 8a, along with its work on augmented reality (AR) technology and new software and service developments, especially for its devices and ChromeOS (the operating system that powers the best Chromebooks).

Many Gmail users could find that the new summarize feature saves them time and streamlines their email, but as with any generative AI, there are concerns about the accuracy of the generated text. If Gemini omits or misinterprets important information, it could lead to oversights or misunderstandings. I’m glad that Google has the feedback system in place, as it will show whether the feature is actually serving its purpose well. We’ll have to wait and see – the proof of the pudding will be whether it delivers improved productivity and reasonable accuracy when it’s finally released.


OpenAI’s Sora just made its first music video and it’s like a psychedelic trip

OpenAI recently published a music video for the song Worldweight by August Kamp, made entirely with its text-to-video engine, Sora. You can check out the whole thing on the company’s official YouTube channel, and it’s pretty trippy, to say the least. The video consists of a series of short clips in a wide 8:3 aspect ratio featuring fuzzy shots of various environments.

You see a cloudy day at the beach, a shrine in the middle of a forest, and what looks like pieces of alien technology. The ambient track coupled with the footage results in a uniquely ethereal experience. It’s half pleasant and half unsettling. 

It’s unknown what text prompts were used on Sora; Kamp didn’t share that information. But she did explain the inspiration behind the clips in the video’s description. She says that when she created the track, she imagined what a video representing Worldweight would look like, but lacked a way to share her vision. Thanks to Sora, that’s no longer an issue, as the footage displays what she had always envisioned – it’s “how the song has always ‘looked’” from her perspective.

Embracing Sora

If you pay attention throughout the entire runtime, you’ll notice hallucinations. Leaves turn into fish, bushes materialize out of nowhere, and flowers have cameras instead of petals. But because of the music’s ethereal nature, it all fits together. Nothing feels out of place or nightmare-inducing. If anything, the video embraces the nightmares.

We should mention that August Kamp isn’t the only person harnessing Sora for content creation. Media production company Shy Kids recently published a short film on YouTube called “Air Head”, which was also made with the AI engine. It plays like a movie trailer about a man who has a balloon for a head.

Analysis: Lofty goals

It's hard to say if Sora will see widespread adoption judging by this content. Granted, things are in the early stages, but ready or not, that hasn't stopped OpenAI from pitching its tech to major Hollywood studios. Studio executives are apparently excited at the prospects of AI saving time and money on production. 

August Kamp herself is a proponent of the technology stating, “Being able to build and iterate on cinematic visuals intuitively has opened up categorically new lanes of artistry for me”. She looks forward to seeing “what other forms of storytelling” will appear as artificial intelligence continues to grow.

In our opinion, tools such as Sora will most likely enjoy niche adoption among independent creators. Both Kamp and Shy Kids appear to understand what the generative AI can and cannot do. They embrace the weirdness, using it to great effect in their storytelling. Sora may be great at bringing strange visuals to life, but whether it can make “normal-looking content” remains to be seen.

People still talk about how weird or nightmare-inducing content made by generative AI is. Unless OpenAI can surmount this hurdle, Sora may not amount to much beyond niche usage.

It’s still unknown when Sora will be made publicly available. OpenAI is holding off on a launch, citing potential interference in global elections as one of its reasons, although there are plans to release the AI by the end of 2024.

If you're looking for other platforms, check out TechRadar's list of the best AI video makers for 2024.


Vision Pro spatial Personas are like Apple’s version of the metaverse without the Meta

While the initial hype over Apple Vision Pro may have died down, Apple is still busy developing and rolling out fresh updates, including a new one that lets multiple Personas work and play together.

Apple briefly demonstrated this capability when it introduced the Vision Pro and gave me my first test-drive last year, but now spatial Personas are live on Vision Pro mixed-reality headsets.

To understand “spatial Personas”, you need to start with the Personas part. You capture these somewhat uncanny-valley 3D representations of yourself using the Vision Pro's spatial (or 3D) cameras. The headset uses that data to skin a 3D avatar of you that can mimic your face, head, upper torso, and hand movements, and can be used in FaceTime and other video calls (if supported).

Spatial Personas does two key things: it gives you the ability to put two (or more) avatars in one space, and it lets them interact with different screens – or the same one – in a spatially aware way. This all still happens within the confines of a FaceTime call, where Vision Pro users will see a new “spatial Persona” button.

To enable this feature, you'll need the visionOS 1.1 update, and you may need to reboot the mixed-reality headset. After that, at any time during a FaceTime Persona call, you can tap on the spatial icon to enable the feature.

Almost together

Apple Vision Pro spatial Personas (Image credit: Apple)

Combined with Apple's SharePlay, spatial Personas support collaborative work and communal viewing experiences.

This will let you “sit side-by-side” (Personas don't have butts, legs, or feet, so “sitting” is an assumed experience) to watch the same movie or TV show. In an Environment (you spin the Vision Pro's Digital Crown until your real world disappears in favor of a selected environment like Yosemite), you can also play multiplayer games. Most Vision Pro owners might choose “Game Room”, which positions the spatial avatars around a game table. A spatial Persona call can become a real group activity, with up to five spatial Personas participating at once.

Vision Pro also supports spatial audio, which means the audio for the Persona on the right will sound like it's coming from the right. Working in this fashion could end up feeling like everyone is in the room with you, even though they're obviously not.

Currently, any app that supports SharePlay can work with spatial Personas, but not every app will allow for single-screen collaboration. If you use window share or share the app, other Personas will be able to see, but not interact with, your app window.

Being there

Freeform lets multiple Vision Pro spatial Personas work on the same app. (Image credit: Apple)

While your spatial Personas will appear in other people's spaces during the FaceTime call, you'll remain in control of your viewing experience and can still move your windows and Persona to suit your needs, while not messing up what people see in the shared experience.

A video Apple shared shows two spatial Personas positioned on either side of a Freeform app window, which is, in and of itself, somewhat remarkable. But things take a surprising turn when each of them reaches out with their Persona hands to control the app with gestures. That feels like a game-changer to me.

In some ways, this seems like a much more limited form of Meta CEO Mark Zuckerberg's metaverse ideal, where we live, work, and play together in virtual reality. In this case, we collaborate and play in mixed reality while using still-somewhat-uncanny-valley avatars. To be fair, Apple has already vastly improved the look of these things. They're still a bit jarring, but less so than when I first set mine up in February.

I haven't had a chance to try the new feature, but seeing those two floating Personas reaching out and controlling an app floating in a single Vision Pro space is impressive. It's also a reminder that it's still early days for the Vision Pro and Apple's vision of our spatial computing future. When it comes to utility, the pricey hardware clearly has quite a bit of road ahead of it.


Google Gemini AI looks like it’s coming to Android tablets and could coexist with Google Assistant (for now)

Google’s new generative AI model, Gemini, is coming to Android tablets. Gemini AI has been observed running on a Google Pixel Tablet, confirming that Gemini can exist on a device alongside Google Assistant… for the time being, at least. Currently, Google Gemini is available to run on Android phones, and it’s expected that it will eventually replace Google Assistant, Google’s current virtual assistant that’s used for voice commands.

When Gemini is installed on Android phones, users are prompted to choose between using Gemini and Google Assistant. It’s unknown whether this restriction will apply to tablets when Gemini finally arrives for them – though at the moment, it appears not.

Man sitting at a table working on a laptop (Image credit: Shutterstock/GaudiLab)

A discovery in Google Search's code

The news was brought to us via 9to5Google, which did an in-depth report on the latest beta version (15.12) of the Google Search app in the Google Play Store, and discovered that it contains code referring to Gemini AI running on a “tablet”, along with strings describing the features it would offer.

The code also shows that the Google app will host Gemini AI on tablets, instead of the standalone app that currently exists for Android phones. That said, Google might be planning a separate Gemini app for tablets and possibly other devices down the line, especially if its plans to phase out Google Assistant are still in place.

9to5Google also warns that, as this is still a beta version of the Google Search app, Google could yet change its mind and not roll out these features.

A woman using an Android phone. (Image credit: Shutterstock/brizmaker)

Where does Google Assistant stand?

When 9to5Google activated Gemini on a Pixel Tablet, it found that Google Assistant and Gemini would function simultaneously. Gemini for Android tablets is yet to be finalized, so Google might implement a restriction similar to the one on phones, preventing Gemini and Google Assistant from running at the same time. When both were installed and activated, and the voice command “Hey Google” was used, Google Assistant was brought up instead of Gemini.

This in turn contradicted screenshots of the setup screen showing that Gemini will take precedence over Google Assistant if users choose to use it.

The two digital assistants don’t have the same features yet, and we know that the Pixel Tablet was designed to act as a smart display that uses Google Assistant when docked. Because Google Assistant steps in when someone asks Gemini to do something it’s unable to do, we may see the two assistants running in parallel for the time being, until Gemini has all of Google Assistant's capabilities, such as smart home features.

Meanwhile, Android Authority reports that the Gemini experience on the Pixel Tablet is akin to that on the Pixel Fold, and predicts that Google’s tablets will be the first Android tablets to gain Gemini capabilities. This makes sense, as Google may want to use Gemini exclusivity to encourage more people to buy Pixel tablets in the future. The Android tablet market is a highly competitive one, and advanced AI capabilities may help Pixel tablets stand out.
