Apple confirms the Vision Pro will get an international launch this year

The Apple Vision Pro VR headset has wowed many of those who have given it a whirl, but one major issue keeping the device out of people’s hands is availability – if you live outside the US and want to buy one, you’re out of luck. That’s set to change, though, with Tim Cook confirming that the headset will launch in at least one other country this year: China.

Apple has previously hinted at a wider Apple Vision Pro release coming in 2024, and while speaking at the China Development Forum in Beijing over the weekend, Tim Cook specifically mentioned China as one of the lucky countries that's set to get the headset before the end of the year.

Beyond confirming that a China release is on the cards, Cook didn’t reveal a specific release date or pricing. However, a Vision Pro release outside the US before June 2024 was hinted at by analysts earlier this year, and it's unlikely that China will be the only country to get the headset.

Is a worldwide Vision Pro release coming soon?

China, Japan, Canada, Australia, and the UK, among a few others, are regions where Apple is looking to hire a 'Briefing Experience Specialist' for the Vision Pro. When the job postings were spotted it was believed they hinted at where the wider Vision Pro release would begin, and Cook’s China confirmation suggests that this may indeed be the case.

As such, we wouldn’t be surprised if the UK and Australia (and the rest) saw an Apple Vision Pro release around the same time as China. As mentioned above, Apple analyst Ming-Chi Kuo has previously suggested Apple is likely to launch the Vision Pro outside of the US before WWDC – which is due in June – so we might only be a month or two away from seeing the Vision Pro in more people’s hands. 

If you're interested in picking up the Apple headset, we'd suggest first reading our Apple Vision Pro review to help you decide if the $3,500 device (around £2,800 / AU$5,350) is worth it for you.


Google Drive is finally getting a dark mode – and this makes me happy

It looks like Google Drive could finally get a dark mode option for its web version, meaning perusing documents could become a lot easier on the eye for people who like their web pages muted rather than a searing white.

This information comes courtesy of 9to5Google, which reports that one of its Google accounts received an update that prompts users to try out a “New Dark mode” so that they can “enjoy Drive in the dark”. The option to trigger this dark mode is reportedly under the ‘Appearance’ option in the Settings menu of Drive, but I’ve not seen this in either my personal Drive or my workspace Drive. 

However, from the images 9to5Google provided, it looks like the dark mode in Drive is rolling out bit by bit, and will be a fairly straightforward integration of the mode that one can find in Android, Chrome and other Google apps. No icons are changed in terms of design or color; rather, the background switches from white to black, with text flipping to white – all fairly standard.

There’s some difference in shading between the inner portion of Drive, where one will find documents and files, and the sidebar and search bar; the former is black, while the latter are slightly grey in tone.

Is this a huge deal? Not really, but for people who work late into the evening, the ability to switch from light mode to dark can be a blessing on tired eyes. And having a dark mode can offer a more pleasant experience for some people in general, regardless of the time of the day. 

I’m definitely up for more dark mode options in Google services and beyond. Where once I thought dark mode was overhyped, I started using it on some of the best Android phones and my iPhone 15 Pro Max and haven't really looked back – it makes scrolling through various apps in bed more comfortable, though common sense would say you’re better off putting your phone down when in bed and picking up a book instead.

My hope is that by bringing dark mode to Drive, Google will better integrate dark options into more of its apps and services, especially in Gmail, which has a dark mode but won’t apply it to actual emails in the web version, which is jarring. So fingers crossed for a more ubiquitous dark mode from Google.


Here’s more proof Apple is going big with AI this year

The fact that Apple is going to debut a new generative artificial intelligence (AI) tool in iOS 18 this year is probably one of the worst-kept secrets in tech at the moment. Now, another morsel has leaked out surrounding Apple’s future AI plans, and it could shed light on what sort of AI features Apple fans might soon get to experience.

As first reported by Bloomberg, earlier this year Apple bought Canadian startup DarwinAI, with dozens of the company’s workers joining Apple once the deal was completed. It’s thought that Apple made this move in an attempt to bolster its AI capabilities in the final months before iOS 18 is revealed, which is expected to happen at the company’s Worldwide Developers Conference (WWDC) in June.

Bloomberg’s report says that DarwinAI “has developed AI technology for visually inspecting components during the manufacturing process.” One of its “core technologies,” however, is making AI faster and more efficient, and that could be the reason Apple chose to open its wallet. Apple intends its AI to run entirely on-device, presumably to protect your privacy by not sharing AI inputs with the cloud, and this would benefit from DarwinAI’s tech. After all, Apple won’t want its flagship AI features to result in sluggish iPhone performance.

Apple’s AI plans


This is apparently just the latest move Apple has made in the AI arena. Thanks to a series of leaks and statements from Apple CEO Tim Cook, the company is known to be making serious efforts to challenge AI market leaders like OpenAI and Microsoft.

For instance, it’s been widely reported that Apple will soon unveil its own generative AI tool, which has been dubbed Ajax and AppleGPT during its development process. This could give a major boost to Apple’s Siri assistant, which has long lagged behind competitors such as Amazon Alexa and Google Assistant. As well as that, we could see generative AI tools debut in apps like Pages and Apple Music, rivaling products like Microsoft’s Copilot and Spotify’s AI DJ.

Tim Cook has dropped several hints regarding Apple’s plans, saying customers can expect to see a host of AI features “later this year.” The Apple chief has called AI a “huge opportunity” for his company and has said that Apple intends to “break new ground” in this area. When it comes to specifics, though, Cook has been far less forthcoming, presumably preferring to reveal all at WWDC.

It’s unknown whether Apple will have time to properly integrate DarwinAI’s tools into iOS 18 before it is announced to the world, but it seems certain it will make use of them over the coming months and years. It could be just one more piece of the AI puzzle that Apple is attempting to solve.


This N64-powered VR setup is the complete opposite of an Apple Vision Pro

Most tech creators are on a never-ending upgrade quest – always after higher frame rates, pixel counts and processing speeds, in a lighter, sleeker form factor – but a select few instead choose to look backwards. And that’s how we got a Nintendo 64-powered Oculus Rift setup that's pretty much the antithesis of the Apple Vision Pro.

It's the latest project from James Lambert – previously known for the Portal 64 demake that aimed to bring the classic Valve game to Nintendo’s 1996 console – and in a video shared to his YouTube channel he detailed exactly how he managed to create this unlikely VR pairing.

The first step was picking the right VR headset. Lambert opted for the Oculus Rift DK1 because its tracking is “relatively simple” – there are only a few sensors in the headset, and they can all pass data over USB, which works well with Lambert's custom-built N64 USB adapter.

He was then able to easily output video from his modded N64 to the headset via an HDMI cable running between their HDMI ports. And while the DK1 has a very low 640 x 800 pixel resolution per eye, Lambert jokes that it’s “not the bottleneck here” while gesturing with the N64.

Another unavoidable issue is that while the Oculus Rift can output sensor data at a rate of about 1,000 samples per second, the N64 setup Lambert used can only read it at about 60 samples per second. But after calibrating everything as best as he could, Lambert was able to send semi-accurate tracking data from the headset to the aging console.
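
To put rough numbers on that mismatch, here’s a minimal Python sketch – our own illustration with made-up names, not Lambert’s actual code – of how a roughly 1,000-samples-per-second orientation stream might be averaged down to the ~60 samples per second the console side can ingest:

```python
# Hypothetical illustration of the sampling-rate mismatch described above:
# the Rift DK1 reports ~1,000 orientation samples per second, but the N64
# setup can only read ~60 per second, so blocks of headset samples are
# collapsed into single readings before being sent to the console.

HEADSET_RATE_HZ = 1000   # approximate DK1 sensor report rate
CONSOLE_RATE_HZ = 60     # approximate rate the N64 adapter can read

def downsample(samples, in_rate=HEADSET_RATE_HZ, out_rate=CONSOLE_RATE_HZ):
    """Average each block of high-rate samples into one low-rate sample."""
    block = max(1, in_rate // out_rate)   # ~16 headset samples per N64 read
    out = []
    for i in range(0, len(samples) - block + 1, block):
        chunk = samples[i:i + block]
        # Average yaw/pitch/roll over the block to smooth sensor noise.
        out.append(tuple(sum(axis) / block for axis in zip(*chunk)))
    return out

# One second of fake (yaw, pitch, roll) readings at headset rate...
fake_samples = [(i * 0.01, 0.0, 0.0) for i in range(HEADSET_RATE_HZ)]
# ...collapses to roughly 60 readings the console-side code could consume.
print(len(downsample(fake_samples)))  # -> 62, close to CONSOLE_RATE_HZ
```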

Lambert goes into much more detail about how he got the setup to work despite these technical limitations in his 10-minute video, and it’s well worth a watch – but the TL;DR is that while the Rift can indeed run on an N64, the end result isn’t a VR experience you’d actually want to try for very long.

A technical feat we don't want to experience

The phenomenally low resolution – just 320 x 480 pixels per eye – gives the world a very pixelated look. This, coupled with the input lag when turning your head and the low frame rate, makes the whole thing “pretty motion-sickness inducing”, according to Lambert.

Another disappointment was that he was unable to complete the Nintendo experience by getting the Power Glove to function with the Rift-N64 setup – despite having managed to get the wearable NES controller working on the N64 in a previous video. Lambert does seem keen to keep tinkering with his N64 VR setup, though, so perhaps that's an upgrade we’ll see in a version 2.0.

As with other modded hardware projects, you unfortunately can’t go out and buy an N64 VR setup for yourself – unless you’re willing to put in a lot of time recreating Lambert’s mods. But even projects as eccentric as this are a technical marvel, and we’re excited to see what odd hardware will power a VR setup next – following in the footsteps of Doom, maybe we’ll see a lawnmower power a Valve Index, or another console will enter the mix with a PSP running a PlayStation VR headset.


This upcoming feature on Google Keep may finally sway me away from Apple Notes for good

Google Keep is a popular task management and note-taking tool integrated with Google Workspace, so you can create and tick off to-do lists as you work on your computer or phone. The mobile version of Google Keep could be about to get a new feature that may tempt people away from their other note-taking apps – lock screen access to your notes.

According to 9to5Google, the team behind Google Keep has been pushing for it to become the default note-taking app on Android devices, in the same way that Apple Notes is the default note-taking app on every iPhone, iPad, and Mac. If Google Keep does become the de facto note-taking app of choice on Android devices, this opens the door to features that are integrated more intimately into your phone.

Alongside lock screen access to recent notes, we could also see improved stylus support so you can jot down your thoughts quickly and do fun doodles with a bit more control of your strokes. In version 5.24 of the app, there’s a new section of the settings menu that lists the lock screen access as ‘coming soon’, which gives me hope that we’ll see the feature sooner rather than later. 

I have no memory, I need lock screen access, please

As an extremely forgetful person who needs to make lists for everything, I am so excited about the possibility of being able to look at my lock screen and see all my important to-dos at a glance, especially if the feature becomes available to non-Android users too. 

You can have shopping lists, reminders, positive affirmations, and reflections all on your lock screen and tick them off as you go through them without even needing to unlock your phone. I currently use Google Keep on my work computer exclusively to tick things off as I go through the day. If I can have my professional to-do list not just on a mobile app but very visible on my lock screen, I can keep tabs on what needs to be done while on my commute to work, and jot down tasks to carry over to the next day on the way back home. 

Apple Notes has been my default note-taking app mostly because I’m an iPhone user, and while it has had a few improvements here and there (like adding grids, text formatting options, and the ability to drop photos into the app) it’s ultimately nothing special in the world of note-taking apps. If Google Keep can implement lock screen access outside of just Android phones, you’d better believe I’m shifting all my shopping list reminders over immediately and saying goodbye to Apple Notes for good.


Microsoft finally teaches Copilot AI some new tricks – but is this enough to stop Windows 11 users getting impatient?

Windows 11 just received improvements in testing that make its Copilot AI more useful for making changes in the actual OS environment – in other words, the features that we’re all waiting for.

Copilot has a pretty limited repertoire when it comes to manipulating Windows settings (as opposed to its standard tricks of replying to queries, creating images and so forth).

However, the bag of settings tricks just got considerably heavier, with a raft of additions having just been made to preview build 26058 of Windows 11 (in the Canary and Dev testing channels).

That build was actually released a week ago, but Microsoft just ushered in these extra improvements, as Neowin noticed.

So, what can Copilot do for you now? There are a number of important accessibility changes – for example, the AI can be instructed to turn on Narrator or Live Captions, or voice functionality (Voice Access or voice typing).

And you can get Copilot to take out the trash (empty the Recycle Bin), turn on battery saver mode, or even tell you the IP address of your device.

Here’s the full list of the new capabilities of Copilot when it comes to engaging with Windows settings:

  • Ask for available wireless networks
  • Ask for system or device information
  • Ask for battery information
  • Ask to clean storage
  • Ask to empty Recycle Bin
  • Ask to toggle Battery Saver
  • Ask to show startup apps
  • Ask for your IP address
  • Ask for system, device, or storage information

And the new accessibility features are as follows:

  • Ask to turn on Narrator
  • Ask to open Voice Access
  • Ask to turn on Magnifier
  • Ask to change text size
  • Ask to start Live Captions
  • Ask to turn on high-contrast
  • Ask to start voice typing

This expands on Copilot’s existing powers to tweak settings, which already include taking a screenshot or switching between the dark and light themes, for example.


Analysis: Expansion pack

There are 16 new abilities introduced in testing here, which should be coming through to the finished version of Windows 11 soon enough. That more than doubles Copilot’s existing abilities – right now there are just 12 ways to operate Windows 11 settings via the AI – so it’s a welcome expansion.

At the same time, progress on this front feels rather sluggish, given that Copilot – and AI more broadly – has been such a major focus for Microsoft ever since Bing Chat burst onto the scene about a year ago.

Windows 11 users were sold Copilot partly on its ability to operate various settings and modes easily and conveniently, rather than having to dig deep into the Settings app (or hunt elsewhere in the interface). And thus far, not many capabilities have been added, really.

We’re hoping Microsoft will get its foot to the floor on this side of the Copilot experience later this year, with the Windows 11 24H2 update, but for now, a doubling of numbers is at least a sign of some decent forward momentum.


Yes, Apple Vision Pro is being returned to stores – but this could actually be a good thing

We’ve officially passed the two-week return window for the Apple Vision Pro, which allowed people who purchased the headset on launch day to hand it back. Social media buzz has suggested that the Vision Pro was being returned in droves. However, inside sources suggest this may not be the case – and offer an interesting insight into who is returning their headset, and why. 

In our Apple Vision Pro review, we touched on the positives and negatives of using the device and rounded up our top three reasons why users may end up returning the headset. As Apple’s first attempt at a mixed-reality headset, the product was always going to be rather polarizing. It lacks the backing of familiarity that other Apple products like a new iPhone or MacBook always have at this point. 

Not to mention the fact that the Apple Vision Pro is expensive. Retailing at $3,499 / £2,788 / AU$6,349, it’s easy to imagine more than a few returns are down to buyer's remorse – I know I would slink back to the Apple Store as soon as I found even the slightest discomfort or annoyance (or looked at my bank account, frankly). Especially if I couldn’t get my prescription sorted out for the headset or just found it really uncomfortable.

In fact, AppleInsider reached out to sources within Apple’s retail chain for more info on the headset returns, and noted that discomfort is probably one of the biggest reasons for returns. “Most of our returns, by far, are within a day or two. They're the folks that get sick using it,” one source told AppleInsider’s Mike Wuerthele. “The pukers, the folks that get denied by prescription-filling, that kind of thing. They know real quick.”

Influencer investments – gotta get that content!

The second group of people that seem to be making up most of the returns are influencers and YouTubers. Again, the Vision Pro is a product many people want to get their hands on, so it would make sense that online tech ‘gurus’ would want to jump on the trend at launch. 

With the two-week return window offered by Apple, that’s more than enough time to milk the headset for as much content as possible, then hand it back and get your money back too. If you’re a tech content creator, it’s easier to look at the Vision Pro as a short-term investment rather than a personal splurge.

“It's just the f***ing YouTubers so far,” one retail employee told Wuerthele. 

According to AppleInsider's sources, however, the return process isn’t as simple as just boxing the headset up and dropping it off. Each return is accompanied by a detailed, lengthy survey that will allow users to go in-depth on their reason for return and their experience with the product. This is great news in the long run because it could mean any future iterations of the Apple Vision Pro will be designed and built with this feedback in mind – and the Vision Pro is already arguably a public beta for what will presumably eventually become the ‘Apple Vision’.

Beyond AppleInsider's coverage, prolific Apple leaker and Bloomberg writer Mark Gurman has (unsurprisingly) chipped into the discussion surrounding Vision Pro returns. He reported much the same; some people think it's uncomfortable or induces sickness, while for others it's simply too much money. 

Gurman spoke to a Los Angeles professional who bought and returned the headset, who said 'I loved it. It was bananas,' but then went on to explain that he simply hadn't found himself using it that often, and that the price was just too much: “If the price had been $1,500 to $2,000, I would have kept it just to watch movies, but at essentially four grand, I’ll wait for version two.”

If users are returning it because they’re not using it as much as they thought they would, certain aspects are making them feel nauseous, or the headset is just really uncomfortable on their head, Apple can take this feedback on board and carry it forward. It’s a common criticism of VR headsets in general, to be fair – perhaps some people just aren’t built for using this type of product?


Tired of Windows File Explorer? This app makes it way easier to navigate everything on your PC

If you think that Windows 11’s File Explorer could be better, you’re not alone – and there’s a popular third-party alternative: the Files app. The Files app (which, despite its name, has no relation to Microsoft’s own File Explorer) just got an upgrade that makes it an even better tool for navigating your file system, with the latest version of the app allowing users to navigate big folders more easily.

The Files app’s 3.2 update brings user interface (UI) improvements like a list view layout for files and folders, the ability to edit the album covers of media files via folder properties, and support for higher-quality thumbnails. Alongside the UI changes, users can also expect plenty of fixes and general polish.

According to Windows Central, the Files app’s occasional instability while handling large folders was one of the biggest user complaints, and this update addresses that, too. The app should now behave better when users point it at bigger folders.


How the Files app measures up as a file explorer 

Windows Central does state that it doesn’t think the Files app is quite ready to completely replace the default Windows File Explorer, but that “it can be a powerful and useful companion app.” It offers unique features that File Explorer itself doesn’t, and, to many users, it has a sleeker look. The app is available for both Windows 10 and Windows 11, but its performance can vary from system to system. Windows Central also investigated the Files app’s performance itself, and reports that the app has issues with performance and stability on some PCs. You can check the full changelog of what Files version 3.2 delivers if you’d like to know more.

Many users would like to see Windows’ aging File Explorer adopt many of the Files app’s features, and maybe Microsoft is watching. It recently released its own proprietary PC Cleaner app, a system cleaner tool that offers many of the tools of popular paid third-party system cleaners for free. Microsoft has also been taking heat from industry professionals and competitors, as well as from regulators in the European Union following the recent introduction of the Digital Markets Act (DMA). Offering tools like PC Cleaner and a souped-up File Explorer could be a way for it to win back some user trust and goodwill.

The existence of third-party apps like this is doubly good for users: it can motivate first-party developers to improve their products faster, and it gives people more choice over how they use their devices. The Files app looks like it sees regular updates and improvements, and it sounds like it could be worth users’ while, given that it has no malware issues – provided you get good performance once it’s installed.

If you’d like to try out Files for yourself, bear in mind that it isn’t free: the app comes with a one-time charge of $8.99 / £7.49, although thankfully there aren’t any subscription fees. You can download it directly from the Microsoft Store.


We’ll likely get our first look at Android 15 this week – here’s what to expect

The first preview version of Android 15 may launch on Thursday, February 15 if a recently discovered developer comment is to be believed.

It was originally posted to Google’s Android Open Source Project website on February 13, although the page hosting the message has since been deleted – if you go to the page right now, you’ll be greeted with an error message. Fortunately, 9to5Google has a screenshot of the comment, which states, in no uncertain terms, that the “first Developer preview is scheduled for Feb 15”. The comment even refers to the OS as “Android V”, which the publication explains is a reference to the system’s codename, “Vanilla Ice Cream”.

Early Android builds are typically exclusive to Pixel devices, and 9to5Google believes this will be the case with the preview. Because it’s meant primarily for developers, the build probably won’t see a public release due to software instability. That said, we do expect to see people crack open the preview and spill all of its contents onto the internet, revealing what Android 15 is capable of.

It’s unknown what this early version of the OS will bring; however, we can look at previous reports to give you an idea of what may be arriving.

Features to expect

Back in December 2023, three features were found hidden in the files of a then-recent Android 14 beta that appear to be destined for Android 15.

The first one is called Communal Space, which lets users add widgets to the lock screen. At the time of the initial report, only Google Calendar, Google Clock, and the main Google app could be added, but we believe there's a good chance more will be supported at launch. The second is the introduction of a battery health percentage read-out akin to what the iPhone 15 has. It’ll offer a crystal-clear indication “of how much your phone’s battery has degraded” compared to when it was fresh out of the box.
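
To make that concrete, here’s a minimal Python sketch of the kind of calculation a read-out like this implies – comparing the capacity the battery can hold today against its original design capacity. The figures and names are our own assumptions for illustration, not anything Google has published:

```python
# Hypothetical illustration of a battery health percentage: how much
# capacity remains relative to the battery's original (design) capacity.
# Both figures below are made up for the example.

DESIGN_CAPACITY_MAH = 4575       # capacity when the phone was new (assumed)
MEASURED_FULL_CHARGE_MAH = 4160  # capacity the battery holds today (assumed)

health_pct = 100 * MEASURED_FULL_CHARGE_MAH / DESIGN_CAPACITY_MAH
print(f"Battery health: {health_pct:.0f}%")  # -> Battery health: 91%
```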

(Image: Communal Space on a Pixel tablet. Credit: Mishaal Rahman/Android Authority)

The third feature is called Private Space and, according to Android Police, may be Google’s take on Samsung’s Secure Folder. It hides apps on your smartphone away from prying eyes. This can be especially helpful if you happen to share a device with others. 

Then in January, more news came out claiming Android 15 might have a feature allowing users to effortlessly share wireless audio streams. On the surface, it sounds similar to Bluetooth Auracast, a unique form of Bluetooth LE Audio for transmitting content. We wouldn’t be surprised if it was Bluetooth Auracast considering it has yet to be widely adopted by smartphone manufacturers. 

(Image: Bluetooth Auracast being shared by two children on over-ear wireless headphones. Credit: Bluetooth SIG)

The last update came in early February, revealing that Android 15 may soon require all apps on the Google Play Store to support an edge-to-edge mode, making it a mandatory setting. The presumed goal here is to better enable full-screen viewing. Edge-to-edge is typically only seen in certain types of apps, like video games. Navigation bars and thick black stripes at the top of screens could become a thing of the past as Google establishes a new optimized standard for landscape viewing on Android.

That's currently all we know about Android 15. Hopefully, that one developer's slip-up is just the start of Android 15 reveals. While we have you, check out TechRadar's list of the best Android phones for 2024.


ChatGPT is getting human-like memory, and this might be the first big step toward General AI

ChatGPT is becoming more like your most trusted assistant, remembering not just what you've told it about yourself, your interests, and preferences, but applying those memories in future chats. It's a seemingly small change that may make the generative AI appear more human and, perhaps, pave the way for General AI, which is where an AI brain can operate more like the gray matter in your head.

OpenAI announced the limited test in a blog post on Tuesday, explaining that it's testing the ability of ChatGPT (in both the free version and ChatGPT Plus) to remember what you tell it across all chats. 

With this update, ChatGPT can remember things casually, just picking up interesting bits along the way – like my preference for peanut butter on cinnamon raisin bagels – or remember what you explicitly tell it to.

The benefit of ChatGPT having a memory is that new conversations no longer start from scratch: a fresh prompt carries implied context for the AI. A ChatGPT with memory becomes more like a useful assistant who knows how you take your coffee in the morning, or that you never want to schedule meetings before 10 AM.

In practice, OpenAI says that the memory will be applied to future prompts. If you tell ChatGPT that you have a three-year-old who loves giraffes, subsequent birthday card ideation chats might result in card ideas featuring a giraffe.

ChatGPT won't simply parrot back its recollections of your likes and interests, but will instead use that information to work more efficiently for you.
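
To illustrate the basic idea – and this is purely a conceptual sketch of our own, not OpenAI’s implementation or API – you can think of memory as a small store of facts that lives outside any single chat and gets injected as context when a new conversation starts:

```python
# Conceptual sketch of cross-conversation memory: remembered facts are
# stored outside any single chat and prepended as context when a new
# conversation begins. This illustrates the idea only; it is not how
# OpenAI implements the feature.

class MemoryStore:
    def __init__(self):
        self.facts = []

    def remember(self, fact: str) -> None:
        """Save a detail the user shared (explicitly or in passing)."""
        self.facts.append(fact)

    def forget(self, keyword: str) -> None:
        """Drop any remembered fact mentioning a keyword (e.g. 'giraffes')."""
        self.facts = [f for f in self.facts if keyword.lower() not in f.lower()]

    def as_context(self) -> str:
        """Render memories as implied context for a brand-new conversation."""
        if not self.facts:
            return ""
        return "Known about the user: " + "; ".join(self.facts)


memory = MemoryStore()
memory.remember("has a three-year-old who loves giraffes")

# A fresh prompt no longer starts from scratch: the new conversation is
# seeded with remembered details before the user's actual question.
new_chat = [memory.as_context(), "User: Any birthday card ideas?"]
print("\n".join(line for line in new_chat if line))
```

The real feature presumably handles all of this inside ChatGPT itself, but the flow the announcement describes is the same: remembered details ride along with each fresh prompt instead of having to be re-explained every time.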

It can remember

Some might find an AI that can remember multiple conversations and use that information to help you a bit off-putting. That's probably why OpenAI is letting people easily opt out of the memories by using the “Temporary Chat” mode, which will seem like you're introducing a bit of amnesia to ChatGPT.

Much as you can clear your browsing history from your browser, ChatGPT will let you go into settings to remove memories (I like to think of this as targeted brain surgery), or you can conversationally tell ChatGPT to forget something.

For now, this is a test among some free and ChatGPT Plus users, but OpenAI has offered no timeline for when it will roll out ChatGPT memories to all users. I didn't find the feature live in either my free ChatGPT or Plus subscription.

OpenAI is also adding Memory capabilities to its new app-like GPTs, which means developers can build the capability into bespoke chatty AIs. Those developers will not be able to access memories stored within the GPT.

Too human?

An AI with long-term memory is a dicier proposition than one with, at best, transient recall of previous conversations. There are, naturally, privacy implications. If ChatGPT is randomly memorizing what it considers interesting or relevant bits about you, do you have to worry about your details appearing in someone else's ChatGPT conversations? Probably not. OpenAI promises that memories will be excluded from ChatGPT's training data.

OpenAI adds in its blog, “We're taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details – unless you explicitly ask it to.” That might help, but ChatGPT must understand the difference between useful and sensitive info, a line that might not always be clear.

This update could ultimately have significant implications. In prompt-driven conversations, ChatGPT can already seem somewhat human, but its hallucinations and fuzzy memory of, sometimes, even how the conversation started make it clear that more than a few billion neurons still separate us.

Memories, especially information delivered casually back to you throughout ChatGPT conversations, could change that perception. Our relationships with other people are driven in large part by our shared experiences and memories of them. We use them to craft our interactions and discussions. It's how we connect. Surely, we'll end up feeling more connected to a ChatGPT that can remember our distaste for spicy food and our love of all things Rocky Balboa.
