macOS Sequoia’s wildest update – iPhone Mirroring – might be more useful than you think

When Apple introduced macOS Sequoia and its new iPhone Mirroring capability, I didn't get it. Now, though, after seeing it in action and considering some non-obvious use cases, I may be ready to reconsider.

Apple unveiled the latest AI-infused version of macOS during its WWDC 2024 keynote, which also saw major updates to iOS, iPadOS, visionOS, tvOS, and watchOS. The event also served as the launch platform for Apple Intelligence, an Apple-built and branded version of artificial intelligence. I get that Apple's been building AI PCs for a while (ever since the M1 chip, its silicon has included an on-board Neural Engine), and there are many features to look forward to, including a better Siri, powerful photo editing, and smart writing help – but I found myself fixating elsewhere.

Apple was putting the iPhone on your Mac, or, rather, an iPhone screen floating in the middle of the lovely macOS Sequoia desktop. In a way, this is the most significant redesign of the new platform. It puts an entirely different OS – a mobile one, no less – on top of a laptop or desktop. 

Wow. And also, why?

I admit that I had a hard time conceiving what utility you could gain from having a second, live interface on an already busy desktop. Apple has said in the past that it sometimes builds features based on user requests. Who had ever asked for this?

After the keynote, I had the chance to take a deeper dive, which helped me better understand this seemingly unholy marriage and why, in some cases, it might make perfect sense.

Making it so

WWDC 2024 (Image credit: Future / Lance Ulanoff)

Apple built a new app to connect your iOS 18-running iPhone to your macOS Sequoia Mac. In a demo I saw, it took one click to make it happen. Behind the scenes, the two systems establish a secure connection over Bluetooth and Wi-Fi. On the iPhone, there's a message that mirroring is live. On the Mac, well, there's the iPhone screen, complete with the Dynamic Island cutout (a strange choice if you ask me – why virtualize dead space?).

I was honestly shocked at the level of iPhone functionality Apple could bring to the Mac desktop.

You can use the Mac trackpad to swipe through iPhone apps.

You can click to launch apps and run them inside the iPhone screen on your Mac desktop.

Pinch and zoom on the Mac trackpad works as expected with the iPhone apps.

There's even full drag-and-drop capability between the two interfaces. So you could take a video from the GoPro app on your mirrored iPhone screen and drag and drop it into another app, like Final Cut Pro on the Mac.

Essentially, you are reaching through one big screen to get to another, smaller one – on a different platform – that is sitting, locked, beside your desktop. It's strange and cool, but is it necessary?

WWDC 2024 (Image credit: Future / Lance Ulanoff)

Not everything makes sense. You can search through your mirrored phone screen, but why not just search on your desktop?

You can use the mirrored iPhone screen in landscape mode and play games. However, games that rely on the iPhone's gyroscope are a bad idea here, and there's no obvious way to warn someone before they try one.

I like the attention to detail: the mirrored iPhone can look exactly like the screen on the physical phone, but you can also click to access a slightly larger frame with controls for the mirrored screen.

It's not the kind of mirroring that locks you in, either: to end the connection, you just pick up and unlock the phone.

Even seeing all this, though, I wondered how people might use iPhone Mirroring. There's the opportunity to play some games that aren't available on Mac. Multi-player word game fans might like that if they get a notification, they can open the mirrored phone screen, make a move, and then return to work.

When macOS Sequoia ships later this fall, you'll even be able to resize the mirrored iPhone window, which I guess could be useful for landscape games.

Notifications from your phone sound redundant, especially for those of us in the iCloud ecosystem where all our Apple products get the same iMessages. But the system is smart enough to know it shouldn't repeat notifications on both screens, and you'll have the option to decide which iPhone notifications appear on your Mac.

Some notifications only appear on your iPhone, and others appear in both places, but you can't always act on them on the Mac. This new feature might bridge that gap. A fellow journalist mentioned that iPhone Mirroring would finally give him a way to jump from a notification he saw on his Mac for his baby cam app – which has no Mac app – to the live feed on the iPhone. This finally struck me as truly useful.

Is that enough of a reason to have your iPhone screen pasted on your Mac desktop? I don't know. It might take up too much real estate on my 13-inch MacBook Air, but it would be kind of cool on a 27-inch iMac, if I had one.


Microsoft’s trick for speeding up PC games in Windows 11 works with only 12 games to start with – but far more are actually supported

We now know a lot more about how Microsoft’s Automatic Super Resolution (Auto SR) feature for speeding up gaming frame rates in Windows 11 will work, and what games it will initially support.

VideoCardz noticed a new entry in Microsoft’s support database on the topic of Auto SR, which underlines the requirements, as well as detailing what games will come as fully verified for the tech.

For those who missed it, Auto SR is an upscaling feature, meaning it runs a game at a lower resolution and upscales it to a higher one, so that you get close-to-native image quality with a faster frame rate – using AI to pull off this trickery.
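Microsoft hasn't published how Auto SR's model works, but the basic recipe is easy to illustrate. Here's a minimal Python sketch of the idea – our illustration, not Microsoft's code – where plain Lanczos resampling from the Pillow library stands in for the AI upscaler that Auto SR runs on the NPU:

```python
# A minimal sketch of the upscaling idea behind Auto SR (not Microsoft's
# implementation): render a frame at a low internal resolution, then
# upscale it for display. Pillow's Lanczos filter stands in for the AI model.
from PIL import Image

def upscale_frame(frame: Image.Image, target=(1920, 1080)) -> Image.Image:
    """Upscale a rendered frame to the target display resolution."""
    return frame.resize(target, Image.LANCZOS)

# A game frame rendered internally at 720p...
low_res = Image.new("RGB", (1280, 720), "slateblue")  # placeholder frame
# ...is presented to the display at 1080p, having cost only a 720p render.
high_res = upscale_frame(low_res)
print(high_res.size)  # (1920, 1080)
```

The real feature replaces that simple filter with a neural network, which is why it needs a beefy NPU rather than running on any old chip.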

The notable catches are that you need a Copilot+ PC and indeed a Snapdragon X processor, one of the ARM-based chips that’ll power laptops launching next month. (You’ll also need Windows 11 24H2, which launches with those AI PCs).

As for the games which are verified and tested by Microsoft for Auto SR, the initial collection is as follows:

  • 7 Days to Die
  • BeamNG Drive
  • Borderlands 3
  • Control
  • Dark Souls III
  • God of War
  • Kingdom Come: Deliverance
  • Resident Evil 2
  • Resident Evil 3
  • Sekiro: Shadows Die Twice
  • Sniper Ghost Warrior Contracts 2
  • The Witcher 3: Wild Hunt

Analysis: Useful clarifications – and caveats

It’s interesting to see the fully verified games, and even if it’s only a small selection of a dozen right now, there are some big-name titles. However, the really interesting bit is the clarification that Automatic Super Resolution is a sweeping upscaling feature that can be applied to any game (DX11 or DX12).

We always assumed it would be a system-wide feature – after all, that was the whole point, compared to more targeted upscaling solutions, such as Nvidia DLSS, that require support from the game dev – and indeed this is the case. It's just that Microsoft worried us last week, when it launched the feature, by saying Auto SR applies to a “curated set of games” – but those are simply the verified games guaranteed to work well.

The majority of games should be fine with Auto SR in theory, but some may be wonky, and some may not work at all. To that end, Microsoft is collaborating with the Worksonwoa.com website, which lists games that can use the feature successfully – and also those that can't use Auto SR for whatever reason. (This is the same website that tells you whether your favorite PC game will run on Windows on ARM).

There are some nuances to note here. The first is that verified games are set to work ‘out of the box’ with Auto SR, meaning the feature will be on by default. That could potentially cause some confusion or conflict if a gamer is using another type of upscaling – though Windows does tell you that Auto SR is being enabled when the game is launched.

We guess Microsoft feels that less tech-savvy folks will benefit from having the feature automatically applied where it makes sense, in games that are fully tested to work well with Auto SR.

The Snapdragon X requirement is the other important point to note here, although we assume this will be widened to include future AMD and Intel laptop CPUs – those with a powerful enough NPU to qualify as the engine of a Copilot+ PC (as Auto SR will be for these PCs only).

However, we also noticed that Microsoft says Auto SR is only supported for games running natively on ARM64 or emulated x64 games (the latter using Prism, the translation layer for running Windows games on ARM chips). Presumably that's a reflection that currently (well, as of next month) only the new Snapdragon X can drive a Copilot+ PC, and that when AMD Strix Point or Intel Lunar Lake CPUs arrive for these AI-powered laptops, they'll surely be fine with Auto SR too.


Windows 11’s AI-powered feature to make games run more smoothly is for Copilot+ PCs only, we’re afraid

Windows 11 is getting a trick to help the best PC games run more smoothly, although this previously rumored feature comes with a catch – namely that it will only be available to those who have a Copilot+ PC with a Snapdragon X Elite processor.

The feature in question, which was leaked in preview builds of Windows 11 earlier this year, is called Auto Super Resolution (or Auto SR), and the idea is that it automatically upscales the resolution of a game (or indeed app) in real-time.

An upscaling feature like this effectively means the game – and it seems gaming is very much the focus (we’ll come back to that) – is run at a certain (lower) resolution, with the image upscaled to a higher resolution.

This means that something running at, say, 720p, can be upscaled to 1080p or Full HD resolution, and look nearly as good as native 1080p – but it can be rendered faster (because it’s really still 720p). If this sounds familiar, it’s because there are similar solutions already out there, such as Nvidia DLSS, AMD FSR, and Intel XeSS to name a few.

As outlined by Microsoft in its fresh details about Copilot+ PCs (highlighted by VideoCardz), the catch is that Auto SR is exclusive to these laptops. In fact, you need to be running the Qualcomm Snapdragon X Elite, so the lesser Plus version of this CPU is ruled out (for now anyway).

The other caveat to bear in mind here is that to begin with this is just for a “curated set of games,” so it’ll have a rather limited scope initially.


Analysis: The start of a long upscaling journey

When it was just a leak, there was some debate about whether Auto SR might be a feature for upscaling anything – games or apps – but Microsoft specifically talks about PC games here, and so that’s the intended use in the main. We also expected it to be some kind of all-encompassing tech in terms of game support, and that clearly isn’t the case.

Eventually, though, we’d think Auto SR will have a much broader rollout, and maybe that’ll happen before too long. After all, AI is being pushed heavily as helping gamers too – as a kind of gaming Copilot – so this is another string to that bow, and an important one we can imagine Microsoft working hard on.

Of course, the real fly in the ointment is the requirement for a Snapdragon X Elite chip, which rules out most PCs. This is likely due to the demanding nature of the task, and the feature being built around the presence of a beefy NPU (Neural Processing Unit) to accelerate the AI workloads involved. Only Qualcomm's new Snapdragon X has a peppy enough NPU to deal with this – or so we can assume – but this won't be the case for long.

Newer laptop chips from Intel, such as Lunar Lake (and Arrow Lake), and AMD’s Strix Point are inbound for later this year, and will deliver the goods in terms of the NPU and qualifying as the engine for a Copilot+ PC – and therefore being able to run Auto SR.

Naturally, we still need to see how well Microsoft implements this feature, and how upscaling games leveraging a powerful NPU works out. But as mentioned, the company has so much riding on AI, and the gaming side of the equation appears to be important enough, that we’d expect Microsoft will be trying its best to impress.


5 ways that Android 15 on Pixel is going to be way more customizable for users

The second Android 15 beta came out not too long ago, on May 15. According to the official Android Developers Blog, the patch continues Google’s efforts at creating a platform that improves productivity, maximizes app performance, and protects user privacy. 

However, the post didn’t mention all the different ways Android 15 will upgrade system customization. As people have dug deep into the OS’ files, many of its other features have been unearthed, with several of them providing new ways to customize a smartphone. 

Below is a list highlighting the most notable of these possible tools. Android 15 won’t launch for a while, so there's a chance that some will not be in the final release. It’s hard to say, but given their seemingly advanced states, we believe they will be available at launch or soon after.

1. Slideshow screensavers

Android 15's possible slideshow menu on Pixel (Image credit: 9To5Google)

In June 2023, a mysterious Google app called Dreams was discovered on the Play Store. It was for the Pixel Tablet and allowed the device to play a “collection of screen savers” when docked and not in use. Nothing really came of it, though, as Dreams just disappeared from the store.

It appears, though, that the same feature will be making its way to Pixel phones, as Android 15 Beta 2 refers to “Dreamliner” within its files. When docked on the second-generation Pixel Stand, users can select photo albums on their device to play as a slideshow screensaver. Moreover, the Google Photos UI has been updated to accommodate Dreamliner rather than the Google Assistant.

2. Widget buttons and previews

Android 15's new Widget button on Pixel (Image credit: 9To5Google)

Adding widgets to an Android phone currently requires manually dragging and dropping them onto the Home screen. However, evidence suggests that Google plans to introduce an “Add” button. So, instead of having to drag a widget over, you can just push the button to attach it. Images in 9To5Google’s report show a big blue button right where a widget space is available.

3. Pixel Avatar

Android 15 Pixel Avatar app (Image credit: Android Authority)

Industry insider Mishaal Rahman discovered an unbundled version of Google Pixel Avatar inside the beta files. This is an app that allows users to select an icon to be their profile picture. Rahman states the software has been a part of Android for a while now, but it adds a new feature: “the ability to use your Google Account picture as your [main] profile picture.”

Prior to this update, Google Account and Android profile images existed as separate entities. Now, the barrier is gone, allowing one photo for both platforms. It’s important to mention that this capability actually came out on the first Android 15 beta, but the syncing process wasn’t very reliable. Things should be much better now.

There is no word on whether it'll work with third-party apps, as the current version only connects the Pixel Avatar with SystemUI apps.

4. Cast volume controls

Google Nest (Image credit: Google)

Audio company Sonos sued Google for an “alleged patent infringement” back in 2020, claiming the tech giant “ripped off its patented speaker technology.” Google eventually disabled the ability to use a Pixel phone’s volume buttons to control speaker groups and other “Chrome and Google cast devices.” Sonos seemingly won the lawsuit; however, a California judge overturned the verdict in 2023, paving the way for Google to bring back volume controls – and that’s exactly what we’re seeing.

It’s the return of a feature people initially thought would never return. Android Authority was able to cast songs from YouTube Music to Nest Hub devices using Beta 2 of Android 15. Adjusting the speaker group volume worked without a hitch. So, after years of waiting, users may soon finally create (or recreate) their ideal listening environment. 

5. Vibration strength

Android 15 on Pixel – Adaptive Vibration (Image credit: 9To5Google)

Lastly, Google is adding a new Adaptive Vibration tool to Pixel. According to the text description, the software can automatically adjust the smartphone’s vibration level “based on your environment.” Vibration won’t be as powerful on a table, for example, but if the phone detects it’s on a couch, it will vibrate more strongly. The device will figure out where it is by using the “microphone and other sensors… to determine sound levels and context.” Maybe most importantly, no data will be recorded.
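Google hasn't published how this works under the hood, but the behavior described suggests a simple policy: pick a base strength from the detected surface, then adjust for ambient noise. Here's a purely hypothetical Python sketch of that idea – the surface labels and thresholds are our own invention, not Google's:

```python
# Hypothetical sketch of an adaptive-vibration policy, inferred only from
# the behavior described above; not Google's actual implementation.
def vibration_strength(surface: str, ambient_db: float) -> float:
    """Return a vibration level between 0.0 (off) and 1.0 (maximum)."""
    # Hard surfaces amplify buzzing, soft ones absorb it (assumed values).
    base = {"table": 0.3, "couch": 0.9, "hand": 0.6}.get(surface, 0.6)
    # In a loud environment, boost the strength so the buzz stays noticeable.
    if ambient_db > 70:
        base = min(1.0, base + 0.2)
    return base

print(vibration_strength("table", 45))  # quiet room, hard surface -> 0.3
print(vibration_strength("couch", 75))  # noisy room, soft surface -> 1.0
```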

It's unknown whether these features will roll out to third-party Android phones. Google may possibly be giving Pixel owners the opportunity to try them out first before expanding their availability.

Be sure to check out TechRadar's list of the best Android phones for 2024.


Google’s Live Caption may soon become more emotionally expressive on Android

Google is reportedly working on bringing several customization options to the Live Caption accessibility feature on mobile devices. Evidence of this update was discovered by software deep diver Assemble Debug after digging through the Android System Intelligence app. According to an image given to Android Authority, there will be four options in total. We don't know much, but there is a little bit of explanation to be found.

The first one allows Android phones to display “emoji icons” in a caption transcript, perhaps to better convey what emotions the voices are expressing. The other three aren't as clear. The second feature will “emphasize emotional intensity in [the] transcription,” while the remaining two are said to include “word duration [effects]” and the ability to display “emotional tags.”

Feature breakdown

As you can see, the wording is pretty vague, but there’s enough to paint a picture. It seems Live Caption will become better at replicating emotions in voices it transcribes. Say, for example, you’re watching a movie and someone is angrily screaming. Live Caption could perhaps show text in all caps to signify yelling. 

The feature could also slant words in a line to indicate whenever someone is being sarcastic or trying to imply something. The word duration effect could refer to the software showing drawn-out letters in a set of captions. Maybe someone is singing and begins to hold a note; the sound that's being held could be displayed thanks to this toggle.

Emotional tags are admittedly more difficult to envision. Android Authority mentions the tags will be shown and included in a transcript. This could mean that the tool is going to add clear indicators within transcriptions of what a subject is expressing at the moment. Users might see the word “Angry” pop up whenever a person is feeling angry about something, or “Sad” whenever someone is crying.
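To make that guesswork concrete, here's a toy Python sketch of how such cues might be rendered in text – the tag names and styling rules are purely illustrative, not Google's:

```python
# Speculative sketch of emotion-aware caption rendering, based only on the
# behaviors guessed at above; styling rules are illustrative, not Google's.
def style_caption(text: str, emotion: str = "", yelling: bool = False,
                  held_note: bool = False) -> str:
    """Render a caption with speculative emotion cues."""
    if yelling:
        text = text.upper()                    # signify shouting with all caps
    if held_note:
        text = text.replace("note", "nooote")  # crude drawn-out-letter effect
    if emotion:
        text = f"[{emotion.capitalize()}] {text}"  # visible emotional tag
    return text

print(style_caption("get out of here", emotion="angry", yelling=True))
# [Angry] GET OUT OF HERE
print(style_caption("hold that note", held_note=True))
# hold that nooote
```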

Greater utility

That’s our best guess. If these rumored features do operate as described, they would give Live Caption even greater utility than it already has. The feature was introduced back in 2019 as an accessibility tool to help people enjoy content if they’re hard of hearing or can’t turn on the sound for whatever reason.

The current captions are rather plain, but with this update, emotions could be added to Google’s tool for a more immersive experience.

Android Authority claims the features were found in a “variant of the Android System Intelligence app”. We believe this means they were located inside a special version of the app meant for first-party hardware like the Google Pixel, so the customization tools may be exclusive to the Pixel 8 or a future model. It’s too early to tell at the moment. Hopefully, the upgraded Live Caption sees a much wider release.

Until we learn more, check out TechRadar's list of the best Android phones for 2024.


Windows 11 users, get ready for more AI – a new test build promises a designated section of the Settings menu just for AI updates

Windows 11 Build 26217 is now available to developers and testers in the Canary alpha channel, offering a few small bug fixes alongside a new page in the Settings menu dedicated to “AI component updates”. 

Microsoft has been flooding Windows 10 and Windows 11 users with some pretty cool AI-related updates and features recently, most notably the addition of Copilot to the taskbar for easy access. Spotted by WindowsLatest, the new settings page is just for AI updates, but right now we don’t really know what that could entail. We speculate that users will be able to keep track of updates to features like AI Explorer and possibly Copilot as well – or Microsoft could be setting up a new space for entirely new AI-related features.

Microsoft could also be gearing up for the Build Developer conference later this year, where it seems to be encouraging developers to build their own AI features for Windows apps. This would be fascinating news for AI enthusiasts who are already feeling the positive impacts of having a tool like Copilot ready to use and may want to boost some of the apps or programs they already use with an injection of AI functionality. 

Finally, some good news!

I’m pretty excited to see what kind of nifty features will make a home in the new settings page if we do see it have a public rollout. We have to keep in mind that many features and changes we see in the Windows Canary channel aren’t guaranteed to make a wide release, so while I might be excited now, I can’t get my full hype on until we get more information from Microsoft. 

That being said, it does look like AI is here to stay for Windows users. That could be good or bad news depending on your outlook on large language models, but it feels like Microsoft is all-in when it comes to AI. 

Overall, I am glad for some good news when it comes to Windows updates. With the influx of ads becoming the new normal in Windows 11, there’s been a bitter taste in my mouth any time I hear about a new build or update – so if this new section of the Settings menu does come to our desktops, that’ll at least be something positive (and ad-free). Here at TechRadar, we all feel Microsoft owes us some kind of good news, given how irritating ads have become – even stooping so low as to disguise themselves as recommendations.


OpenAI’s GPT-4o ChatGPT assistant is more life-like than ever, complete with witty quips

So no, OpenAI didn’t roll out a search engine competitor to take on Google at its May 13, 2024 Spring Update event. Instead, OpenAI unveiled GPT-4 Omni (or GPT-4o for short) with human-like conversational capabilities, and it's seriously impressive. 

Beyond making this version of ChatGPT faster and free to more folks, GPT-4o expands how you can interact with it, including having natural conversations via the mobile or desktop app. Considering it's arriving on iPhone, Android, and desktop apps, it might pave the way to be the assistant we've all always wanted (or feared). 

OpenAI's ChatGPT-4o is more emotional and human-like

OpenAI demoing GPT-4o on an iPhone during the Spring Update event. (Image credit: OpenAI)

GPT-4o has taken a significant step towards understanding human communication in that you can converse in something approaching a natural manner. It comes complete with all the messiness of real-world tendencies like interrupting, understanding tone, and even realizing it's made a mistake.

During the first live demo, the presenter asked for feedback on his breathing technique. He breathed heavily into his phone, and ChatGPT responded with the witty quip, “You’re not a vacuum cleaner.” It advised on a slower technique, demonstrating its ability to understand and respond to human nuances.

So yes, ChatGPT has a sense of humor, but it also changes the tone of its responses, complete with different inflections while conveying a “thought”. As in human conversations, you can cut the assistant off and correct it, making it react or stop speaking. You can even ask it to speak in a certain tone, style, or robotic voice. Furthermore, it can provide translations.

In a live demonstration suggested by a user on X (formerly Twitter), two presenters on stage, one speaking English and one speaking Italian, had a conversation with GPT-4o handling translation. It could quickly deliver the translation from Italian to English and then seamlessly translate the English response back to Italian.
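For a sense of what that loop looks like in code, here's a rough Python sketch built on OpenAI's public client and chat API, rather than the live GPT-4o voice stack the demo used – treat it as an approximation of the idea, not how the event demo was wired up:

```python
# Rough sketch of a two-way translation loop using OpenAI's public Python
# client; the live demo used GPT-4o's real-time voice mode, not this API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate(text: str, target_language: str) -> str:
    """Ask GPT-4o to translate one side of the conversation."""
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Translate the user's message into {target_language}."},
            {"role": "user", "content": text},
        ],
    )
    return reply.choices[0].message.content

print(translate("Come stai oggi?", "English"))          # Italian -> English
print(translate("I'm doing well, thanks!", "Italian"))  # English -> Italian
```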

It’s not just voice understanding with GPT-4o, though; it can also understand visuals like a written-out linear equation and then guide you through how to solve it, as well as look at a live selfie and provide a description. That could be what you're wearing or your emotions. 

In this demo, GPT said the presenter looked happy and cheerful. It’s not without quirks, though. At one point ChatGPT said it saw the image of the equation before it was even written out, referring back to a previous visual of just a wooden tabletop.

Throughout the demo, ChatGPT worked quickly and didn't really struggle to understand the problem or ask about it. GPT-4o is also more natural than typing in a query, as you can speak naturally to your phone and get a desired response – not one that tells you to Google it.  

A little like “Samantha” in “Her”

If you’re thinking about Her or another futuristic-dystopian film with an AI, you’re not the only one. Speaking with ChatGPT in such a natural way is essentially the Her moment for OpenAI. Considering it will be rolling out to the mobile app and as a desktop app for free, many people may soon have their own Her moments.

The impressive demos across speech and visuals may only be scratching the surface of what's possible. Overall performance, and how well GPT-4o performs day-to-day in various environments, remains to be seen, and once it's available, TechRadar will be putting it to the test. Still, after this peek, it's clear that GPT-4o is preparing to take on the best Google and Apple have to offer in their eagerly anticipated AI reveals.

The outlook on GPT-4o

By announcing this the day before Google I/O kicks off, and just a few weeks after new AI gadgets like the Rabbit R1 hit the scene, OpenAI is giving us a taste of the truly useful AI experiences we want. If the rumored partnership with Apple comes to fruition, Siri could be supercharged, and Google will almost certainly show off its latest AI tricks at I/O on May 14, 2024. But will they be enough?

We wish OpenAI had shown off a few more live demos of GPT-4o in what turned out to be a jam-packed, less-than-30-minute keynote. Luckily, it will be rolling out to users in the coming weeks, and you won’t have to pay to try it out.


Microsoft could turbocharge Edge browser’s autofill game by using AI to help fill out more complex forms

Microsoft Edge looks like it’s getting a new feature that could help you fill out forms more easily thanks to a boost from GPT-4 (the most up-to-date large language model from the creators of ChatGPT, OpenAI).

Browsers like Edge already have auto-fill assistance features to help fill out fields asking for personal information that’s requested frequently, and this ability could see even more improvement thanks to GPT-4’s technology.

The digital assistant currently on offer from Microsoft, Copilot, is also powered by GPT-4, and has already seen considerable integration into Edge. In theory, the new GPT-4-driven form-filling feature will help Edge users tackle more complex or unusual questions, rather than the typical basic fields (name, address, email, etc.) that existing auto-fill functionality handles just fine.

However, right now this supercharged auto-fill is a feature hidden within the Edge codebase (it’s called “msEdgeAutofillUseGPTForAISuggestions”), so it’s not yet active even in testing. Windows Latest did attempt to activate the new feature, but with no luck – so it’s yet to be seen how the feature works in action. 
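Since the flag can't be switched on yet, we can only guess at the mechanics. A plausible shape, sketched below in Python with OpenAI's public client: combine a stored user profile with the unusual form question and ask the model to draft an answer. The profile, prompt, and overall design are our own illustration, not Microsoft's implementation.

```python
# Conceptual sketch of what GPT-4-assisted autofill might do under the
# hood; our illustration, not Microsoft's implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical stored profile, standing in for the data autofill keeps.
profile = "Name: Jane Doe. Occupation: freelance photographer based in Leeds."
question = "Briefly describe your relevant professional experience."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": f"Draft a concise form answer using this profile: {profile}"},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)  # suggested text for the form field
```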

(Image credit: Shutterstock/Gorodenkoff)

Bolstering the powers of Edge and Copilot

Of course, as noted, Edge’s current auto-fill feature is sufficient for most form-filling needs, but it won’t help with form fields that require more complex or longer answers. As Windows Latest observes, what you can do in the meantime is paste those kinds of questions directly into Edge’s Copilot sidebar, and the AI can help you craft an answer that way. You could also experiment with different conversation modes to obtain different answers.

This pepped-up auto-fill could be a useful addition for Edge, and Microsoft is clearly trying to develop both its browser, and the Copilot AI itself, to be more helpful and generally smarter.

That said, it’s hard to say how much Microsoft is prioritizing user satisfaction, as it’s equally implementing measures that are set to annoy some users. We’re thinking of its recent aggressive advertising strategy and the curbing of access to settings if your copy of Windows is unactivated, to pick a couple of examples. Not forgetting the quickly approaching end-of-support date for Windows 10 (its most popular operating system).

Copilot was presented as an all-purpose assistant, but the AI still leaves a lot to be desired. However, it’s gradually seeing improvements and integration into existing Microsoft products, and we’ll have to see if the big bet on Copilot pans out as envisioned. 


Latest Meta Quest 3 update improves mixed-reality passthrough yet again, and brings more iPhone-exclusive features

A new month, a new Meta Quest 3 headset update. V64 may have only landed (checks notes) 21 days ago, but we’ve got yet another upgrade courtesy of Horizon OS version v65.

Keeping up the déjà vu, v65 brings with it yet another upgrade to passthrough, which was only just improved in v64 – that update added exposure and dynamic range improvements, plus a change that makes it easier to see your real-world furniture while in VR and MR.

Now, Meta is finally giving players the option to stay immersed in mixed reality through their whole Quest 3 experience. 

Previously, when you were on the lock screen, power-off screen, or a few other important menus, you’d be trapped in a gray VR void. Now, if you're using MR home, you’ll instead find yourself surrounded by your real-world space, just like you would in any other mixed reality experience.

Sure, it's not the flashiest upgrade, but considering Meta’s monthly release schedule, we’re not going to complain if some updates are simple quality-of-life improvements rather than earth-shaking changes.

Mixed reality from start to finish (Image credit: Meta)

Some iPhone-exclusive upgrades 

Beyond better passthrough, Meta has also introduced a few features for iPhone users specifically – perhaps in an attempt to further convince Apple fans they don't need to shell out for an Apple Vision Pro, or wait for the now apparently delayed cheaper follow-up.

The first feature change comes to spatial video. Playback appeared via update v62 back in February, and if you had an iPhone 15 Pro you could upload your stereoscopic videos straight from your phone to your headset using the Meta Quest mobile app.

Now you can upload your videos via any iPhone running iOS 17 or later – though capturing spatial video is still an exclusive iPhone 15 Pro and iPhone 15 Pro Max feature (unless the iPhone 16 refresh brings it to more affordable models later this year).

Panorama images on the Meta Quest 3 showing a beautiful hillside (Image credit: Meta)

Meta is also adding better support for still panoramic images. Alongside videos, you can now upload your panoramic shots from your iPhone to your Quest headset via the mobile app.

So, rather than simply viewing your shot on a flat screen, you can be re-immersed in the location where you took it. Again this has to be uploaded via an iPhone running iOS 17 or later.

There's no word yet on when or if these features will come to Android devices, but we expect they will – especially if new Android devices start to introduce camera setups that can record spatial videos.

With a Samsung XR headset – which Google is helping to make – on the way, we wouldn't be surprised if phone cameras that can record spatial video became more common. But we’ll have to wait and see what Android phone makers announce in the coming weeks.


ChatGPT Plus just got a major update that might make it feel more human – here’s how the new memory feature works

Artificial intelligence might seem a little less artificial today now that Memory is live for all ChatGPT Plus users.

After a few months of testing in both the free and paid versions of the generative AI chatbot, OpenAI chose to enable the feature, for paying customers only, in all regions except Korea and Europe.

ChatGPT's memory is exactly what it sounds like. During prompt-driven “conversations” with the AI, ChatGPT Plus can now remember key facts about the conversations, including details about you, and then apply that information to future interactions. Put another way, ChatGPT Plus just graduated from a somewhat disinterested acquaintance to a friend who cares enough to remember that your birthday is next week or that you recently bought a dog.

You can explicitly tell the system to remember something, or just state facts about yourself that it will pick up on its own.

Cross-chat memory introduction (Image credit: Future)

I know, it's the kind of thing that could make AIs like ChatGPT far more useful or completely terrifying. Up until now, we've mostly dealt with generative AIs that had intense short-term memory loss. Systems like ChatGPT, Google's Gemini, and Microsoft Copilot could carry on lengthy, discrete conversations where they'd do a decent job of maintaining context (though the longer the conversation, the wonkier this could get). If, however, you ended one conversation and started another, it was like meeting a completely different person who knew nothing about you or the conversation you had three minutes ago.

Unlike human memory, which can remember some things forever but easily forget others, ChatGPT Plus Memory is in your control.

Controlling ChatGPT Plus Memory

As I mentioned earlier, you can help ChatGPT Plus build its Memory by telling it things about yourself that you want it to remember. By doing so, you'll notice that when you ask, say, your age or where you live, it will be able to tell you. ChatGPT will also take those details and combine them with future queries, which could shorten your conversation and make the results more accurate and useful.
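OpenAI hasn't detailed how Memory is implemented, but the observable behavior matches a simple pattern: persist key facts, then feed them back in as context with each new conversation. Here's a toy Python sketch of that pattern – our illustration of the concept, not OpenAI's code:

```python
# Toy sketch of the cross-chat memory pattern: persist key facts, then
# prepend them to each fresh conversation. Not OpenAI's implementation.
import json
import pathlib

MEMORY_FILE = pathlib.Path("memories.json")  # hypothetical local store

def recall() -> list:
    """Load every remembered fact."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    """Persist a new fact across conversations."""
    facts = recall()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def build_prompt(user_message: str) -> list:
    """Start a fresh conversation with remembered facts as context."""
    context = "Known facts about the user: " + "; ".join(recall())
    return [{"role": "system", "content": context},
            {"role": "user", "content": user_message}]

remember("Loves houseplants")
print(build_prompt("How can I liven up my home?"))
```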

Memory is enabled by default. You can find it under Settings/Personalization. There's a toggle switch where you can turn it off.

ChatGPT Plus Memory control. (Image credit: Future)

To see all of ChatGPT Plus' memories, you select the Manage button, which sits right below the Memory description and toggle. Initially, even though I told ChatGPT Plus to remember things about me, my memory box remained empty. If there had been any in there, I could have cleared all of them or selected only the ones I wanted to remove.

However, when I told ChatGPT “I really love houseplants,” I saw a little notation appear right above its response that said: “Memory updated.” When I selected that, the memory, “Loves houseplants”, appeared below it, and right below that, a link to Manage memories.

I made ChatGPT Plus remember my love of houseplants (Image credit: Future)

Later, when I asked ChatGPT Plus how I might liven up my home, it answered, in part (I bolded the relevant bit), “Adding some houseplants is a great way to liven up your home! They not only beautify the space but also improve air quality and can enhance your mood. Since you love houseplants, you might consider diversifying the types you have….”

As noted, Memory is not free. A ChatGPT Plus subscription, which gives you, among other things, access to the GPT-4 model, costs $20 / £20 a month. I asked OpenAI if any version of Memory is coming to non-paying ChatGPT users and will update this post with their response.

Sure, ChatGPT Plus Memory nudges the generative AI in the direction of humanity, but there is, as far as I know, no way to go into anyone's mind and delete some or all memories.

Temporary Chat will turn off memories for that chat. (Image credit: Future)

While you can turn off Memories, you might like the middle option, which uses the new “Temporary Chat” to introduce short-term amnesia to the system.

To use it, choose the ChatGPT model you want from the drop-down menu and then select “Temporary chat”. Now, nothing you share with ChatGPT Plus during that chat will be added to its memory.

Come to think of it, a real friend, who only remembers what you want them to, could come in handy.
