Apple has officially announced macOS 15 Sequoia at this year's WWDC 2024 event (you can follow all the announcements as they happen at our WWDC 2024 live blog), giving us an early view of the upcoming operating system for Macs and MacBooks.
Following on from macOS Sonoma, which was revealed at last year's WWDC, macOS 15 comes with a range of new features, many of which make use of Apple's artificial intelligence (AI) tools, which were also announced at WWDC 2024.
What we know so far
These are the new features of macOS 15:
Your iPhone screen can now be mirrored in macOS 15
iPhone notifications are coming to Macs
Improved window layouts – drag a window to the side of your screen and macOS 15 will give you options for arranging windows
You can replace backgrounds when using FaceTime
Passwords app replaces Keychain, making it easier to organize and sync your passwords – and it's also coming to iPhone, iPad, Vision Pro, and even Windows PCs!
This story is breaking. We'll continue to update this article – and check out our WWDC 2024 live blog for all the breaking news.
Ahead of Global Accessibility Awareness Day on May 16, 2024, Apple unveiled a number of new accessibility features for the iPhone, iPad, Mac, and Vision Pro. Eye tracking leads a long list of new functionality that will let you control your iPhone and iPad by moving your eyes.
Eye Tracking, Music Haptics, Vocal Shortcuts, and Vehicle Motion Cues will arrive on eligible Apple devices later this year. These new accessibility features will most likely be released with iOS 18, iPadOS 18, visionOS 2, and the next version of macOS.
These new accessibility features have become a yearly drop for Apple. The curtain is normally lifted a few weeks before WWDC, aka the Worldwide Developers Conference, which kicks off on June 10, 2024. That should be the event where we see Apple show off its next generation of main operating systems and AI chops.
Eye-Tracking looks seriously impressive
Eye tracking looks seriously impressive and is a key way to make the iPhone and iPad even more accessible. As noted in the release and captured in a video, you can navigate iPadOS – as well as iOS – open apps, and even control interface elements with just your eyes. The feature relies on the front-facing camera, artificial intelligence, and on-device machine learning throughout the experience.
You can look around the interface and use “Dwell Control” to engage with a button or element. Gestures will also be handled through just eye movement. This means that you can first look at Safari, Phone, or another app, hold that view, and it will open.
Most critically, all setup and usage data is kept local on the device, so you’ll be set with just your iPhone. You won’t need an accessory to use eye tracking. It’s designed for people with physical disabilities and builds upon other accessible ways to control an iPhone or iPad.
Vocal Shortcuts, Music Haptics, and Live Captions on Vision Pro
Another new accessibility feature is Vocal Shortcuts, designed for iPad and iPhone users with ALS (amyotrophic lateral sclerosis), cerebral palsy, stroke, or “acquired or progressive conditions that affect speech.” This will let you set up a custom sound that Siri can learn and identify to launch a specific shortcut or run through a task. It lives alongside Listen for Atypical Speech, designed for the same users, which opens up speech recognition to a wider set of people.
These two features build upon some introduced within iOS 17, so it’s great to see Apple continue to innovate. With Atypical Speech, specifically, Apple is using artificial intelligence to learn and recognize different types of speech.
Music Haptics on the iPhone is designed to let users who are hard of hearing or deaf experience music. The built-in Taptic Engine, which powers the iPhone's haptics, will play different vibrations, like taps and textures, that resemble a song's audio. At launch, it will work across “millions of songs” within Apple Music, and there will be an open API for developers to make music from other sources accessible too.
Additionally, Apple previewed a few other features and updates. Vehicle Motion Cues will be available on iPhone and iPad, and aims to reduce motion sickness with animated dots on the screen that change as vehicle motion is detected – all without blocking whatever you're viewing.
One major addition arriving for visionOS – aka the software that powers Apple Vision Pro – will be Live Captions across the entire system. This will let you see captions for spoken dialogue in FaceTime conversations, and for audio from apps, right in front of you. Apple notes in the release that the feature was designed for users who are deaf or hard of hearing, but like all accessibility features, it can be found in Settings.
Since this is Live Captions on an Apple Vision Pro, you can move the window containing the captions around and adjust its size like any other window. Vision accessibility within visionOS will also gain reduced transparency, smart inverting, and dim flashing lights functionality.
Regarding when these will ship, Apple notes in the release that the “new accessibility features [are] coming later this year.” We'll keep a close eye on this, and imagine they'll ship with the next generation of operating systems like iOS 18 and iPadOS 18, meaning folks with a developer account may be able to test these features in forthcoming beta releases.
Considering that a few of these features are powered by on-device machine learning and artificial intelligence, accessibility is clearly one area where Apple believes AI has the potential to make an impact. We'll likely hear the technology giant share more of its thoughts on AI and consumer-ready features at WWDC 2024.
While the focus of Apple’s May 7 special event was mostly hardware — four new iPads, a new Apple Pencil, and a new Magic Keyboard — there were mentions of AI with the M2 and M4 chips as well as new versions of Final Cut Pro and Logic Pro for the tablets.
The latter is all about new AI-powered features that let you create a drum beat or a piano riff, or even add a warmer, more distorted feel to a recorded element. Even neater, Logic Pro for iPad 2 can now take a single recording and split it into individual tracks based on the instruments in a matter of seconds.
It's a look behind the curtain at the kind of AI features Apple sees the most appeal in. Notably, unlike some rollouts from Google or OpenAI, it's not a chatbot or an image generator. With Logic Pro, you're getting features that can be genuinely helpful and further expand what you can do within an app.
A trio of AI-powered additions for Logic Pro for iPad
Arguably the most helpful feature for musicians will be Stem Splitter, which aims to solve the problem of separating out elements within a given track. Say you’re working through a track or giving an impromptu performance at a cafe; you might just hit record in Voice Memos on an iPhone or using a single microphone.
The result is one track that contains all the instruments mixed together. Logic Pro for iPad 2 can now import that track, analyze it, and split it into four tracks: vocals, drums, bass, and other instruments. It won't change the sound, but it essentially puts each element on a separate track, allowing you to easily modify or edit it. You can even apply plugins – something that Logic is known for – on both iPad and Mac.
The iPad Pro with M4 will likely be mighty speedy when tackling this thanks to its 16-core neural processing unit, but it will work on any iPad with Apple Silicon through a mixture of on-device AI and deep learning. For musicians big or small, it’s poised to be a simple, intuitive way to convert voice memos into workable and mixable tracks.
AI-powered instruments to complete a track
Building on Stem Splitter is a big expansion of Session Players. Logic Pro has long offered Drummer – both on Mac and iPad – as a way to easily add drums to a track via a virtual player that can be customized by style and even complexity. Logic Pro for iPad 2 adds a piano and a bass player to the mix, both highly adjustable session players for any given track. With the piano in particular, you can customize the left or right hand's playing style, pick between four types of piano, and use a plethora of other sliders. It's even smart enough to recognize where in a track it is, be it a chorus or a bridge. It only took a few seconds to come up with a decent-sized track on an iPad Pro.
If you’re only a singer or desperately need a bass line for your track, Logic Pro for iPad 2 aims to solve this with an output that plays with and complements any existing track.
Rounding out this AI expansion for Logic Pro on the iPad is the ChromaGlow effect, which takes a common but expensive piece of hardware normally reserved for studios and puts it on the iPad, adding a bit more space, color, and even warmth to a track. Like other Logic plugins, you can pick between a few presets and adjust them further.
Interestingly enough, alongside these updates, Apple didn’t show off any new Apple Pencil integrations for Logic Pro for iPad 2. I’d have to imagine that we might see a customized experience with the palette tool at some point.
It’s clear that Apple’s approach to AI, like its other software, services, and hardware, is centered around crafting a meaningful experience for whoever uses it. In this case, for musicians, it’s solving pain points and opening doors for creativity further.
Stem Splitter, the new session players, and ChromaGlow feel right at home within Logic Pro, and I expect to see similar enhancements to other Apple apps announced at WWDC. Just imagine an easier way to edit photos or videos baked into the Photos app, or a way to streamline or condense a presentation within Keynote.
Pricing and Availability
All of these features are bundled with Logic Pro for iPad 2, which is set to roll out on May 13, 2024. If you're already subscribed at $4.99 a month or $49 a year, you'll get the update for free, and there's no price increase if you're new to the app. Additionally, there's a one-month free trial for first-time Logic Pro for iPad users.
Apple announced the M4 chip, a powerful new upgrade that will arrive first in the next-generation iPad Pro (and, further down the line, the best MacBooks and Macs). You can check out our beat-by-beat coverage of the Apple event, but one element of the presentation has left some users confused: what exactly does TOPS mean?
TOPS is an acronym for 'trillion operations per second', and is essentially a hardware-specific measure of AI capability. More TOPS means faster on-chip AI performance – in this case, from the Neural Engine found in the Apple M4 chip.
The M4 chip is capable of 38 TOPS – that's 38,000,000,000,000 operations per second. If that sounds like a staggeringly massive number, well, it is! Modern neural processing units (NPUs) like Apple's Neural Engine are advancing at an incredibly rapid rate; for example, Apple's own A16 Bionic chip, which debuted in the iPhone 14 Pro less than two years ago, offered 17 TOPS.
Apple's new chip isn't even the most powerful AI chip about to hit the market – Qualcomm's upcoming Snapdragon X Elite purportedly offers 45 TOPS, and is expected to land in Windows laptops later this year.
How is TOPS calculated?
The processes by which we measure AI performance are still in relative infancy, but TOPS provides a useful and user-accessible metric for discerning how 'good' at handling AI tools a given processor is.
I'm about to get technical, so if you don't care about the mathematics, feel free to skip ahead to the next section! The current industry standard for calculating TOPS is TOPS = 2 × MAC unit count × Frequency / 1 trillion. 'MAC' stands for multiply-accumulate; a MAC operation is basically a pair of calculations (a multiplication and an addition) that are run by each MAC unit on the processor once every clock cycle, powering the formulas that make AI models function. Every NPU has a set number of MAC units determined by the NPU's microarchitecture.
'Frequency' here is defined by the clock speed of the processor in question – specifically, how many cycles it can process per second. It's a common metric also used in CPUs, GPUs, and other components, essentially denoting how 'fast' the component is.
So, to calculate how many operations per second an NPU can handle, we simply multiply the MAC unit count by 2 for our number of operations, then multiply that by the frequency. This gives us an 'OPS' figure, which we then divide by a trillion to make it a bit more palatable (and kinder on your zero key when typing it out).
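For anyone who wants to see that arithmetic in action, here's a minimal sketch of the formula in Swift. The MAC-unit count and clock frequency below are hypothetical placeholder figures (picked so the result happens to land on 38), not Apple's published specifications for the M4's Neural Engine.

```swift
// A minimal sketch of the industry-standard formula:
// TOPS = (2 × MAC unit count × clock frequency in Hz) / 1 trillion
func tops(macUnits: Double, frequencyHz: Double) -> Double {
    // Each MAC unit performs two operations (one multiply, one add) per clock cycle
    let operationsPerSecond = 2 * macUnits * frequencyHz
    return operationsPerSecond / 1e12 // scale raw OPS down to trillions
}

// Hypothetical NPU: 10,000 MAC units clocked at 1.9GHz – placeholder figures,
// not Apple's real M4 numbers, chosen only so the output lands on 38
let example = tops(macUnits: 10_000, frequencyHz: 1.9e9)
print("\(example) TOPS") // prints "38.0 TOPS"
```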
Simply put, more TOPS means better, faster AI performance.
Why is TOPS important?
TOPS is, in the simplest possible terms, our current best way to judge the performance of a device for running local AI workloads. This applies both to the industry and the wider public; it's a straightforward number that lets professionals and consumers immediately compare the baseline AI performance of different devices.
TOPS is only applicable for on-device AI, meaning that cloud-based AI tools (like the internet's favorite AI bot, ChatGPT) don't typically benefit from better TOPS. However, local AI is becoming more and more prevalent, with popular professional software like the Adobe Creative Cloud suite starting to implement more AI-powered features that depend on the capabilities of your device.
It should be noted that TOPS is by no means a perfect metric. At the end of the day, it's a theoretical figure derived from hardware statistics and can differ greatly from real-world performance. Factors such as power availability, thermal systems, and overclocking can impact the actual speed at which an NPU can run AI workloads.
To that end, though, we're now starting to see AI benchmarks crop up, such as Procyon AI from UL Benchmarks (makers of the popular 3DMark and PCMark benchmarking programs). These can provide a much more realistic idea of how well a device actually handles AI workloads. You can expect to see TechRadar running AI performance tests as part of our review benchmarking in the near future!
At Let Loose 2024, Apple revealed big changes coming to its Final Cut software, ones that effectively turn your iPad into a mini production studio. Chief among these is the launch of Final Cut Pro for iPad 2. It’s a direct upgrade to the current app that is capable of taking full advantage of the new M4 chipset. According to the company, it can render videos up to twice as fast as Final Cut Pro running on an M1 iPad.
Apple is also introducing a feature called Live Multicam. This allows users to connect their tablet to up to four different iPhones or iPads at once and watch a video feed from all the sources in real time. You can even adjust the “exposure, focus, [and] zoom” of each live feed directly from your master iPad.
Looking at Apple’s demo video, selecting a source expands the footage to fill up the entire screen where you can then make the necessary adjustments. Tapping the Minimize icon in the bottom right corner lets creators return to the four-split view. Apple states that previews from external devices are sent to Final Cut Pro so you can quickly begin editing.
Impactful upgrades
You can't connect your iPhone to the multicam setup using the regular Camera app, which won't support it. Users will instead have to install a new app called Final Cut Camera on their mobile device. Besides Live Multicam compatibility, Apple says you can tweak settings like white balance, shutter speed, and more to obtain professional-grade recordings. The on-screen interface even lets videographers monitor their footage via a zebra stripes tool and an audio meter.
Going back to the Final Cut Pro update, there are other important features we've yet to mention. The platform “now supports external projects”, meaning you can create a video project on, and import media to, an external storage drive without sacrificing space on the iPad. Apple is also adding more customization tools to the software, like 12 additional color-grading presets and more dynamic backgrounds.
Final Cut Pro for Mac is set to receive a substantial upgrade too. Although it won’t support the four iPhone video feeds, version 10.8 does introduce several tools. For example, Enhance Light and Color offers a quick way to improve color balance and contrast in a clip among other things. Users can also give video effects and color corrections a custom name for easy identification. It’s not a total overhaul, but these changes will take some of the headache out of video editing.
Availability
There are different availability dates for the three products. Final Cut Pro for iPad 2 launches this spring and will be a “free update for existing users”. For everyone else, it will be $5 / £5 / AU$8 a month or $50 / £50 / AU$60 a year for access. Final Cut Camera is set to release in the spring as well and will be free for everyone. Final Cut Pro for Mac 10.8 is another free update for existing users; for newcomers, it'll cost $300 / £300 / AU$500 on the Mac App Store.
We don’t blame you if you were totally unaware of the Final Cut Pro changes as they were overshadowed by Apple's new iPad news. Speaking of which, check out TechRadar’s guide on where to preorder Apple’s 2024 iPad Pro and Air tablets.
Continuing a yearly tradition, Apple has revealed this year’s Pride Collection celebrating the LGBTQ+ community. The 2024 set consists of two new wallpapers for iPhones and iPads plus a new watch face and wristband for the Apple Watch.
Launching first on May 22 is the band which is called the Pride Edition Braided Solo Loop. Apple states the color scheme was inspired by multiple pride flags. The pink, light blue, and white threads are meant to “represent transgender and nonbinary” people, while “black and brown symbolize Black, Hispanic, and Latin communities” plus groups who have been hurt by HIV/AIDS. Laser-etched on the lug are the words “PRIDE 2024”.
The Pride Edition Braided Solo Loop will be available in both 41mm and 45mm sizes for $99. It'll fit the Apple Watch SE as well as “Apple Watch Series 4 or later” models. You can purchase it in the US on the 22nd at a physical Apple Store or on the company's website, while other regions can buy the band the following day. There's no word on how much it'll cost outside the United States, although we did ask.
Dynamic wallpaper
The wallpaper coming to Apple hardware is known as Pride Radiance. What sets it apart is that it isn't a static image but a dynamic one. On the Apple Watch, the streams of light actively trace the numbers of the digital clock, and they even react in real time as the wearable moves around. 9to5Mac claims in its coverage that users can customize the look of the wallpaper by choosing “from several style palettes.”
On iPhones and iPads, Pride Radiance is also dynamic, but it doesn’t trace the clock. Instead, the light spells out the word “pride” on the screen. Those interested can download the wallpaper through the Apple Watch and Apple Store app “soon”. An exact date wasn’t given. However, the company did confirm it’ll roll out with iOS 17.5, iPadOS 17.5, and watchOS 10.5.
This is noteworthy because, up until this recent post, the company had yet to announce when the next big software update would arrive for its devices. iOS 17.5 in particular is slated to introduce several interesting features, such as the ability to download apps from developer websites instead of the App Store. We also saw clues last week that the company is working on implementing Repair State, which places iPhones “in a special hibernation mode” whenever people take the device in for repairs.
Given that Repair State appears to still be in its early stages, we most likely won't see it in iOS 17.5 a few weeks from now, although it may roll out with iOS 18.
It’s an open secret that Apple is going to unveil a whole host of new artificial intelligence (AI) software features in the coming weeks, with major overhauls planned for iOS 18, macOS 15, and more. But it’s not just new features that Apple is hoping to hype up – it’s the way in which those AI tools are put to use.
Tim Cook has just let slip that Apple’s generative AI will have some major “advantages” over its rivals. While the Apple CEO didn’t explain exactly what Apple’s generative AI will entail (we can expect to hear about that at WWDC in June), what he did say makes a whole lot of sense.
Speaking on Apple’s latest earnings call yesterday, Cook said: “We believe in the transformative power and promise of AI, and we believe we have advantages that will differentiate us in this new era, including Apple’s unique combination of seamless hardware, software, and services integration, groundbreaking Apple silicon with our industry-leading neural engines, and our unwavering focus on privacy, which underpins everything we create.”
Cook also said Apple is making “significant investments” in generative AI, and that he has “some very exciting things” to unveil in the near future. “We continue to feel very bullish about our opportunity in generative AI,” he added.
Why Tim Cook might be right
There are plenty of reasons why Apple’s AI implementation could be an improvement over what's come before it, not least of which is Apple’s strong track record when it comes to privacy. The company often prefers to encrypt data and run tasks on your device, rather than sending anything to the cloud, which helps ensure that it can’t be accessed by nefarious third parties – and when it comes to AI, it looks like this approach might play out again.
Bloomberg's Mark Gurman, for example, has reported that Apple’s upcoming AI features will work entirely on your device, thereby continuing Apple’s commitment to privacy, amid concerns that the rapid development of AI is putting security and privacy at risk. If successful, it could also be a more ethical approach to AI than that employed by Apple’s rivals.
In addition, the fact that Apple creates both the hardware and software in its products allows them to be seamlessly integrated in ways most of its competitors can’t match. It also means devices can be designed with specific use cases in mind that rely on hardware and software working together, rather than Apple having to rely on outside manufacturers to play ball. When it comes to AI, that could result in all kinds of benefits, from performance improvements to new app features.
We’ll find out for sure in the coming weeks. Apple is hosting an iPad event on May 7, which reports have suggested Apple might use to hint at upcoming AI capabilities. Beyond that, the company’s Worldwide Developers Conference (WWDC) lands on June 10, where Apple is expected to devote significant energy to its AI efforts. Watch this space.
If you were a Mac user in the 80s and 90s, you got to use the classic versions of the Mac operating system that evolved into the macOS we know and love today. Now, I've got good news for anyone who's feeling nostalgic: you don't have to go digging through eBay or your attic for an old Mac to use a retro iteration of the OS.
A website called Infinite Mac, designed by Mihai Parparita, allows you to use every classic Mac operating system from 1985 to 2001. Once you head over to the Infinite Mac website you can scroll through your options, find the one you want to try out, and click Run. Then, like Marty McFly, you’ll be magically transported back through time to the macOS of your choice!
Blast from the past
You won't have to install anything, as it all runs within your browser, and you'll be guided through the setup so you can use the system as you would a regular computer. You can create new files, explore the interface, and even play a few old-school games – including the full versions of Doom II, Quake, and Myst – although they're unsurprisingly a little janky to play in an emulated in-browser OS.
You can also access a saved hard drive that backs up any files you create locally, and you can drag files from your desktop into the browser via “Outside World”. You'll be able to try out a collection of CDs, old games, and even some software that came bundled on floppy disks with magazines at the time.
As a modern-day Apple user born in the year 2000, I think it’s pretty cool that I can take an educational trip down memory lane and see what older versions of the current system look like. It really makes you appreciate not just how far we’ve come in the world of computing – but also showcases how far we’ve yet to go! I can’t wait to see what macOS looks like in 10 years, or 20 – probably loaded up with AI, if recent news is anything to go by.
One of the most popular uses for Apple's Vision Pro headset is to enjoy movies and TV shows on its enormous virtual screen, but not all streamers are on board. Netflix in particular caused some disappointment when it said it had no plans to make a native Vision Pro app for its service.
Not to worry. Independent developer Christian Privitelli has stepped in to deliver what some streamers won't. His app, Supercut, lets you stream Netflix and Prime Video, and is designed specifically for Apple's virtual viewer.
The app works much like Apple's own TV Plus app, but instead of Apple content it offers Netflix and Prime Video without the letterboxing you get when viewing shows and movies from the headset's web browser. It's not packed with gimmicks and doesn't have the pleasant virtual theater of the Disney Plus app, but it's cheap and effective, and that's good enough for me.
Say hello to Supercut. 🎬🍿 My Netflix and Prime Video app for Vision Pro is now available to download on the App Store. pic.twitter.com/V9wKLnCSPy – April 6, 2024
What Redditors are saying about Supercut for Vision Pro
If you want to know the ups and downs of any AV app, Reddit's always a good place to look – and the reaction to Supercut in r/visionpro has been positive, no doubt partly because Privitelli has been cheerfully chatting with redditors about what the app can do, what it can't, and what he hopes to add next. Future versions are likely to include some virtual viewing environments, too.
At just $4.99 for the app – roughly 1/700th of the cost of your Vision Pro – it's extremely affordable, and that means you'll happily forgive its shortcomings, such as the fairly basic Prime Video implementation. It delivers 4K, Dolby Atmos, and Dolby Vision if your Netflix subscription includes them, and it supports multiple profiles for easy account switching. It'll also tell you what resolution you're getting and whether Dolby Atmos or Dolby Vision is active.
While the initial hype over Apple Vision Pro may have died down, Apple is still busy developing and rolling out fresh updates, including a new one that lets multiple Personas work and play together.
Apple briefly demonstrated this capability when it introduced the Vision Pro and gave me my first test drive last year, but now spatial Personas are live on Vision Pro mixed-reality headsets.
To understand “spatial Personas” you need to start with the Personas part. You capture these somewhat uncanny-valley 3D representations of yourself using the Vision Pro's spatial (3D) cameras. The headset uses that data to build a 3D stand-in that can mimic your face, head, upper torso, and hand movements, and it can be used in FaceTime and other video calls (if supported).
Spatial Personas do two key things: they let you put two (or more) avatars in one spatially aware space, and they let those avatars interact with either different screens or the same one. This all still happens within the confines of a FaceTime call, where Vision Pro users will see a new “spatial Persona” button.
To enable this feature, you'll need the visionOS 1.1 update and may need to reboot the mixed-reality headset. After that, you can tap the spatial icon at any time during a FaceTime Persona call to turn it on.
Almost together
Spatial Personas support collaborative work and communal viewing experiences by combining the feature with Apple's SharePlay.
This will let you “sit side by side” (Personas don't have butts, legs, or feet, so “sitting” is an assumed experience) to watch the same movie or TV show. In an Environment (you spin the Vision Pro's Digital Crown until your real world disappears in favor of a selected environment, like Yosemite), you can also play multiplayer games. Most Vision Pro owners might choose “Game Room”, which positions the spatial avatars around a game table. A spatial Persona call can become a real group activity, with up to five spatial Personas participating at once.
Vision Pro also supports spatial audio, which means the audio for the Persona on the right will sound like it's coming from the right. Working in this fashion could end up feeling like everyone is in the room with you, even though they're obviously not.
Currently, any app that supports SharePlay can work with spatial Personas, but not every app will allow for single-screen collaboration. If you use window share or share the app, other Personas will be able to see your app window but not interact with it.
Being there
While your spatial Persona will appear in other people's spaces during the FaceTime call, you'll remain in control of your own viewing experience and can still move your windows and Persona to suit your needs without messing up what others see in the shared experience.
In a video Apple shared, two spatial Personas are positioned on either side of a Freeform app window, which is, in and of itself, somewhat remarkable. But things take a surprising turn when each of them reaches out with their Persona hands to control the app with gestures. That feels like a game-changer to me.
In some ways, this seems like a much more limited form of Meta CEO Mark Zuckerberg's metaverse ideal, where we live, work, and play together in virtual reality. In this case, we collaborate and play in mixed reality while using still somewhat uncanny-valley avatars. To be fair, Apple has already vastly improved the look of these things; they're still a bit jarring, but less so than when I first set mine up in February.
I haven't had a chance to try the new feature, but seeing those two floating Personas reaching out and controlling an app floating in a single Vision Pro space is impressive. It's also a reminder that it's still early days for Vision Pro and Apple's vision of our spatial computing future. When it comes to utility, the pricey hardware clearly has quite a bit of road ahead of it.