TOPS explained – exactly how powerful is Apple’s new M4 iPad chip?

Apple announced the M4 chip, a powerful new upgrade that will arrive in the next-generation iPad Pro (and, further down the line, the best MacBooks and Macs). You can check out our beat-by-beat coverage of the Apple event, but one element of the presentation has left some users confused: what exactly does TOPS mean?

TOPS is an acronym for 'trillion operations per second', and is essentially a hardware-specific measure of AI capability. More TOPS means faster on-chip AI performance – in this case, from the Neural Engine found on the Apple M4 chip.

The M4 chip is capable of 38 TOPS – that's 38,000,000,000,000 operations per second. If that sounds like a staggeringly massive number, well, it is! Modern neural processing units (NPUs) like Apple's Neural Engine are advancing at an incredibly rapid rate; for example, Apple's own A16 Bionic chip, which debuted in the iPhone 14 Pro less than two years ago, offered 17 TOPS.

Apple's new chip isn't even the most powerful AI chip about to hit the market – Qualcomm's upcoming Snapdragon X Elite purportedly offers 45 TOPS, and is expected to land in Windows laptops later this year.

How is TOPS calculated?

The processes by which we measure AI performance are still in relative infancy, but TOPS provides a useful and user-accessible metric for discerning how 'good' a given processor is at handling AI tools.

I'm about to get technical, so if you don't care about the mathematics, feel free to skip ahead to the next section! The current industry standard for calculating TOPS is TOPS = 2 × MAC unit count × Frequency / 1 trillion. 'MAC' stands for multiply-accumulate; a MAC operation is basically a pair of calculations (a multiplication and an addition) that are run by each MAC unit on the processor once every clock cycle, powering the formulas that make AI models function. Every NPU has a set number of MAC units determined by the NPU's microarchitecture.

'Frequency' here is defined by the clock speed of the processor in question – specifically, how many cycles it can process per second. It's a common metric also used in CPUs, GPUs, and other components, essentially denoting how 'fast' the component is. 

So, to calculate how many operations per second an NPU can handle, we simply multiply the MAC unit count by 2 for our number of operations, then multiply that by the frequency. This gives us an 'OPS' figure, which we then divide by a trillion to make it a bit more palatable (and kinder on your zero key when typing it out).
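To make that concrete, here's a minimal Python sketch of the formula described above. The MAC count and clock speed are hypothetical placeholders chosen to land near the M4's quoted figure, not Apple's published Neural Engine specifications.

```python
# A minimal sketch of the industry-standard TOPS formula described above:
#   TOPS = 2 x MAC unit count x frequency / 1 trillion
# The MAC count and clock speed used here are hypothetical placeholders,
# not Apple's published Neural Engine specifications.

def tops(mac_units: int, frequency_hz: float) -> float:
    """Each MAC unit performs 2 operations (a multiply and an add) per clock cycle."""
    ops_per_second = 2 * mac_units * frequency_hz
    return ops_per_second / 1e12  # divide by a trillion to get TOPS

# Example: a hypothetical NPU with 9,728 MAC units clocked at 1.95GHz
# lands close to the M4's quoted figure of 38 TOPS.
print(f"{tops(9_728, 1.95e9):.1f} TOPS")  # prints: 37.9 TOPS
```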

Simply put, more TOPS means better, faster AI performance.

Adobe Premiere Pro's Firefly Video AI tools in action

Adobe’s Firefly generative AI tool can be hardware-accelerated by your device’s NPU. (Image credit: Adobe)

Why is TOPS important?

TOPS is, in the simplest possible terms, our current best way to judge the performance of a device for running local AI workloads. This applies both to the industry and the wider public; it's a straightforward number that lets professionals and consumers immediately compare the baseline AI performance of different devices.

TOPS is only applicable for on-device AI, meaning that cloud-based AI tools (like the internet's favorite AI bot, ChatGPT) don't typically benefit from better TOPS. However, local AI is becoming more and more prevalent, with popular professional software like the Adobe Creative Cloud suite starting to implement more AI-powered features that depend on the capabilities of your device.

It should be noted that TOPS is by no means a perfect metric. At the end of the day, it's a theoretical figure derived from hardware statistics and can differ greatly from real-world performance. Factors such as power availability, thermal systems, and overclocking can impact the actual speed at which an NPU can run AI workloads.

With that in mind, we're now starting to see dedicated AI benchmarks crop up, such as Procyon AI from UL Benchmarks (makers of the popular 3DMark and PCMark benchmarking programs). These can provide a much more realistic idea of how well a given device handles real-world AI workloads. You can expect to see TechRadar running AI performance tests as part of our review benchmarking in the near future!


Qualcomm and Microsoft’s game-changing chip could supercharge Windows 12

We’re edging ever closer to the next generation of Windows, which most people expect will be Windows 12, and at Qualcomm's Snapdragon X Elite event this week we got a peek at some potential Windows 12 features.

Qualcomm, a company that specialises in wireless-related semiconductors, software, and services, unveiled a flashy new processor, the Snapdragon X Elite, and made some bold claims about it. The company says the chip will boost Windows on ARM devices in a big way, and will play a crucial role in how the next generation of Windows devices functions.

At the event, Qualcomm shared the stage with Microsoft CEO Satya Nadella and CVP (Corporate Vice President) Pavan Davuluri, to discuss the Snapdragon X Elite processor and the topic of NPUs (Neural Processing Units) in the context of future Windows machines. 

The discussion was more about broad strokes and less about specifics, as there were no demonstrations of the new hardware or even explicit mentions of “Windows 12”, but we did learn about some features that are in the pipeline, which many people felt were hints at what the next version of Windows could be like.

A leaked screenshot of a possible Windows 12 OS mockup.

(Image credit: Microsoft)

What AI will look like in future Windows versions

As Windows Central reports, Nadella first described his (and Microsoft's) vision for how AI is shaping computing. Nadella thinks that generative AI ('gen AI', as he called it) can be as significant as smartphones and mobile computing (something he previously declared at the Envision event we attended last week), the emergence of cloud computing, the internet, and the personal computer have been in the recent past. He thinks gen AI will transform human-computer interaction, potentially making it more intuitive and friendly to us, and even changing how we behave around our devices.

According to Nadella, gen AI will transform operating systems (OSs) as we know them, what a user interface (UI) looks like, how we engage with applications on our devices, and more. UI changes are signifiers of bigger, more fundamental change overall, and Nadella calls this “a big UI change.”

Nadella then went on to discuss Microsoft’s new reasoning engine, a system that “reasons” and mimics our own thought processes. He cited the example of Microsoft’s GitHub Copilot, an AI coding assistant that helps you brainstorm ideas and create. The overhaul of UIs and a modern reasoning engine will mean that, as Nadella puts it, “all software categories can be changed.”

GitHub Copilot AI

(Image credit: GitHub)

Microsoft's big hybrid computing wager

After that, Nadella highlighted hybrid computing, which Windows Central points out has been a recurring topic in discussions of what next-generation OSs like Windows 12 might look like, and which is another major area of development for Microsoft. According to Nadella, Microsoft’s vision sees hybrid computing as crucial to improving computing capability for low-powered or older devices, processing some things locally on the device and making use of the cloud for others.

This is apparently a critical area of innovation, one which makes use of the new generation of powerful NPUs to maximize the potential of local and cloud computing simultaneously. A hybrid approach is also important because the scale of some AI processes and features requires more processing power than a standard PC can handle. Hybrid computing basically expands the scale of what’s possible from your PC, particularly where AI is concerned, though it does mean you need an internet connection.

This is how Microsoft’s new brainchild, the AI assistant Windows Copilot, functions: its workload is a mix of on-device processing and work carried out in the cloud. Microsoft is also developing a new system architecture to make all of this happen, one that will allow developers to build what Microsoft calls ‘hybrid apps’. Microsoft is looking to components like the Snapdragon X Elite chip to make this a reality.
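Microsoft hasn't published the details of that architecture, but as a rough sketch of the hybrid idea, a 'hybrid app' might route each AI request to the local NPU or to a cloud endpoint depending on how heavy the workload is. Everything below (names, threshold, functions) is a hypothetical illustration, not Microsoft's actual implementation.

```python
# Hypothetical sketch of hybrid dispatch: lightweight jobs run on the local NPU,
# heavier ones are offloaded to the cloud. Names and thresholds are illustrative
# assumptions only.

from dataclasses import dataclass

@dataclass
class AIRequest:
    prompt: str
    estimated_tops_needed: float  # rough estimate of the compute the job requires

LOCAL_NPU_BUDGET_TOPS = 45.0  # e.g. a Snapdragon X Elite-class NPU

def run_locally(request: AIRequest) -> str:
    return f"[local NPU] handled: {request.prompt}"

def run_in_cloud(request: AIRequest) -> str:
    return f"[cloud] handled: {request.prompt}"  # would require a network connection

def dispatch(request: AIRequest) -> str:
    # Keep latency-sensitive, lightweight work on-device; offload the rest.
    if request.estimated_tops_needed <= LOCAL_NPU_BUDGET_TOPS:
        return run_locally(request)
    return run_in_cloud(request)

print(dispatch(AIRequest("summarise this document", 12.0)))
print(dispatch(AIRequest("generate a 4K video clip", 400.0)))
```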


High stakes and possible high rewards for Windows Copilot

Nadella calls Windows Copilot a “marquee experience,” so Microsoft is clearly placing big bets on it. The company wants it to become the next Start button, which is certainly bold – that iconic element of Windows made a huge and lasting impact when it debuted in Windows 95. Nadella claims that you won’t even need to give it a direction or instruction – you can simply describe your intent and Copilot will pull up what you need. It could assist with workflows and activities like learning, creating, querying, and more.

Right now, you have to go to Start, find the application you want or navigate your File Explorer to find a specific file, and then get on with your work. With generative AI, the idea is that you state your intent (your wish, if you will) and your wish becomes reality with Copilot bringing you everything you need. 

We’ve already seen that Microsoft is putting a great deal of effort into Copilot, and it has shown us previews of the sorts of things Copilot will be able to do. If you try Copilot for yourself, you’ll see that it’s not quite there yet, but the vision is intriguing. Rumor has it that Microsoft is developing natural language models (the kind of model that underpins what we currently call AI) that will improve file searches and better restore previous activity. Davuluri spoke at length about other platform-related developments to help facilitate app emulation, and about how generative AI will help shape each user’s individual experience.

So, it’s a long discussion that gives an interesting look into Microsoft’s future, but keeps things nebulous enough not to spoil too many surprises. For example, we still don’t know what “Windows 12” will be named officially. What we do know is Microsoft’s clear intent: an AI-centric UI that could radically change how we use PCs and devices, context-aware AI functionality that will personalize user experiences, and a focus on incorporating hybrid computing. It all sounds very exciting and it’s great for buzzword bingo, but I think users are eager to see some solid details about what they can expect from the next version of Windows.


This rare AMD chip is the cheapest 16-core CPU right now

Nearly 10 years ago, AMD attempted to break Intel's stranglehold on the server market with the Opteron 6272, a 16-core processor with 16 threads. Its price at launch was $523, but you can now get hold of one for £31.99 (around $40 / AU$60) on eBay – and yes, you can fit up to four of them in a server or workstation.

The vendor is based in the UK, but will ship to many countries worldwide for an additional fee. Should remaining stocks run dry, there are still a fair few 6272s available from other sellers.

Based on the Bulldozer architecture, the Opteron 6272 was produced using a 32nm manufacturing process, and has a TDP of 115W, 32MB of cache, and a base frequency of 2.1GHz.

Unsurprisingly, these parts have been pulled from a working environment. Opteron processors are server CPUs, used primarily in data centres that support service providers (e.g. web hosting, cloud storage and SaaS companies).

Fast forward to 2020 and AMD has its best chance in a decade to make a splash, with the new EPYC range offering up to 64 cores per CPU and built using a 7nm process.

Compared to its predecessor, AMD's new server processor enjoys much higher IPC (instructions per clock), a larger cache, multithreading, and the ability to ramp up the core count with ease.
