Can your PC or Mac run on-device AI? This handy new Opera tool lets you find out

Opera wants to make it easy for everyday users to find out whether their PC or Mac can run AI locally, and to that end it has incorporated a tool into its browser.

When we talk about running AI locally, we mean on the device itself, with your system’s own resources handling the entire AI workload – in contrast to having your PC tap the cloud for the computing power to achieve the task at hand.

Running AI locally can be a demanding affair – particularly if you don’t have a modern CPU with a built-in NPU to accelerate AI workloads on your device – so it’s pretty handy to have a benchmarking tool that tells you how capable your hardware is at completing these on-device AI tasks effectively.

There is a catch though, namely that the ‘Is your computer AI ready?’ test is only available in the developer version of the Opera browser right now. So, if you want to give it a spin, you’ll need to download that developer (test) build of the browser.

Once that’s done, you can get Opera to download an LLM (large language model) with which to run tests, and it checks the performance of your PC in various ways (tokens per second, first token latency, model load time and more).

If all that sounds like gobbledegook, it doesn’t really matter, as after running all these tests – which might take anything from just a few minutes to more like 20 – the tool will deliver a simple and clear assessment of whether your machine is ready for AI or not.
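For the technically curious, here’s a minimal Python sketch of how those three headline metrics might be measured, assuming a hypothetical local LLM object that exposes a stream() generator yielding one token at a time – an illustration of the idea, not Opera’s actual code:

```python
import time

def benchmark(load_model, prompt: str, max_tokens: int = 128) -> dict:
    """Time model loading, first-token latency, and tokens per second."""
    t0 = time.perf_counter()
    model = load_model()                       # model load time
    load_time = time.perf_counter() - t0

    start = time.perf_counter()
    first_token = None
    n_tokens = 0
    # stream() is a hypothetical generator yielding one token at a time;
    # adapt this to whatever local LLM library you actually use.
    for _ in model.stream(prompt, max_tokens=max_tokens):
        if first_token is None:
            first_token = time.perf_counter()  # first-token latency
        n_tokens += 1
    elapsed = time.perf_counter() - start

    return {
        "model_load_s": round(load_time, 3),
        "first_token_latency_s": round(first_token - start, 3),
        "tokens_per_s": round(n_tokens / elapsed, 1),
    }
```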

There’s an added nuance, mind: if you get the ‘ready for AI’ result then local performance is good, and ‘not AI ready’ is self-explanatory – you can forget running local AI tasks – but there’s a middle result of ‘AI functional.’ This means your device is capable of running AI tasks locally, but it might be rather slow, depending on what you’re doing.
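Opera hasn’t published the exact cut-offs behind those three verdicts, but purely as an illustration, a benchmark score such as tokens per second could map onto them something like this (the threshold numbers below are invented):

```python
# Illustrative only: Opera's real thresholds are not public, so these
# cut-off numbers are invented for the sake of the example.
def verdict(tokens_per_s: float) -> str:
    if tokens_per_s >= 10:
        return "AI ready"       # comfortable local performance
    if tokens_per_s >= 2:
        return "AI functional"  # works, but may be rather slow
    return "not AI ready"       # forget running local AI tasks
```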

Opera AI benchmark result (Image credit: Opera)

There’s more depth to these results that experts can explore if they wish, but it’s great to get an at-a-glance estimation of your PC’s on-device AI chops. It’s also possible to download different (increasingly large) AI models to test with, with heftier versions catering for cutting-edge PCs with the latest hardware and NPUs.


Analysis: Why local AI processing is important

It’s great to have an easily accessible test that anyone can use to get a good idea of their PC’s processing chops for local AI work. Doing AI tasks locally, kept within the confines of the device, is obviously important for privacy – as you’re not sending any data off your machine into the cloud.

Furthermore, some AI features will use local processing partly, or indeed exclusively, and we’ve already seen the latter: Windows 11’s new cornerstone AI functionality for Copilot+ PCs, Recall, is a case in point, as it works entirely on-device for security and privacy reasons. (Even so, it’s been causing a storm of controversy since Microsoft announced it, but that’s another story.)

So, being able to easily gauge your PC’s AI grunt is a useful capability to have, though right now, downloading the Opera developer version is probably more hassle than you’ll want to go through. Still, we’d guess the feature will be inbound for the full version of Opera soon enough, so you likely won’t have to wait long for it to arrive.

Opera is certainly getting serious about climbing the rankings of the best web browsers by leveraging AI, with one of the latest moves being drafting in Google Gemini to help supercharge its Aria AI assistant.


iOS 18 could be loaded with AI, as Apple reveals 8 new artificial intelligence models that run on-device

Apple has released a set of several new AI models that are designed to run locally on-device rather than in the cloud, possibly paving the way for an AI-powered iOS 18 in the not-too-distant future.

The iPhone giant has been doubling down on AI in recent months, with a carefully split focus across cloud-based and on-device AI. We saw leaks earlier this week indicating that Apple plans to make its own AI server chips, so this reveal of new local large language models (LLMs) demonstrates that the company is committed to both breeds of AI software. I’ll dig into the implications of that further down, but for now, let’s explain exactly what these new models are.

The suite of AI tools contains eight distinct models, called OpenELMs (Open-source Efficient Language Models). As the name suggests, these models are fully open-source and available on the Hugging Face Hub, an online community for AI developers and enthusiasts. Apple also published a whitepaper outlining the new models. Four were pre-trained using CoreNet (previously CVNets), Apple’s open-source library for training deep neural networks, while the other four have been ‘instruction-tuned’ by Apple – a process in which a pre-trained model is further fine-tuned to follow written instructions and prompts more reliably.
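For the curious, pulling one of these checkpoints down takes only a few lines with the transformers library – note that the model ID below simply follows the naming pattern used on the Hub at the time of writing, so check it before relying on it:

```python
from transformers import AutoModelForCausalLM

# Fetch the smallest pre-trained OpenELM variant from the Hugging Face Hub.
# The repo ID is assumed from the Hub's naming pattern; verify before use.
model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M",
    trust_remote_code=True,  # OpenELM ships its own modeling code
)
print(sum(p.numel() for p in model.parameters()))  # roughly 270M parameters
```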

Releasing open-source software is a somewhat unusual move for Apple, which typically retains quite a close grip on its software ecosystem. The company claims to want to “empower and enrich” public AI research by releasing the OpenELMs to the wider AI community.

So what does this actually mean for users?

Apple has been seriously committed to AI recently, which is good to see, as the competition is fierce in both the phone and laptop arenas – think the Google Pixel 8’s AI-powered Tensor chip, or Qualcomm’s latest AI-capable silicon coming to Surface devices.

By putting its new on-device AI models out to the world like this, Apple is likely hoping that some enterprising developers will help iron out the kinks and ultimately improve the software – something that could prove vital if it plans to implement new local AI tools in future versions of iOS and macOS.

It’s worth bearing in mind that the average Apple device is already packed with AI capabilities, with the Apple Neural Engine found on the company’s A- and M-series chips powering features such as Face ID and Animoji. The upcoming M4 chip for Mac systems also appears to sport new AI-related processing capabilities, something that's swiftly becoming a necessity as more-established professional software implements machine-learning tools (like Firefly in Adobe Photoshop).

In other words, we can probably expect AI to be the hot-button topic for iOS 18 and macOS 15. I just hope it’s used for clever and unique new features, rather than Microsoft’s constant Copilot nagging.


Apple may be working on a way to let LLMs run on-device and change your iPhones forever

Apple researchers have apparently discovered a method that’ll allow iPhones to host and run their own large language models (LLMs).

With this tech, future iPhone models may finally have the generative AI features people have been eagerly waiting for. This information comes from a pair of papers published on arXiv, a research-sharing platform operated by Cornell University. The documents are pretty dense and can be tricky to read, so we’re going to break things down for you. But if you’re interested in reading them yourself, the papers are free for everyone to check out.

One of the main problems with putting an LLM on a mobile device is the limited amount of memory on the hardware. As VentureBeat explains in its coverage, recent AI models like GPT-4 “contain hundreds of billions of parameters”, a quantity smartphones have difficulty handling. To address this, Apple’s researchers propose two techniques. The first is called windowing, a method where the on-board AI reuses data it has already processed instead of loading new data each time, taking some of the load off the hardware.
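As a toy illustration of that sliding-window idea – not Apple’s actual implementation, which operates on model weights streamed from flash storage – picture a small cache that keeps recently loaded entries resident and only fetches what isn’t already there:

```python
from collections import OrderedDict

# Toy sketch of 'windowing': keep data for the most recent window resident,
# so only genuinely new entries are fetched from slow storage.
class SlidingWindowCache:
    def __init__(self, window_size: int):
        self.window_size = window_size
        self.cache = OrderedDict()  # position -> loaded data

    def get(self, position, load_from_storage):
        if position in self.cache:
            self.cache.move_to_end(position)    # reuse already-loaded data
        else:
            self.cache[position] = load_from_storage(position)
            if len(self.cache) > self.window_size:
                self.cache.popitem(last=False)  # evict the oldest entry
        return self.cache[position]
```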

The second is called row-column bundling, which groups related data into bigger chunks for the AI to read in one go – a method that should boost the LLM’s ability to “understand and generate language”, according to MacRumors. The paper goes on to say these two techniques will let AIs run at “up to twice the size of the available [memory]” on an iPhone. It’s a technology Apple must nail down if it wants to deploy advanced models “in resource-limited environments” – without it, the researchers’ plans can’t take off.
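To make the bundling idea concrete, here’s a small NumPy sketch: the i-th row of a feed-forward layer’s up-projection matrix and the i-th column of its paired down-projection matrix belong to the same hidden neuron, so storing them side by side means one contiguous read fetches both. This is illustrative only – the real technique is about minimizing flash-memory reads:

```python
import numpy as np

def bundle(up_proj: np.ndarray, down_proj: np.ndarray) -> np.ndarray:
    """Interleave up-projection rows with down-projection columns.

    up_proj: (n_neurons, d), down_proj: (d, n_neurons) -> (n_neurons, 2*d)
    """
    return np.concatenate([up_proj, down_proj.T], axis=1)

def read_neuron(bundled: np.ndarray, i: int):
    d = bundled.shape[1] // 2
    chunk = bundled[i]           # one contiguous read...
    return chunk[:d], chunk[d:]  # ...yields both the row and the column
```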

On-device avatars

The second paper centers on iPhones potentially gaining the ability to create animated 3D avatars, generated from videos taken by the rear cameras through a process called HUGS (Human Gaussian Splats). This tech has existed in some form before, but Apple’s version is said to render the avatars 100 times faster than older methods, while also capturing finer details like clothing and hair.

It’s unknown exactly what Apple intends to do with HUGS or any of the techniques mentioned earlier. However, this research could open the door for a variety of possibilities, including a more powerful version of Siri, “real-time language translation”, new photography features, and chatbots. 

Powered up Siri

These upgrades may be closer to reality than some might think.

Back in October, rumors surfaced claiming Apple is working on a smarter version of Siri that’ll be boosted by artificial intelligence and sport some generative capabilities. One potential use case would be an integration with the Messages app, letting users ask it tough questions or have it finish sentences “more effectively.” As for chatbots, there have been other rumors of the tech giant developing a conversational AI called Ajax, and some people have also thrown around “Apple GPT” as a potential name.

There’s no word on when Apple’s AI projects will see the light of day, though there has been speculation that something could roll out in late 2024 alongside the launch of iOS 18 and iPadOS 18.

