Apple might start developing its own AI chips – here’s what that means for Mac lovers

New leaks from Chinese social media claim that Apple is planning to start developing its own dedicated AI chips in the near future – but it’s not the hotly anticipated M4 chip that I’m talking about here.

Although Apple has been making waves in the AI space recently with its upgraded Neural Engine (a dedicated neural processing unit for handling AI-related tasks on Apple devices) as seen in its powerful new M3 chip, this leak makes specific reference to server AI processors – in other words, chips to power the data centers that run cloud-based AI tools. Popular online chatbot ChatGPT, for example, runs the bulk of its operations in the cloud rather than directly on your device, which is why it requires an internet connection to use.

Apple looks to be hedging its bets when it comes to AI – investing in both cloud AI technology and on-device machine learning capabilities, with the M4 chip promising to bring the entire Mac range up to speed in today’s world of ‘AI PCs’. But what does this actually mean for consumers?

AI for the Apple guy

Well, the current rumor (which originates from well-known Apple leaker ‘Phone Chip Expert’ on the Chinese site Weibo) states that Apple is working with chipmaker TSMC to develop the AI chip on a new 3nm process, but that production isn’t likely to start until the latter half of 2025 at the earliest. Basically, we shouldn’t expect to see this making a huge impact straight away.

The upcoming range of new M4 Mac products likely won’t be affected by this decision, with Apple still aiming to remain competitive with Intel and Qualcomm’s AI PC efforts.

However, Apple users of all sorts could stand to benefit from the company’s new interest in cloud-based AI – with its own powered-up servers for offloading AI workloads from iPhones, iPads, and Macs, combined with more powerful on-device AI capabilities, Apple could be poised to become a market dominator, offering best-in-class AI services to everyday users.

iPhone 8

Apple’s on-device AI ventures actually started way back with the iPhone 8 in 2017, long before ChatGPT exploded in popularity. (Image credit: Future)

You might be surprised just how much AI there already is in your iPhone 15 or MacBook Air. Apple’s Neural Engine tech has been lurking in its phones since the A11 Bionic chip seen in the iPhone 8, powering staple iOS features such as Face ID and Animoji. As AI-powered software becomes more common – it’s already wormed its way deep into Adobe Photoshop, for example – the need for competitive hardware in both consumer devices and data centers is on the rise.

As always with leaks such as this, it’s wise to take it with a pinch of salt – while Phone Chip Expert is a relatively reputable leaker, that doesn’t instantly guarantee that this information is legit. 

Still, I reckon it’s at least somewhat accurate; while a development like this will no doubt put further strain on TSMC’s already-burdened manufacturing and supply lines, the fact is that local, on-device AI isn’t yet powerful enough to properly handle high-level large language models – so investing in its own AI servers is the perfect way for Apple to deliver the best possible AI experience to users.


Google might have a new AI-powered password-generating trick up its sleeve – but can Gemini keep your secrets safe?

If you’ve been using Google Chrome for the past few years, you may have noticed that whenever you need to think up a new password for a site or app, or change an existing one, a little “Suggest strong password” dialog box pops up – and it looks like it could soon offer AI-powered password suggestions.

A keen-eyed software development observer has spotted that Google might be gearing up to infuse this feature with the capabilities of Gemini, its latest large language model (LLM).

The discovery was made by @Leopeva64 on X. They found references to Gemini in patches on Gerrit, a web-based code review system developed by Google and used in the development of Google products like Android.

These findings appear to be backed up by screenshots that show glimpses of how Gemini could be incorporated into Chrome to give you even better password suggestions when you’re looking to create a new password or change one you’ve previously set.


Gemini guesswork

One line of code that caught my attention states that “deleting all passwords will turn this feature off.” I wonder whether this does what it says on the tin – shutting the feature off if a user deletes all of their passwords – or whether it only refers to the passwords generated by the “Suggest strong password” feature.

The final screenshot that @Leopeva64 provides is also intriguing as it seems to show the prompt that Google engineers have included to get Gemini to generate a suitable password. 

This is a really interesting move by Google, and it could play out well for Chrome users who use the strong password suggestion feature. That said, I’m a little wary of the potential risks associated with this method of password generation, similar to the risks you find with many such methods. LLMs are susceptible to information leaks caused by prompt injection attacks – attacks designed to trick AI models into giving out information that their creators, or the individuals and organizations using them, might want to keep private, like someone’s login information.

A woman working on a laptop in a shared working space sitting next to a man working at a computer

(Image credit: Shutterstock/Gorodenkoff)

An important security consideration 

Now, that sounds scary, but as far as we know, this hasn’t happened yet with any widely deployed LLM, including Gemini. It’s a theoretical fear, and there are standard password security practices that tech organizations like Google employ to prevent data breaches.

These include encryption, which encodes data so that only authorized parties can access it at multiple stages of the password generation and storage process, and hashing, a one-way data conversion that’s intended to make reverse-engineering the stored data impractical.
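To make the hashing side of that concrete, here’s a minimal Python sketch of the general technique – salted, one-way password hashing using the standard library’s scrypt function. To be clear, this illustrates the concept only; it’s not Google’s actual implementation, which isn’t public.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A random salt means two users with the same password
    # still end up with different stored digests
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    # Re-derive the digest from the supplied password and compare
    # in constant time; the stored digest is never reversed
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The one-way property is the point here: even if the stored digest leaks, recovering the original password from it is designed to be computationally impractical.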

You could also use another LLM, like ChatGPT, to generate a strong password manually, although I suspect Google knows more about how to do this securely, and I’d only advise experimenting with that if you’re a security-minded software professional.

It’s not a bad proposition, and it’s a use of AI that could actually be very beneficial for users, but Google will have to put an equal (if not greater) amount of effort into making sure Gemini is locked down and as impenetrable to outside attacks as possible. If Google implements this and it somehow causes a huge data breach, that will likely damage people’s trust in LLMs and could hurt the reputations of the tech companies, including Google, that are championing them.


Expression-matching robot will haunt your dreams but someday it might be your only friend

Most of the best robots, ones that can walk, run, climb steps, and do parkour, do not have faces, and there may be a good reason for that. If any of them did have mugs like the one on this new research robot, we'd likely stop in our tracks in front of them, staring wordlessly as they ran right over us.

Building robots with faces and the ability to mimic human expressions is an ongoing fascination in the robotics research world but, even though it might take less battery power and fewer load-bearing motors to make it work, the bar is much, much higher for a robot smile than it is for a robot jump.

Even so, Columbia Engineering's development of its newest robot, Emo, and its "Human-Robot Facial Co-Expression" research is impressive and important work. In a recently published scientific paper and YouTube video, the researchers describe their work and demonstrate Emo's ability to make eye contact and instantly imitate and replicate human expressions.

To say that the robot's series of human-like expressions is eerie would be an understatement. Like so many robot faces of its generation, its head shape, eyes, and silicone skin all resemble a human face, but not enough to avoid the dreaded uncanny valley.

That's okay, because the point of Emo is not to put a talking robot head in your home today. This is about programming, testing, and learning … and maybe getting an expressive robot in your home in the future.

Emo's eyes are equipped with two high-resolution cameras that let it make "eye contact" and, using the first of its two AI models, watch you and predict your facial expressions.

Because human interaction often involves modeling – meaning we often unconsciously imitate the movements and expressions of those we interact with (cross your arms in a group and gradually watch everyone else cross theirs) – Emo uses its second model to mimic the facial expression it predicted.

“By observing subtle changes in a human face, the robot could predict an approaching smile 839 milliseconds before the human smiled and adjust its face to smile simultaneously,” write the researchers in their paper.

In the video, Emo's expressions change as rapidly as the researcher's. No one would claim that its smile looks like a normal human smile, that its look of sadness isn't cringeworthy, or that its look of surprise isn't haunting, but its 26 under-the-skin actuators get pretty close to delivering recognizable human expressions.

Emo robot

(Image credit: Columbia Engineering)

“I think that predicting human facial expressions represents a big step forward in the field of human-robot interaction. Traditionally, robots have not been designed to consider humans,” said Columbia PhD candidate Yuhang Hu in the video.

How Emo learned about human expressions is even more fascinating. To understand how its own face and motors work, the researchers put Emo in front of a camera and let it make any facial expression it wanted. This taught Emo the connection between its motor movements and the resulting expressions.

They also trained the AI on real human expressions. The combination of these training methods gets Emo about as close to instantaneous human expression as we've seen on a robot.

The goal, the researchers note in the video, is for Emo to possibly become a front end for an AI or artificial general intelligence (basically, a thinking AI).

Emo arrives just weeks after Figure AI unveiled its OpenAI-imbued Figure 01 robot and its ability to understand and act on human conversation. That robot, notably, did not have a face.

I can't help but imagine what an Emo head on a Figure 01 robot would be like. Now that's a future worth losing sleep over.


The new Sonos app just leaked – and it might just fix the S2 app’s many problems

Audio brand Sonos may soon completely redesign its S2 app, making it easier to set up its devices as well as “strengthen connectivity between its many speakers.” It’ll also introduce several new customization options. This nugget of information comes from The Verge, which claims to have received screenshots of the revamp from sources close to the matter.

According to the report, the company is removing all the navigation tabs at the bottom of the screen, replacing them with a search bar to help soundbar owners find music quickly. The home screen will serve as a central hub consisting of “scrollable carousels” housing playlists and direct access to streaming services.

Of course, you’ll be allowed to customize the layout to your liking, and you can tweak the settings of a soundbar through the “Your System” section of the app.

The Now Playing screen will see revisions as well. Both the shuffle and repeat buttons are going to be present on the page. Plus, the volume slider in the mini-player will appear “no matter where you are in the app.” 

Love it or hate it

For some people on the internet, this update has been a long time coming. The Verge shared links to posts on the Sonos subreddit from people complaining about how terrible the S2 app is. One of the more passionate rants describes the software’s poor functionality, with the Redditor unable to turn off their speaker’s alarms remotely despite it being connected.

Most of the reviews on app stores are positive; however, several users on the Google Play Store listing do complain about an unintuitive UI and strange connection problems. People either love S2 or they hate it. There doesn’t seem to be any real middle ground.

The Verge states the S2 update will roll out for Android and iOS on May 7, although the date could change.

Future plans

It’s worth mentioning that this isn’t the first time we’ve heard about the redesign.

Back in February, Bloomberg published a report detailing some of Sonos’ plans for 2024, such as its focus on a “revamped mobile app codenamed Passport.” At a glance, it appears Passport is the future S2 upgrade. Originally, the update was supposed to arrive in March, but the brand ran into development issues and was forced to delay it.

Bloomberg’s piece goes on to mention two new Sonos devices, codenamed Duke and Disco. The latter is said to be a set of earbuds able to connect to Wi-Fi – a Sonos take on Apple’s AirPods.

Not much is known about the Duke, but it does share a name with a pair of Sonos headphones that were discovered back in late March on the Bluetooth SIG website. 91Mobiles dug through the page, revealing that the device could allow music streaming over Wi-Fi, is slated for a June launch, and should cost $450. The next couple of months are looking to be a busy time for Sonos. But as always, take the info in this leak with a grain of salt.

Until we learn more, check out TechRadar's list of the best soundbars for 2024.


Google Search on Android might get a nifty Gemini switch and put AI at your fingertips

Gemini is lining up to become an even bigger part of the Android ecosystem, as a toggle switch for the AI may soon appear on the official Google app. Evidence of this update was discovered in a recent beta by industry insider AssembleDebug, who then shared his findings with news site PiunikaWeb.

The feature could appear as a toggle switch right above the search bar. Flipping the switch morphs the standard Search interface into the Gemini interface, where you can enter a prompt, talk to the model, or upload an image. According to Android Authority, turning on the AI launches a window asking for permission to make the switch, assuming you haven't already granted it.

If this sounds familiar, that’s because the Google app on iOS has had the same function since early February. Activating the feature on either operating system has Gemini replace Google Assistant as your go-to helper on the internet. 

Gemini's new role

You can hop between the two at any time. It’s not a permanent fixture or anything – at least not right now. Google has been making its AI more prominent on smartphones and its first-party platforms. Recently, hints emerged of Gemini possibly gaining a summarization tool as well as reply suggestions in Gmail.

It is possible to have the Gemini toggle switch appear on your Android phone right now. AssembleDebug published a step-by-step guide on TheSpAndroid; however, the process will take you a long time. First, you’ll need a rooted smartphone running at least Android 12 – rooting being a complicated process in and of itself. We have a guide explaining how to root your mobile device if you're interested in checking that out. Then you’ll need the latest Google app beta from the Play Store, the GMS Flags app from GitHub, and Gemini on your device.

Even if you follow all of these instructions, there’s still a chance it may not work, so you’re probably better off waiting for the switch to officially roll out. 

There’s no word on when that’ll happen, although we could see the feature make its official debut during next month’s Google I/O 2024 event. The tech giant is cooking up something big, and we can’t wait to see what it is.

While you wait, check out TechRadar's list of the best Android phones for 2024.


ChatGPT might get its own dedicated personal device – with Jony Ive’s help

Sam Altman, the CEO of ChatGPT developer OpenAI, is reportedly seeking funding for an AI-powered, personal device – perhaps not unlike the Humane AI Pin – and ex-Apple design guru Jony Ive is apparently getting involved as well.

This is as per The Information (via MacRumors), and the rumor is that Altman and Ive have started a “mysterious company” together to make the device a reality. The report doesn't mention much about the hardware, except to say it won't look like a smartphone.

As we've seen with the Humane AI Pin and the Rabbit R1, having an AI assistant running on a device means you don't necessarily need a display and traditional apps – the artificial intelligence engine can do everything for you, no tapping or scrolling required.

Altman and Ive are said to be seeking around $1 billion in funding, so this is clearly a major undertaking we're talking about. It's not clear how much involvement OpenAI would have, but its ChatGPT bot would most likely be used on the new device.

Previous rumors

A close up of ChatGPT on a phone, with the OpenAI logo in the background of the photo

ChatGPT could find itself in a new device (Image credit: Shutterstock/Daniel Chetroni)

This hasn't come completely out of the blue: back in September The Financial Times reported that Altman and Ive were “in talks” to get funding for a new project from SoftBank, a Japanese investment company.

SoftBank has a stake in chip designer Arm, which might be tapped to provide components for the hardware – which can't run entirely on AI cloud magic, of course. All this is speculation for the time being, however.

In January, Sam Altman was spotted touring around a Samsung chip factory, so all the indications are that he's planning something in terms of physical hardware. It remains to be seen just how advanced this hardware is though.

During his time with Apple, Jony Ive led the design teams responsible for the iPod, iPhone, iPad and MacBook, so whatever is in the pipeline, we can expect it to look stylish. We can also expect to hear more about this intriguing device in the years ahead.


Microsoft could make a big change to part of the Windows 11 Start menu – one you might love or hate

Microsoft could be reworking a major part of the Start menu in Windows 11 – or at least there are changes hidden in testing right now that suggest this.

As flagged up by a regular contributor of Windows leaks, PhantomOfEarth on X (formerly Twitter), the Start menu could end up with a very different layout for the ‘All apps’ panel.


Currently, this presents a list of all the applications installed on your system in alphabetical order, but if this change comes to fruition, the panel will be switched to a grid-style layout (as shown in PhantomOfEarth’s tweet) rather than a long list.

Note that this move is not visible in preview testing yet, and the leaker had to dig around in Windows 11 – a preview build in the Beta channel specifically – to find it (using ViVeTool, a configuration utility).


Analysis: 10X better?

What this means is that you’d be able to see a lot more of your installed software in the ‘All apps’ panel at one time, with a whole host of icons laid out in front of you in the grid, rather than a list that shows comparatively few icons.

On the flipside, this looks a bit busier and less streamlined, with the alphabetical list being neater. Also, some have noted the resemblance to Windows 10X with this hidden change (which might provoke unwelcome OS flashbacks for some).

As ever, some might lean towards the list of installed apps, while others may prefer the new grid-based view instead – which leads us to our next point: why not offer a choice of either layout, based on the user’s preference? A simple toggle somewhere could do the trick.

We shall see what happens, but bear in mind that this grid layout concept might go precisely nowhere in the end. Microsoft could just be toying with the idea, and then abandon it down the line, before even taking it live in testing.

If we do see it go live in Windows 11 preview builds, odds are it’ll arrive with the Windows 11 24H2 update later this year – fingers crossed with that mentioned toggle.

Via Windows Latest


A cheaper Meta Quest 3 might be coming, but trust me, it won’t look like the leaks

Over the past few days we’ve been treated to two separate Meta Quest 3 leaks – or more accurately, leaks for a new cheaper Quest 3 that’s either called the Meta Quest 3s or Meta Quest 3 Lite, depending on who you believe.

But while the phrase ‘where there's smoke there's fire’ can often ring true in the world of tech leaks, I’m having a really tough time buying what I’ve seen so far from the two designs.

Going in chronological order, the first to drop was a Meta Quest 3 Lite render shared by @ZGFTECH on Twitter.


It looks an awful lot like an Oculus Quest 2 with its slightly bulkier design – perhaps because it seems to use the Quest 2’s fresnel lens system instead of a slimmer pancake lens system like the Quest 3’s – but with more rounded edges to match its namesake.

Interestingly, it also lacks any kind of RGB cameras or a depth sensor – which for me is a massive red flag. Mixed reality is the main focus for XR hardware and software right now, so of all the downgrades to make for the Lite, removing full-color MR passthrough seems the most absurd. It’d be much more likely for Meta to give the Quest 3 Lite a worse display or chipset.

@ZGFTECH did later clarify that they aren’t saying the Quest 3 Lite lacks RGB cameras, just that their renders exclude them because they can’t reveal more “at the moment.” Though as I said before, I expect mixed reality would be a key Quest 3 Lite feature, so I’m more than a little surprised this detail is shrouded in mystery.

Then there’s the Meta Quest 3s leak. The original Reddit post has since been deleted, but copies like this Twitter post remain online.


Just like the leaked Meta Quest 3 Lite design, this bulkier headset suggests a return to fresnel lenses. Unlike the previous model, though, we can see some possible RGB cameras and sensors on the front face panel. On top of this, we also get some more details about specs – chiefly that the cheaper Quest 3 could boast dual 1,832 x 1,920 pixel displays.

But while the design seems a little more likely (if a little too ugly), the leak itself is setting off my BS detectors. The first issue is that the shared images include elements of a Zoom call that might make it quite easy to determine who the leaker is. To see these early designs, the leaker likely had to sign an NDA carrying some kind of financial penalty for sharing the info, and unless they have zero care for their financial well-being, I would’ve expected them to be a lot more careful with what they do and don’t share, lest they face the wrath of Meta’s well-funded legal team.

On top of this, some of the promotional assets seem a little off. Some of them feature the original Quest 3 rather than the new design, some of the images don’t seem especially relevant to a VR gadget, and ports and buttons seem to change position – and parts change color – across the various renders.

As such, I’m more than a little unconvinced that this is a genuine leak.

The Meta Quest 3 and controllers on their charging stand

(Image credit: Meta)

Meta Quest 3 Lite: fact or fiction? 

I guess the follow-up question from my skepticism over these leaks is: is a cheaper Meta Quest 3 even on the way? 

Inherently, the idea isn’t absurd. The Quest 3 may be cheaper than many other VR headsets, but at $499.99 / £479.99 / AU$799.99 it’s pricier than the Quest 2 was at launch – $299 / £299 / AU$479 – and that affordable price point is the central reason the Quest 2 sold phenomenally well.

I’ve previously estimated that the Quest 3 is selling slightly slower than its predecessor did at the same point in its lifespan, so Meta may be looking to juice its figures by releasing a cheaper model.

What’s more, while these leaks have details that leave me more than a little skeptical, the fact that we have had two leaks in such a short stretch of time leaves me feeling like there might be some validity to the rumors.

A Meta Quest 3 player sucking up Stay Puft Marshmallow Men from Ghostbusters in mixed reality using virtual tech extending from their controllers

The Quest 3 Lite needs good quality mixed reality (Image credit: Meta)

So while we can't yet say for certain it's coming, I wouldn't be surprised if Meta announced a Quest 3 Lite or S. I'm just not convinced that it’ll look like either of these leaked designs.

For me, the focus would be on having a sleek mixed reality machine – which would require full-color passthrough and pancake rather than fresnel lenses (which we have seen on affordable XR hardware like the Pico 4).

The cost savings would then come from lower-resolution displays, less storage (starting at 64GB), and a worse chipset or less RAM than we see in the Quest 3.

We’ll have to wait and see if Meta announces anything officially. I expect we won’t hear anything until either its Meta Quest Gaming Showcase for 2024 – which is due around June – or this year’s Meta Connect event – which usually lands around September or October.


Midjourney just changed the generative image game and showed me how comics, film, and TV might never be the same

Midjourney, the generative AI platform that you can currently use on Discord, just introduced the concept of reusable characters, and I am blown away.

It's a simple idea: Instead of using prompts to create countless generative image variations, you create and reuse a central character to illustrate all your themes, live out your wildest fantasies, and maybe tell a story.

Until recently, Midjourney, which is trained as a diffusion model (noise is added to an original image and the model learns to de-noise it, so it can learn about the image), could create some beautiful and astonishingly realistic images based on prompts you put in the Discord channel (“/imagine: [prompt]”). But unless you were asking it to alter one of its generated images, every image set and character would look different.
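For the curious, here’s a heavily simplified Python sketch (using PyTorch) of what a single diffusion training step looks like; the model’s signature and the linear noise schedule here are illustrative assumptions on my part, since Midjourney hasn’t published its actual setup.

```python
import torch
import torch.nn.functional as F

def diffusion_training_step(model, image, num_timesteps=1000):
    # Pick a random timestep; higher t means more noise is mixed in
    t = torch.randint(0, num_timesteps, (1,)).item()
    keep = 1.0 - t / num_timesteps  # simplified linear noise schedule

    # Forward process: blend the original image with Gaussian noise
    noise = torch.randn_like(image)
    noisy_image = keep**0.5 * image + (1.0 - keep)**0.5 * noise

    # The model is trained to predict the noise that was added,
    # which is what lets it "de-noise" images at generation time
    predicted_noise = model(noisy_image, t)
    return F.mse_loss(predicted_noise, noise)
```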

Now, Midjourney has cooked up a simple way to reuse your Midjourney AI characters. I tried it out and, for the most part, it works.

Midjourney AI character creation

I guess I don’t know how to describe myself. (Image credit: Future)

Things are getting weird (Image credit: Future)

In one prompt, I described someone who looked a little like me, chose my favorite of Midjourney’s four generated image options, and upscaled it for more definition. Then, using the new “--cref” parameter and the URL for my generated image (with the character I liked), I forced Midjourney to generate new images featuring the same AI character.
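The workflow looks something like this – the prompts below are hypothetical stand-ins for mine, just to show the format:

```
/imagine: a middle-aged tech journalist with glasses, digital illustration
(pick your favorite of the four results, upscale it, and copy the image's URL)
/imagine: the same journalist typing on a laptop --cref [URL of the upscaled image]
```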

Later, I described a character with Charles Schulz's Peanuts character qualities and, once I had one I liked, reused him in a different prompt scenario where he had his kite stuck in a tree (Midjourney couldn't or wouldn't put the kite in the tree branches).

Midjourney AI character creation

An homage to Charles Schulz (Image credit: Future)

It's far from perfect. Midjourney still tends to over-adjust the art, but I contend that the characters in the new images are the same ones I created in my initial images. The more descriptive you make your initial character-creation prompts, the better the results you'll get in subsequent images.

Perhaps the most startling thing about Midjourney's update is the utter simplicity of the creative process. Writing natural-language prompts has always been easy, but training a system to make your character do something might typically take some programming or even AI model expertise. Here, it's just a simple prompt, one parameter, and an image reference.

Midjourney AI character creation

Got a lot closer with my photo as a reference (Image credit: Future)

While it's easier to take one of Midjourney's own creations and use that as your foundational character, I decided to see what Midjourney would do if I turned myself into a character using the same “--cref” parameter. I found an online photo of myself and entered this prompt: “/imagine: making a pizza --cref [link to a photo of me]”.

Midjourney quickly spit out an interpretation of me making a pizza. At best, it's the essence of me. I selected the least objectionable one and then crafted a new prompt using the URL from my favorite me.

Midjourney AI character creation

Oh, hey, Not Tim Cook (Image credit: Future)

Unfortunately, when I entered this prompt: “interviewing Tim Cook at Apple headquarters”, I got a grizzled-looking Apple CEO eating pizza and another image where he's holding an iPad that looks like it has pizza for a screen.

When I removed “Tim Cook” from the prompt, Midjourney was able to drop my character into four images. In each, Midjourney Me looks slightly different. There was one, though, where it looked like my favorite me enjoying a pizza with a “CEO” who also looked like me.

Midjourney AI character creation

Midjourney me enjoying pizza with my doppelgänger CEO (Image credit: Future)

Midjourney's AI will improve and soon it will be easy to create countless images featuring your favorite character. It could be for comic strips, books, graphic novels, photo series, animations, and, eventually, generative videos.

Such a tool could speed up storyboarding, but it could also make character animators very nervous.

If it's any consolation, I'm not sure Midjourney understands the difference between me and a pizza and pizza and an iPad – at least not yet.
