Rabbit denies that the Rabbit R1 is fundamentally just an Android app

The Rabbit R1, an AI gadget that's equal parts charming and puzzling, is now out in the wild, and we've just spent a confusing day with it. But how exactly does the bright orange assistant actually work? Well, Rabbit has now refuted accusations that the R1 is fundamentally run by a single Android app.

Android Authority recently revealed how it had installed the Rabbit R1 launcher's APK (Android Package) on an Android phone, showing that the device is likely both running Android and that its interface is powered by an Android app.

But Rabbit's founder and CEO Jesse Lyu told the site in a statement (echoed by a Rabbit post on X, formerly Twitter) that the reality is a bit more nuanced. He said that “rabbit r1 is not an Android app” and that “to clear any misunderstanding and set the record straight, rabbit OS and LAM run on the cloud with very bespoke AOSP [Android Open Source Project] and lower level firmware modifications”.

In other words, the R1's real intelligence lives in the cloud rather than in an on-device app, and “a local bootleg APK without the proper OS and Cloud endpoints won’t be able to access our service”.

Rabbit r1 device

(Image credit: Rabbit)

None of this is a huge surprise, nor does it directly contradict Android Authority's broader points about the device. It'd be unfair to say the Rabbit R1 is as simple as an Android app because its LLM (large language model) and LAM (large action model) live in the cloud and can't be accessed on a phone, nor could a phone interact with them in the same way. 

But at the same time, the on-device client for those features is effectively an Android app – and that goes to the heart of the argument that the whole experience could still live on a smartphone, and might even be better as a result.

Why isn't the Rabbit R1 just an app?


Back when the Rabbit R1 first launched, the company explained in an X (formerly Twitter) thread why it was a piece of hardware rather than just a smartphone app. The argument was essentially that today's apps are constricted by the smartphone experience, that Rabbit wanted to rethink how we interact with AI apps – and that it could only do that with new hardware.

Our early experiences, and the many others from tech reviewers, suggest that the Rabbit R1 hasn't yet justified its existence as a standalone gadget. The R1 is undoubtedly a fun, tactile little device whose Teenage Engineering design has sparked a lot of attention from gadget-starved tech fans.

As TechRadar's US Editor At Large Lance Ulanoff states in our early Rabbit R1 hands-on, there are so many things the R1 can't do that “I'm constantly reaching for my phone” which is “a device that has multiple built-in cameras, a working phone, a calculator, and Microsoft CoPilot and OpenAI ChatGPT on it”. The latter are both generative AI platforms that “are faster than Rabbit's LAM and more effective”.

Unfortunately, the Rabbit R1 doesn't currently do enough that can't be achieved with your smartphone using other AI apps. Rabbit has now pushed its first software update to improve some early issues (like battery life and music playback), but those fundamentals look unlikely to change soon – regardless of debates around bootlegged Rabbit OS apps and its Android underpinnings.


TechRadar – All the latest technology news


Google denies that Bard AI copied ChatGPT’s homework

Google’s Bard AI has found itself at the center of controversy again, this time over allegations that the Bing rival was trained using data pulled from OpenAI’s ChatGPT.

As you may be aware, ChatGPT is the power behind the throne of Bing AI, and the accusation of nefarious activities behind the scenes comes from a report by The Information.

We’re told that Jacob Devlin, a software engineer at Google – an ex-engineer, we might add, having departed the firm over this affair – claims that Google used ChatGPT data (scraped from the ShareGPT website, apparently) to develop Bard.

Devlin notes that he warned Google against doing so, as this clearly went against OpenAI’s terms of service.

According to the report, Google ceased using the mentioned data after the warnings from Devlin (who left Google to join OpenAI, we’re informed).

Google denies any of this, though. A company spokesperson, Chris Pappas, told The Verge: “Bard is not trained on any data from ShareGPT or ChatGPT.”

Analysis: A denial amid some desperation

There we have it, then – a firm statement from Google, in no uncertain terms, that nothing underhand was going on data-wise with Bard. And to be fair, there’s certainly no evidence that Bard’s answers are remotely like the ones given by ChatGPT. (Devlin had warned that the alleged data hoovering could lead to exactly that, and that it would be obvious enough what had gone on as a result.)

We suppose the trouble with this episode is that it very much feels like Google rushed Bard to release – dropping clangers along the way – as it was forced to play catch-up with Microsoft’s Bing AI. Given that the latter is already driving search engine adoption toward Bing at this early stage, it could be easy enough for some to believe that Google is getting a bit desperate with its tactics behind the scenes.

Whether or not the tale about poached data is true – we’ll take Google’s word that it isn’t – the report still makes an interesting revelation: Google’s Brain AI group is now working with AI firm DeepMind (both sit under the umbrella of parent company Alphabet).

DeepMind has seemingly been recruited to swiftly hone and power up Bard, which is notable because the two AI outfits are big rivals and are effectively being forced to collaborate here.

This again sketches a picture of a rather desperate scramble to get Bard steadier on its feet, while Microsoft’s Bing AI keeps getting updated with new features at a fair old rate of knots. (Although fresh rumblings about one of the potential next ‘features’ for the Bing chatbot have us very concerned, it has to be said).

You may also recall alarm bells being rung on the privacy front when Bard itself made an apparent revelation that it used internal Gmail data for training, again prompting Google to tell us that this is not the case and that the bot got things wrong. Bard getting things wrong, of course, is very much part of a bigger issue.
