New Rabbit R1 demo promises a world without apps – and a lot more talking to your tech

We’ve already talked about the Rabbit R1 before here on TechRadar: an ambitious little pocket-friendly device housing an AI-powered personal assistant that can do everything from curating a music playlist to booking you a last-minute flight to Rome. Now, the pint-sized companion’s note-taking capabilities have been shown off in a new demo.

The latest demo comes from Rabbit Inc. founder and CEO Jesse Lyu on X, and shows how the R1 can be used for note-taking and transcription via some simple voice controls. The video (see the tweet below) shows that note-taking can be started with a short voice command and ended with a single button press.


It’s a relatively early tech demo – Lyu notes that it “still need bit of touch” [sic] – but it’s a solid demonstration of Rabbit Inc.’s objectives when it comes to user simplicity. The R1 has very little in terms of a physical interface, and doubles down by having as basic a software interface as possible: there’s no Android-style app grid in sight here, just an AI capable of connecting to web apps to carry out tasks.

Once you’ve recorded your notes, you can either view a full transcription, see an AI-generated summary, or replay the audio recording (the latter of which requires you to access a web portal). The Rabbit R1 is primarily driven by cloud computing, meaning that you’ll need a constant internet connection to get the full experience.

Opinion: A nifty gadget that might not hold up to criticism

As someone who spent a lot of time interviewing people and frantically scribbling down notes in my early journo days, I can definitely see the value of a tool like the Rabbit R1. I’m also a sucker for purpose-built hardware, so despite my frequent reservations about AI, I really like the concept of the R1 as a ‘one-stop shop’ for your AI chatbot needs.

My main issue is that this latest tech demo doesn’t actually do anything I can’t do with my phone. I’ve got a Google Pixel 8, and nowadays I use the Otter.ai app for interview transcriptions and voice notes. It’s not a perfect tool, but it does the job as well as the R1 can right now.

Rabbit R1

The Rabbit R1’s simplicity is part of its appeal – though it does still have a touchscreen. (Image credit: Rabbit)

As much as I love the Rabbit R1’s charming analog design, it’s still going to cost $199 (£159 / around AU$300) – and I just don’t see the point in spending that money when the phone I’ve already paid for can do all the same tasks. An AI-powered pocket companion sounds like an excellent idea on paper, but when you look at the current proliferation of AI tools like Windows Copilot and Google Gemini in our existing tech products, it feels a tad redundant.

The big players such as Google and Microsoft aren’t about to stop cramming AI features into our everyday hardware anytime soon, so dedicated AI gadgets like Rabbit Inc.’s dinky pocket helper will need to work hard to prove themselves. The voice control interface that does away with apps completely is a good starting point, but again, that’s something my Pixel 8 could feasibly do in the future. And yet, as our Editor-in-Chief Lance Ulanoff puts it, I might still end up loving the R1…


Oppo’s new AI-powered AR smart glasses give us a glimpse of the next tech revolution


  • Oppo has shown off its Air Glass 3 AR glasses at MWC 2024
  • They’re powered by its AndesGPT AI model and can answer questions
  • They’re just a prototype, but the tech might not be far from launching

While there’s a slight weirdness to the Meta Ray-Ban Smart Glasses – they are a wearable camera, after all – the onboard AI is pretty neat, even if some of its best features are still in beta. So it’s unsurprising that other companies are looking to launch their own AI-powered specs, with Oppo the latest to do so, unveiling its new Air Glass 3 at MWC 2024.

In a demo video, Oppo shows how the specs have seemingly revolutionized someone's working day. When they boot up, the Air Glass 3's 1,000-nit displays show the user a breakdown of their schedule, and while making a coffee ahead of a meeting they get a message saying that it's started early.

While in the meeting, the specs pick up on a question that’s been asked, and Oppo’s AndesGPT AI model (which runs on a connected smartphone) is able to provide some possible answers. Later, it uses the design details that have been discussed to create an image of a possible prototype, which the wearer then brings to life.

After a good day’s work they can kick back to some of their favorite tunes that play through the glasses’ in-built speakers. All of this is crammed into a 50g design. 

Now, the big caveat here is that the Air Glass 3 AR glasses are just a prototype. What’s more, neither of the previous Air Glass models was released outside of China – so there’s a good chance the Air Glass 3 won’t be either.

But what Oppo is showing off isn’t far from being mimicked by its rivals, and a lot of it is already possible with tech you can go out and buy today – including those Meta Ray-Ban Smart Glasses.

The future is now

The Ray-Ban Meta Smart Glasses already have an AI that can answer questions like a voice-controlled ChatGPT.

They can also scan the environment around you using the camera to get context for questions – for example, “what meal can I make with these ingredients?” – via their 'Look and Ask' feature. These tools are currently in beta, but the tech is working and the AI features will hopefully be more widely available soon.

They can also alert you to texts and calls that you’re getting and play music, just like the Oppo Air Glass 3 concept.

Orange RayBan Meta Smart Glasses in front of a wall of colorful lenses including green, blue, yellow and pink

The Ray-Ban Meta glasses ooze style and have neat AI tools (Image credit: Meta)

Then there’s the likes of the Xreal Air 2. While their AR display is a little more distracting than the screen found on the Oppo Air Glass 3, they are a consumer product that isn’t mind-blowingly expensive to buy – just $399 / £399 for the base model.

If you combine these two pairs of glasses then you’re already very close to Oppo’s concept; you’d just need to clean up the design a little, and probably splash out a bit more, as I expect lenses with built-in displays won’t come cheap.

The only thing I can’t see happening soon is the AI creating a working prototype product design for you. It might be able to provide some inspiration for a designer to work off, but reliably creating a fully functional model seems more than a little beyond existing AI image generation tools' capabilities.

While the Oppo Air Glass 3 certainly look like a promising glimpse of the future, we'll have to see what they're actually capable of if and when they launch outside China.


Pour one out for MSN Messenger, Zune, and more: Microsoft Graveyard gives a salute to the tech giant’s retired creations

Microsoft is an old (in tech terms, at least) company – and a very successful one at that, but not every product it makes is a success.

For every Windows 7, there’s a Games for Windows Live. For every Microsoft Office, there’s a Clippy.

To help people reminisce and revisit memories of Microsoft products gone by, a group of developers and tech enthusiasts has made an open-source site named Microsoft Graveyard.

If that rings a bell, it’s probably because you’ve come across Killed by Google, a similar website from another developer and tech enthusiast, Cody Ogden, covering deprecated and discontinued Google products. Ogden also made an analogous website for Microsoft products named Killed by Microsoft, and that heavily inspired the creation of Microsoft Graveyard.

Welcome to the (unofficial) Microsoft Graveyard

At Microsoft Graveyard, you can peruse the various products, services, apps, and other creations that Microsoft has launched and ended up ditching – both software and hardware. 

There’s plenty to reflect upon, as anyone who has used computers or mobile devices for any length of time has probably come across at least a couple of these. I know I have, and there’s also a lot to learn about Microsoft’s many attempts at innovation over the years (Microsoft Graveyard’s entries are in chronological order).

The unofficial archive of discontinued Microsoft products was made by Victor Frye and a community of Microsoft enthusiasts, and launched last week. The group calls the website “a passion project built because we have lovingly used many of these products before their untimely death.” You can read about products like MSN Messenger, Kinect, and many more. MSN Messenger (also known as Live Messenger) was a cross-platform instant messaging (IM) program used by many kids who grew up in the early days of the internet as we now know it, and Kinect was a motion-sensing gaming controller that was killed off just last year.

Go down memory lane for yourself and read about things like Windows Phone, Zune, the recently “deceased” Cortana, Clippy, and many more. Each entry is headed up with the name of the product, which links to a page where you can find more detail about it (sometimes a Wikipedia page). That’s followed by the product’s lifespan and a paragraph describing it.

Clippy

(Image credit: Microsoft)

Go and see it for yourself, maybe even get involved

When you visit the website, you might notice that the first handful of entries are dated in the future, and their icons are coffins instead of gravestones. That’s to indicate the Microsoft products that will be joining the rest of the discontinued “dead” products on the list in the near future. These include products like Windows 10 (which still sees minor tweaks and updates), the Xbox 360 Store, and others.

If you’re intrigued, I’d urge you to check out Microsoft Graveyard for yourself. As it’s an open-source project on GitHub, you can actually join in the fun of compiling, contributing to, and maintaining the website. You can also follow the project’s ongoing development and updates on Threads.


Windows 11 is getting a voice-powered ability many users have been longing for, as Microsoft kills off Windows Speech Recognition for the far superior Voice Access tech

Windows 11 has a new preview build which further improves Voice Access, an area Microsoft has been putting a lot of effort into of late.

Preview build 22635.2915 (KB5033456) has just been rolled out to the Beta channel, and one of the additions is the ability to make customized voice shortcuts.

Using this feature, you can specify a trigger phrase for the command, and then the action it carries out.

Microsoft gives an example of an ‘insert work address’ command, which when given automatically pastes in the specified address of your workplace. Anytime you need that pasted into a document you’re working on, you just say the command – which is quite the timesaver.

Language support for Voice Access has also been extended, and now the following are included (on top of the existing languages): French (France), French (Canada), German, Spanish (Spain) and Spanish (Mexico).

Finally for voice features, multiple monitors are now supported, meaning that when you summon a grid overlay – for directing mouse clicks to certain areas of the desktop – you can do so on any of the screens connected to your PC. (Before now, the grid overlay could only be used on the primary display).

You can switch your focus to another monitor simply by using a letter (A, B, C and so on) or its phonetic equivalent (Alpha, Bravo, etc.).

Microsoft further notes that there’s a drag and drop feature to move files or shortcuts from one display to another.

Elsewhere in build 22635, screen casting in Windows 11 has been improved, with a help option now in the Cast flyout from Quick Settings. This can be clicked if you’re having trouble piping your desktop to another screen and want some troubleshooting advice.

Users are also getting the ability to rename their device for the Nearby Sharing feature to help identify it more easily.

For the full list of changes and fixes in this Beta build, peruse Microsoft’s blog post.


Voice Access shortcuts

(Image credit: Microsoft)

Analysis: Custom capers

This is some useful work with Voice Access, and those with multiple monitors who use the feature will no doubt be very pleased. Voice shortcuts are a powerful addition to the mix for voice functionality, too, and they come with a good deal of options.

It’s not just about pasting a section of text, as in the example above: you can also trigger tasks such as opening a specified URL in a browser, or opening a file or folder. You can combine multiple actions too, along with functions like mouse clicks or key presses. This is a feature we’ve been wanting for some time, so it’s great to see it arrive.

It’s also worth noting that Windows Speech Recognition has been removed from Windows 11 in this build, and when you open that old app, you’ll now get a message informing you of its deprecation, and recommending the far superior Voice Access capability instead.

We’re hoping that in the future, Voice Access is going to become an even more central part of the Windows 11 interface, and it seems a great candidate to be driven forward with AI – and maybe incorporated into Copilot.


Meta opens the gates to its generative AI tech with launch of new Imagine platform

Amongst all the hullabaloo of Google’s Gemini launch, Meta opened the gates to its standalone image generator website, called Imagine with Meta AI.

The company has been tinkering with this technology for some time now. WhatsApp, for instance, has had a beta in-app image generator since August of this year. Accessing that feature required people to have Meta’s app installed on their smartphones, but now with Imagine, all you need is an email address to create an account on the platform. Once in, you’re free to create whatever you want by entering a simple text prompt. It functions similarly to DALL-E.

We tried out the website ourselves and discovered the AI will create four 1,280 x 1,280 pixel JPEG images that you can download by clicking the three dots in the upper right corner. The option will appear in the drop-down menu.

Below is a series of images we asked the engine to make. You’ll notice a watermark in the bottom-left corner stating that each was created by an AI.

Sample images generated by Imagine with Meta AI: Homer according to Meta; Me, according to Meta; and Char’s Zaku, according to Meta (Image credit: Future)

We were surprised to discover that it’s able to create content featuring famous cartoon characters like Homer Simpson and even Mickey Mouse. You’d think there would be restrictions for certain copyrighted material, but apparently not. As impressive as these images may be, there are noticeable flaws. If you look at the Homer Simpson sample, you can see parts of the picture melting into each other. Plus, the character looks downright bizarre.

Limitations (and the workarounds)

A lot of care was put into the development of Imagine. You see, it’s powered by Meta’s proprietary Emu learning model. According to a company research paper from September, Emu was trained on “1.1 billion images”. At the time, no one really knew the source of all this data. However, Nick Clegg, Meta’s president of global affairs, told Reuters it used public Facebook and Instagram posts to train the model. Altogether, over a billion social media accounts were scraped.

To rein in all this data, Meta implemented some restrictions. The tech keeps things family-friendly: it’ll refuse prompts that are violent or sexual, and prompts can’t mention a famous person, either.

Despite the tech giant’s best efforts, it’s not perfect by any stretch. It appears there is a way to get around said limitations with indirect wording. For example, when we asked Meta AI to create an image of former President Barack Obama, it refused. But, when we entered “a former US president” as the prompt, the AI generated a man that resembled President Obama. 

A former US president, according to Meta

(Image credit: Future)

There are plans to introduce “invisible watermarking… for increased transparency and traceability”, but it’s still weeks away from being released, and a lot of damage can be done in that short period. Misuse is something that Meta is concerned about; however, there are still holes. We reached out to ask whether it plans to implement more protections, and will update this story when we hear back.

Until then, check out TechRadar's guide on the best AI art generators for the year.


Photoshop Elements 2024 offers subscription-free access to Adobe AI tech

Adobe is rolling out the 2024 version of its lightweight Photoshop Elements app with several AI-powered features leading the charge.

Chief among this batch, in our opinion, is the new Artistic Effect tool, which can place a filter over photographs to make them look like paintings. These effects are based on notable art styles and famous artists like Vincent van Gogh. You have control over how strong the filters are via a slider, or you can keep the colors from the original image if you don’t like what Photoshop adds. Users can even isolate the changes to certain parts of the photograph.

Photoshop Elements - Artistic Effects

(Image credit: Adobe)

Next, Quick Actions are being compartmentalized into a single panel and simplified for easier usage. So, if you want to remove artifacts in a JPEG image or highlight an entire background, you can now do so with just one click of the on-screen button.

New editing tools

In addition to all of the AI features, Adobe is expanding the arsenal of tools on Photoshop Elements. 

For example, you can pull together a collection of pictures into a slideshow via Photo Reels. It houses its own set of editing tools, allowing users to make adjustments on the fly, insert graphics, or adjust the time each image lasts on-screen. There’s Color Match, which lets you transfer the color and tone from one picture to another seamlessly, or you can use one of the many built-in presets – it’s totally up to you.

Color Match

(Image credit: Adobe)

Guided Edits now has the ability to replace entire backgrounds in an image while leaving the main subject completely intact. As the cherry on top, Adobe has also redesigned Photoshop Elements, adding new “fonts, icons, buttons, and colors”. Plus, you can choose to display the app in either light or dark mode.

Premiere update

Alongside Photoshop Elements, Adobe is releasing the 2024 version of Premiere Elements. Most of Premiere’s changes aren’t backed by artificial intelligence, but there is one that is: Automatic Highlight Reels. This will scan an uploaded video, picking out clips to put into, well, a highlight reel. Specifically, it targets high-quality footage, close-ups, and people in motion.

Premiere Elements' Highlight Reel tool

(Image credit: Adobe)

Similar to its sister app, Premiere Elements 2024 lets you grab the color from one video and apply it to another, where you can then “fine-tune” the saturation or hue. For audio, new effects such as Vocal Enhancer have been introduced to improve sound quality.

Everything you see here is available in the desktop versions of Photoshop and Premiere Elements, while the “web and mobile companion apps” have received several beta features. The Quick Actions we mentioned earlier are currently being tested for smartphones, plus you can try out putting overlays on images. There’s no word on when the mobile update will arrive.

Until then, we recommend checking out TechRadar’s list of the best online Photoshop courses for 2023 if you’re interested in picking it up.


Xbox tech set to reduce CPU overhead by up to 40% when gaming on Windows 11

Windows 11 gamers could get some really beefy benefits from DirectStorage tech, which was recently announced to have arrived on Microsoft’s newest OS – but it’ll be some time yet before developers incorporate it into games.

We already knew that Windows 11 would give users ‘optimal’ results with DirectStorage (compared to Windows 10) in terms of what this feature does – namely seriously speeding up NVMe SSDs.

However, there’s been an eye-opening revelation concerning exactly how much difference this will make when it comes to relieving the pressure on the PC’s processor.

As TweakTown reports, Cooper Partin, a senior software engineer at Microsoft, explained that the DirectStorage implementation for PC is specifically designed for Windows.

Partin noted: “DirectStorage is designed for modern gaming systems. It handles smaller reads more efficiently, and you can batch multiple requests together. When fully integrated with your title, DirectStorage, with an NVMe SSD on Windows 11, reduces the CPU overhead in a game by 20-40%.

“This is attributed to the advancements made in the file IO stack on Windows 11 and the improvements on that platform in general.”


Analysis: Freed-up CPU resources will make a major difference elsewhere

A 40% reduction is a huge difference in terms of lightening the load on the CPU, although that is a best-case scenario – but even 20% is a big step forward for freeing up processor resources.

Those resources can then be used elsewhere to help big open-world games run more smoothly – as we’ve seen before, DirectStorage isn’t simply about making games load more quickly. There’s much more to it than that, and now we’re getting some exciting glimpses of exactly how much difference this Microsoft tech could make to PC games.

Of course, while the public SDK (software development kit) has been released, it’s still up to game developers to bake in this tech when they’re coding, and it’ll be quite some time before we see DirectStorage appearing in many games.
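For a rough idea of what that integration involves, below is a minimal sketch of batching file reads through the public DirectStorage SDK on Windows (dstorage.h). The file name, chunk sizes, and the D3D12 device, buffer, and fence passed in are placeholder assumptions rather than anything from a shipping game, and error handling is omitted for brevity.

```cpp
// Minimal sketch: batching small reads with the DirectStorage SDK (dstorage.h).
// Assumes an existing ID3D12Device, a destination GPU buffer, and a fence to
// signal on completion; "assets.bin" and the sizes below are placeholders.
#include <d3d12.h>
#include <dstorage.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void LoadAssetBatch(ID3D12Device* device, ID3D12Resource* destBuffer,
                    ID3D12Fence* fence, UINT64 fenceValue)
{
    ComPtr<IDStorageFactory> factory;
    DStorageGetFactory(IID_PPV_ARGS(&factory));

    // One queue services many requests; submitting them together lets the
    // runtime batch the small reads efficiently.
    DSTORAGE_QUEUE_DESC queueDesc{};
    queueDesc.SourceType = DSTORAGE_REQUEST_SOURCE_FILE;
    queueDesc.Capacity   = DSTORAGE_MAX_QUEUE_CAPACITY;
    queueDesc.Priority   = DSTORAGE_PRIORITY_NORMAL;
    queueDesc.Device     = device;

    ComPtr<IDStorageQueue> queue;
    factory->CreateQueue(&queueDesc, IID_PPV_ARGS(&queue));

    ComPtr<IDStorageFile> file;
    factory->OpenFile(L"assets.bin", IID_PPV_ARGS(&file));

    // Enqueue several small reads from the NVMe SSD straight into a GPU buffer.
    const UINT32 chunkSize = 64 * 1024;
    for (UINT32 i = 0; i < 16; ++i)
    {
        DSTORAGE_REQUEST request{};
        request.Options.SourceType          = DSTORAGE_REQUEST_SOURCE_FILE;
        request.Options.DestinationType     = DSTORAGE_REQUEST_DESTINATION_BUFFER;
        request.Source.File.Source          = file.Get();
        request.Source.File.Offset          = i * chunkSize;
        request.Source.File.Size            = chunkSize;
        request.Destination.Buffer.Resource = destBuffer;
        request.Destination.Buffer.Offset   = i * chunkSize;
        request.Destination.Buffer.Size     = chunkSize;
        request.UncompressedSize            = chunkSize;
        queue->EnqueueRequest(&request);
    }

    // Signal the fence when the whole batch completes, then kick it off at once.
    queue->EnqueueSignal(fence, fenceValue);
    queue->Submit();
}
```

The shape of the code reflects the point Partin makes: many small reads are enqueued up front and submitted together, so shuffling data off the NVMe SSD leans on the Windows 11 IO stack rather than tying up the CPU.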

The first game which uses DirectStorage is Forspoken, and we got a glimpse of that at GDC, where it was shown to load up in a single second. Forspoken is scheduled to arrive in October 2022.
