Amazon is reportedly working on its own AI chatbot that might be smarter than ChatGPT

Amazon is reportedly working on its own AI chatbot, codenamed “Metis”, that’ll operate in a similar vein to ChatGPT. 

According to Business Insider, which spoke “to people familiar with the project,” the new platform will be accessible through a web browser. The publication also viewed an internal document revealing the chatbot's potential capabilities: it’ll provide text answers to inquiries in a “conversational manner,” give links to sources, suggest follow-up questions, and even generate images.

So far, Metis sounds much like any other generative AI, but the details soon begin to diverge. The company apparently wants to use a technique called “retrieval-augmented generation,” or RAG for short, which gives Metis the ability to pull in information from outside its original training data. Amazon reportedly hopes this will give the AI a big advantage over its rivals.
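RAG itself isn't unique to Amazon; in broad strokes, a RAG pipeline fetches relevant documents at query time and feeds them into the model's prompt alongside the question. A minimal sketch, with a toy keyword-overlap retriever standing in for the embedding search and vector store a production system would use:

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# A real system would use embeddings and a vector store;
# here a toy keyword-overlap retriever stands in for both.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context so the model can answer from fresh data."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Amazon's chatbot is codenamed Metis.",
    "GPT-4 Turbo has a knowledge cut-off of December 2023.",
    "RAG fetches information outside the training data.",
]
print(build_prompt("What is Amazon's chatbot codenamed?", docs))
```

Because the documents are fetched at query time, the model can answer from information added long after its training cut-off, which is the whole point of the technique.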

ChatGPT, by comparison, works by accessing a data reservoir whenever a user inputs a prompt, but that reservoir has a cut-off date that differs between the service’s models. For example, GPT-4 Turbo has a cut-off date of December 2023. It’s not privy to anything that has happened so far in 2024.

Powering the AI chatbot

It’s unclear whether Amazon had implemented RAG at the time of Business Insider’s report. Metis is also slated to function as an “AI agent.” Judging from the description given, that would let the service act as a smart home assistant of sorts, “automating and performing complex tasks.” This includes, but is not limited to, turning on lights, making vacation itineraries, and booking flights.

The report goes on to reveal some of the tech powering Metis. The AI runs on a new internal model called Olympus, which is said to be a better version of Amazon’s “publicly available Titan.” The company even brought in people from the Alexa team to help with development. In fact, Metis “uses some of the [same] resources” as the long-rumored Alexa upgrade.

Differing attitudes

Attitudes towards the AI chatbot vary across the company. Amazon CEO Andy Jassy seems very interested in the project; he is directly involved with development and often reviews the team’s progress. Others, however, are less enthusiastic. One of the sources told Business Insider that they felt the company was way too late to the party, and that rival companies are so far ahead of the curve that playing catch-up may not be worthwhile.

The report mentions that Amazon’s ventures into AI have mostly been duds. The Titan model is considered weaker than rival models, the Amazon Q corporate chatbot isn’t great, and there is low demand for the company's Trainium and Inferentia AI chips. Amazon needs a big win to stay in the AI space.

Sources claim Metis is scheduled to launch in September around the same time Amazon is planning to hold its next big event. However, the date could change at any time. Nothing is set in stone at the moment.

While we have you, be sure to check out TechRadar's list of the best AI chatbots for 2024.

You might also like

TechRadar – All the latest technology news

Read More

Amazon Alexa’s next-gen upgrade could turn the assistant into a generative AI chatbot

Rumors started circulating earlier this year claiming Amazon was working on improving Alexa by giving it new generative AI features. Since then, we hadn't heard much about it until very recently, when CNBC spoke to people familiar with the project’s development. The new reporting provides insight into what the company aims to do with the upgraded Alexa, how much it may cost, and why Amazon is doing this.

CNBC’s sources were pretty tight-lipped. They didn’t reveal exactly what the AI will be able to do, but they did mention the tech giant’s goals. Amazon wants its developers to create something “that holds up amid the new AI competition,” referring to the likes of ChatGPT. Company CEO Andy Jassy was reportedly “underwhelmed” with the modern-day Alexa, and he isn’t the only one who wants the assistant to do more. The dev team is reportedly worried that the assistant currently amounts to little more than an “expensive alarm clock.”

To facilitate the new direction, Amazon reorganized major portions of its business within the Alexa team, shifting focus toward achieving artificial general intelligence. 

AGI, long a staple of science fiction, is the idea that an AI model may one day match or surpass the intelligence of a human being. Despite the lofty goal, Amazon seems to be starting small, aiming first to create its own chatbot with generative capabilities.

The sources state, “Amazon will use its own large language model, Titan, in the Alexa upgrade.” Titan is currently only available to businesses as part of Amazon Bedrock. It can generate text, create images, summarize documents, and more for enterprise users, similar to other AIs. Following this train of thought, the new Alexa could offer the same features to regular, non-enterprise users.

Potential costs

Previous reports have said Amazon plans to charge people for access to the supercharged Alexa; however, the cost and plan structure were unknown. Now, according to this new report, we’re learning Amazon plans to launch the Alexa upgrade as a subscription service completely separate from Prime, meaning people will have to pay extra to try out the AI.

Apparently, there’s been debate over exactly how much to charge, and Amazon has yet to nail down the monthly fee. One of the sources told CNBC that “a $20 price point was floated” at one point, while someone else suggested dropping the cost to “single-digit dollar [amounts]” – in other words, less than $10, which would allow the brand to undercut rivals. OpenAI, for example, charges $20 a month for its Plus plan.

There is no word on when Alexa’s update will launch or even be formally announced. But if and when it does come out, it might be the first chatbot accessible through an Amazon smart speaker like the Echo Pop.

We did reach out to the company to see if it wanted to make a statement about CNBC’s report. We’ll update this story if we hear back.

Until then, check out TechRadar's roundup of the best smart speakers for 2024.


Bereft ChatGPT fans start petition to bring back controversial ‘Sky’ chatbot voice

OpenAI has pulled ChatGPT's popular 'Sky' chatbot voice after Scarlett Johansson expressed her “disbelief” at how “eerily similar” it sounded to her own. But fans of the controversial voice in the ChatGPT app aren't happy – and have now started a petition to bring it back.

The Sky voice, which is one of several available in the ChatGPT app for iOS and Android, is no longer available after OpenAI stated yesterday on X (formerly Twitter) that it had hit pause in order to address “questions about how we chose the voices in ChatGPT”.

Those questions became very pointed yesterday when Johansson said in a fiery statement given to NPR that she was “shocked, angered and in disbelief” that OpenAI CEO Sam Altman would “pursue a voice that sounded so eerily similar to mine” after she had apparently twice declined to license her voice for the ChatGPT assistant.

OpenAI has rejected those accusations, stating in a blog post that “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice.” But pressure from Johansson's lawyers, who NPR reports are demanding answers, has forced OpenAI to suspend the voice – and fans aren't happy.

In a fascinating example of how attached some are already becoming to AI chatbots, a popular Reddit thread titled 'Petition to bring Sky voice back' includes a link to a Change.org petition, which currently has over 300 signatures.

In fairness, many of the Reddit comments and signatures predate Johansson's statement and OpenAI's reasoning for pulling the Sky voice option in the ChatGPT app. And it now looks increasingly likely that the voice won't simply be paused but instead put on indefinite hiatus.

But the thread is still an interesting, and mildly terrifying, glimpse of where we're headed with convincing AI chatbot voices, whether they're licensed from famous actresses or not. One comment from Redditor JohnDango states that “she was the only bot I spoke to that had a 'realness' about her that made it feel like a real step beyond chatbot,” while GaneshLookALike noted mournfully that “Sky was full of warmth and compassion.”

That voice, which we also found to be one of ChatGPT's most convincing options, is now on the backburner while the case rumbles on. 

What next for ChatGPT's Sky voice?


It doesn't sound like ChatGPT's 'Sky' voice is going to return anytime soon. In her statement shared with NPR, Scarlett Johansson said she'd been “forced to hire legal counsel” and send letters to OpenAI asking how the voice had been made. OpenAI's blog post looks like its response to those questions, though it remains to be seen whether that's enough to keep the lawyers at bay.

Johansson understandably sounds determined to pursue the issue, adding in her statement to NPR that “in a time when we are all grappling with deepfakes and the protection of our own likeness, our own work, our own identities, I believe these are questions that deserve absolute clarity.” 

While there's no suggestion that OpenAI cloned Johansson's voice, the company did reveal in March that it had developed a new voice synthesizer that could apparently copy a voice from just 15 seconds of audio. That tool was never released to the public due to concerns about how it might be misused, with OpenAI stating that it was investigating the “responsible deployment of synthetic voices”.

OpenAI CEO Sam Altman also didn't exactly help his company's cause by simply posting “her” on X (formerly Twitter) on the eve of the launch of its GPT-4o model, which included the new voice mode it demoed. That looks like a thinly-veiled reference to Spike Jonze's movie Her, about a man who develops a relationship with an AI virtual assistant, Samantha, voiced by none other than Scarlett Johansson.

For now, then, it looks like fans of the ChatGPT app will need to make do with the other voices – including Breeze, Cove, Ember and Juniper – while this fascinating case rumbles on. This also shouldn't affect the rollout of GPT-4o's impressive new conversational voice powers, which OpenAI says it will be rolling out “in alpha within ChatGPT Plus in the coming weeks”.


Elon Musk brings controversial AI chatbot Grok to more X users in bid to halt exodus

Premium subscribers of all tiers of the X social media platform will soon gain access to its generative AI chatbot, Grok. Previously, the chatbot was only accessible to users who subscribed to the most expensive subscription tier, Premium+, for $16 a month (approximately £12 or AU$25). That’s set to change, with X’s owner Elon Musk announcing in a post the expansion of availability of the large language model (LLM) to Basic and Premium tier X users.

Grok has been made open-source, reportedly to allow researchers and developers to leverage its capabilities for their own projects and research. If you’re interested in the code, you can check out the Grok-1 repository on GitHub. It’s the first major offering from Musk’s own AI venture, xAI.

As Dev Technosys, a mobile app and web development company, explains, Grok is Musk’s head-on challenge to ChatGPT, with the billionaire boasting that it beat GPT-3.5 on multiple benchmarks. Musk describes the chatbot as having “a focus on deep understanding and humor,” replying to questions with a “rebellious streak.” The model is trained on a massive dataset of text and code, including real-time text from X posts (which Musk points to as the bot's unique advantage) and text data scraped from across the web, such as Wikipedia articles and academic papers.

Some industry observers think that this could be a push to boost X subscriber numbers, as analysis performed by Sensor Tower and reported by NBC indicates that visitors to the platform and user retention have been dropping. This has seemingly spooked many advertisers and hit the platform’s revenues, with apparently 75 of the top 100 US advertisers cutting X from their ad budgets entirely from October 2022 onwards. 

It does look like Musk is hoping that an exclusive perk like access to such a well-informed and entertaining chatbot as Grok will convince people to become subscribers, and to keep those who are already subscribed. 


The Elon Musk-led ChatGPT that never was

Earlier this year, Musk leveled a lawsuit against what is undoubtedly Grok’s largest competitor and the current industry leader in generative AI, OpenAI. He was an early investor in the company but departed after disagreements about several aspects, including the mission and vision for OpenAI, as well as control and equity in the company. Now, Musk asserts that OpenAI has diverted from its non-profit goals and is prioritizing corporate profits, particularly for Microsoft (a key investor and collaborator), above its other objectives –  violating a contract called the ‘Founding Agreement.’

According to Musk, the Founding Agreement laid down specific principles and commitments that OpenAI had agreed to follow. OpenAI has responded to this accusation by denying such a contract, or any similar agreement, existed with Musk at all. Its overall response to the lawsuit so far has been dismissive, characterizing it as ‘frivolous’ and alleging that Musk is driven by his own business interests. 

Apparently, it was established from early on by OpenAI that the company would transition into being a for-profit organization, as it wouldn’t be able to raise the funds necessary to build the sorts of things it was planning to as a non-profit company. OpenAI claims Musk was not only aware of these plans and was consulted when they were being made, but that he was seeking to have majority equity in OpenAI, wanted to control the board of directors at the time, and wanted to assume the position of CEO. 


Elon Musk's Grok gambit

Musk didn’t give an exact date for Grok’s wider rollout, but according to TechCrunch, it’s due sometime at the end of this week. Having seen what Musk considers funny, many people are morbidly curious about what sort of artificial intelligence Grok offers. One other aspect of Grok that might concern (or please, depending on your point of view) people is that it will respond to queries and topics that are, for the most part, off-limits with other chatbots, including controversial political ideas and conspiracy theories.

The sourcing from X in real-time is one unique advantage that Grok has, although before Musk’s takeover, this would have arguably been a much bigger prize.

Despite my misgivings, Grok does give users another option of chatbot to choose from, and more competition in this emerging field could spur on more innovation as companies battle to win users.


WhatsApp is testing an all-knowing AI chatbot that will live in your search bar

WhatsApp is slated to receive a pair of AI-powered upgrades aiming to help people answer tough questions on the fly, as well as edit images on the platform.

Starting with answering questions, the upgrade integrates one of Meta’s AI models into the WhatsApp search bar. Doing so, according to WABetaInfo, would allow users to directly input queries without having to create a separate chat room for the AI. You'd be able to hold a quick conversation right on the same page. 

It appears this is an extension of the in-app assistants that originally came out back in November 2023. A screenshot in the report reveals WhatsApp will provide a handful of prompts to get a conversation flowing.

It’s unknown just how capable the search bar AI will be. The assistants are available in different personas specializing in certain topics. But looking at the aforementioned screenshot, it appears the search bar will house the basic Meta AI model. It would be really fun if we could assign the Snoop Dogg persona as the main assistant.


AI image editing

The second update is a collection of image editing features discovered by industry expert AssembleDebug after diving into a recent WhatsApp beta. AssembleDebug found three possibly upcoming tools – Backdrop, Restyle, and Expand. It’s unknown exactly what they do, as none of them currently work. However, the first two share a name with features currently available on Instagram, so they may, in fact, function the same way.

Backdrop could let users change the background of an image into something different via text prompt. Restyle can completely alter the art style of an uploaded picture. Think of these like filters, but more capable. You can make a photograph into a watercolor painting or pixel art. It’s even possible to create wholly unique content through a text prompt.

WhatsApp's new image editing tools

(Image credit: AssembleDebug/TheSPAndroid)

Expand is the new kid on the block. Judging by the name, AssembleDebug theorizes it’ll harness the power of AI “to expand images beyond their visible area”. Technology like this already exists on other platforms: Photoshop, for example, has Generative Expand, and Samsung's Galaxy S24 series can fill in the edges of images after they've been rotated.

WhatsApp gaining such an ability would be a great inclusion, as it’ll give users a robust editing tool for free. Most versions of this tech are locked behind a subscription or tied to a specific device.

Do keep in mind that neither feature set is available to early testers at the time of this writing. They're still in the works, and as stated earlier, we don’t know the full capabilities of either. Regardless, it's great to see that WhatsApp may one day come equipped with AI tech on the same level as what you’d find on Instagram, especially when it comes to the search bar assistant. The update will make accessing that side of Meta's software more convenient for everyone.

If you prefer to tweak images on a desktop, check out TechRadar's list of the best free photo editors for PC and Mac.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models curated from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new open-source model, Google has also put out a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, which have both been pre-trained on data filtered to remove sensitive or personal information. Both versions have also been tuned with reinforcement learning from human feedback, significantly reducing the chance of chatbots based on Gemma spitting out harmful content.

A step in the right direction

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.
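For anyone wanting to try that on a laptop, the sketch below shows the general shape of it. The turn markers match Gemma's documented instruction-tuned prompt format; the commented-out Hugging Face transformers lines are a plausible loading path (they assume you've accepted Google's terms for the weights on the Hugging Face Hub, and they download several gigabytes, so treat them as illustrative rather than definitive):

```python
# Sketch of prompting a local Gemma model. Gemma's instruction-tuned
# checkpoints use a simple turn-based prompt format with explicit markers.

def gemma_chat_prompt(user_message: str) -> str:
    """Wrap a user message in Gemma's chat-turn markers."""
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = gemma_chat_prompt("Write a haiku about open models.")
print(prompt)

# With the Hugging Face transformers library, loading and generating would
# look roughly like this (downloads several GB of weights, so commented out):
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("google/gemma-2b-it")
# model = AutoModelForCausalLM.from_pretrained("google/gemma-2b-it")
# out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=64)
# print(tok.decode(out[0]))
```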


DeepMind and Meta staff plan to launch a new AI chatbot that could have the edge over ChatGPT and Bard

Since the explosion in popularity of large language model (LLM) chatbots like ChatGPT, Google Gemini, and Microsoft Copilot, many smaller companies have tried to wiggle their way into the scene. Reka, a new AI startup, is gearing up to take on artificial intelligence chatbot giants like Gemini (formerly known as Google Bard) and OpenAI’s ChatGPT – and it may have a fighting chance of actually doing so.

The company is spearheaded by Singaporean scientist Yi Tay, and its flagship effort is Reka Flash, a multilingual language model trained in over 32 languages. Reka Flash boasts 21 billion parameters, with the company stating that the model is competitive with Google Gemini Pro and OpenAI’s GPT-3.5 across multiple AI benchmarks.

According to TechInAsia, the company has also released a more compact version of the model called Reka Edge, a 7-billion-parameter model aimed at specific use cases like on-device deployment. It’s worth noting that ChatGPT and Google Gemini reportedly have significantly more parameters (approximately 175 billion and 137 billion respectively), but those bots have been around for longer, and there are benefits to more ‘compact’ AI models; for example, Google has Gemini Nano, an AI model designed to run on edge devices like smartphones using just 1.8 billion parameters – so Reka Edge comfortably outsizes it.
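Parameter counts translate fairly directly into memory requirements, which is why they matter so much for on-device use. As a rough back-of-envelope (weights only, ignoring activations and the KV cache, so these are lower bounds):

```python
# Back-of-envelope memory for model weights: parameters × bytes per parameter.
# Real deployments often quantize to 8-bit or 4-bit to shrink this further.

def weights_gb(params_billions: float, bytes_per_param: int = 2) -> float:
    """Approximate weight storage in gigabytes (16-bit floats by default)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

for name, size in [("Reka Flash", 21), ("Reka Edge", 7), ("Gemini Nano", 1.8)]:
    print(f"{name}: ~{weights_gb(size):.1f} GB in fp16, "
          f"~{weights_gb(size, 1):.1f} GB in int8")
```

By this estimate a 7-billion-parameter model needs roughly 14GB for its weights at 16-bit precision, which is why smartphone-class models like Gemini Nano stay under 2 billion parameters.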

So, who’s Yasa?

The model is available to the public in beta on the official Reka site. I’ve had a go at using it and can confirm that it's got a familiar ChatGPT-esque feel to the user interface and the way the bot responds. 

The bot introduced itself as Yasa, developed by Reka, and gave me an instant rundown of all the things it could do for me. It had the usual AI tasks down, like general knowledge, sharing jokes or stories, and solving problems.

Interestingly, Yasa noted that it can also assist in translation, and listed 28 languages it can swap between. While my understanding of written Hindi is rudimentary, I did ask Yasa to translate some words and phrases from English to Hindi and from Hindi to English. 

I was incredibly impressed not just by the accuracy of the translation, but also by the way Yasa explained how it got there: it broke down each word in the phrase or sentence and translated it word for word before giving the complete sentence. The response time for each prompt, no matter how long, was also very quick. Considering that non-English-language prompts have proven limited in the past with other popular AI chatbots, it’s a solid showing – although it’s not the only multilingual bot out there.

Reka translating and Reka AI Barbie screenshots

(Image credit: Future)

I tried to figure out how up to date the bot was with current events and general knowledge, and eventually pinned it down: it must have been trained on information that predates the release of the Barbie movie. I know, a weird litmus test, but when I asked it to give me some facts about the pink-tinted Margot Robbie feature, it spoke about it as an ‘upcoming movie’ and gave me a release date of July 28, 2023. So, we appear to have the same situation as with ChatGPT, whose knowledge was previously limited to world events before 2022.

Of all the ChatGPT alternatives I’ve tried since the AI boom, Reka (or should I say, Yasa) is probably the most immediately impressive. While other AI betas feel clunky and sometimes like poor-man’s knockoffs, Reka holds its own not just with its visually pleasing user interface and easy setup, but with its multilingual capabilities and helpful, less robotic personality.


Nvidia’s new app will let you run an AI chatbot on your RTX-powered PC

Nvidia is offering users the opportunity to try out its new AI chatbot, Chat with RTX, which runs natively on your PC.

Well, it’s not a chatbot in the traditional sense like ChatGPT; it’s more of an AI summarizer, as Chat with RTX doesn’t come with its own knowledge base. The data it references has to come from documents that you provide. According to the company, the software supports multiple file formats including .txt, .pdf, .doc, and .xml. The way it works is that you upload a single file or an entire folder into Chat with RTX’s library, then ask questions relating to the uploaded content or have it sum up the text into a bite-sized paragraph.

As The Verge points out, the tool can help professionals analyze lengthy documents. And because it runs locally on your PC, you can ask Chat with RTX to process sensitive data without worrying about any potential leaks.

The software can also summarize YouTube videos. To do this, you’ll need to first change the dataset from Folder Path to YouTube URL, then paste the clip’s URL into Chat with RTX. It’ll then transcribe the entire video for the app to use as its knowledge base. URLs for YouTube playlists can be pasted as well, and you can specify how many of the playlist’s videos to process; the software will transcribe them all as normal.
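Nvidia hasn't published Chat with RTX's internals, but transcript-to-knowledge-base pipelines generally split long text into overlapping chunks before indexing, so a question can be matched against a manageable piece of the video. A generic sketch of that step (the chunk sizes here are arbitrary):

```python
# Hypothetical sketch: splitting a long transcript into overlapping chunks
# before indexing it as a local knowledge base. This is generic practice,
# not Nvidia's published implementation.

def chunk_text(text: str, chunk_words: int = 50, overlap: int = 10) -> list[str]:
    """Split text into word-based chunks that overlap to preserve context."""
    words = text.split()
    step = chunk_words - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_words]))
        if start + chunk_words >= len(words):
            break
    return chunks

transcript = " ".join(f"word{i}" for i in range(120))
chunks = chunk_text(transcript)
print(len(chunks), "chunks; second chunk starts at:", chunks[1].split()[0])
```

The overlap between chunks matters: without it, an answer that straddles a chunk boundary can be missed by the retriever entirely.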

Rough around the edges

Keep in mind Chat with RTX is far from perfect. As The Verge puts it, “the app is a little rough around the edges”.

It’s reportedly pretty good at summarizing documents, having “no problem pulling out all the key information.” However, it falters with YouTube videos. The publication uploaded one of its own clips for Chat with RTX to transcribe, and after looking through the summary, discovered it was for a completely different video. Additionally, the tool doesn’t understand context: if you ask a follow-up question “based on the context of a previous question,” the AI won’t know what you’re talking about. Each prompt must be treated as a completely new one.
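That limitation comes down to prompt construction: chatbots that "remember" a conversation typically do so by resending earlier turns with every request, while a stateless tool sends only the latest question. A toy illustration of the difference (the helper names are ours, not Nvidia's):

```python
# Why follow-ups fail in a stateless tool: conversational chatbots "remember"
# by resending prior turns with every request, while a stateless tool sends
# each question on its own, so pronouns like "it" have nothing to refer to.

def stateless_prompt(question: str) -> str:
    """Send only the latest question, as a stateless tool would."""
    return question

def stateful_prompt(history: list[tuple[str, str]], question: str) -> str:
    """Prepend prior Q&A turns so the model can resolve references like 'it'."""
    turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return f"{turns}\nQ: {question}" if turns else f"Q: {question}"

history = [("What GPU do I need?", "An RTX 30 Series or newer.")]
print(stateless_prompt("How much VRAM does it need?"))          # 'it' is ambiguous
print(stateful_prompt(history, "How much VRAM does it need?"))  # 'it' is resolved
```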

Requirements

The demo is free for everyone to try out, although your computer must meet certain requirements. It needs to have a GeForce RTX 30 Series or higher graphics card running on the latest Nvidia drivers, at least 8GB of VRAM, and Windows 10 or Windows 11 on board. 

The Verge says it took about 30 minutes for Chat with RTX to finish installing on its test PC, which housed an Intel Core i9-14900K CPU and a GeForce RTX 4090 GPU. So even if you have a powerful machine, it’ll still take a while. The app is nearly 40GB on disk and will eat up around 3GB of RAM when running.

We reached out to Nvidia asking if there are plans to expand support to RTX 20 Series graphics cards and when the final version will launch (assuming there is one). This story will be updated at a later time. 

Until then, check out TechRadar's list of the best graphics cards for 2024.


From chatterbox to archive: Google’s Gemini chatbot will hold on to your conversations for years

If you were thinking of sharing your deepest, darkest secrets with Google's freshly-rebranded family of generative AI apps, Gemini, just keep in mind that someone else might also see them. Google has made this explicitly clear in a lengthy Gemini support document where it elaborates on its data collection practices for Gemini chatbot apps across platforms like Android and iOS, as well as directly in-browser.

Google explained that it’s standard practice for human annotators to read, label, and process conversations that users have with Gemini. This data is used to improve Gemini so that it performs better in future conversations. Google does clarify that conversations are “disconnected” from specific Google accounts before being seen by reviewers, but also that they’re stored for up to three years, along with “related data” such as user devices, languages, and location. According to TechCrunch, Google doesn’t make it clear whether these are in-house annotators or outsourced workers.

If you’re uncomfortable relinquishing this sort of data in order to use Gemini, Google does give users some control over how and which Gemini-related data is retained. You can turn off Gemini Apps Activity in the My Activity dashboard (it’s on by default). Turning off this setting will stop Gemini from saving conversations in the long term, starting from the moment you disable it.

However, even if you do this, Google will save conversations associated with your account for up to 72 hours. You can also go in and delete individual prompts and conversations in the Gemini Apps Activity screen (although again, it’s unclear if this fully scrubs them from Google's records). 

A direct warning that's worth heeding

Google puts the following in bold for this reason – your conversations with Gemini are not just your own:

Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

Google’s AI policies regarding data collection and retention are in line with its AI competitors like OpenAI. OpenAI’s policy for the standard, free tier of ChatGPT is to save all conversations for 30 days unless a user is subscribed to the enterprise-tier plan and chooses a custom data retention policy.

Google and its competitors are navigating what is one of the most contentious aspects of generative AI – the issues raised and the necessity of user data that comes with the nature of developing and training AI models. So far, it’s been something of a Wild West when it comes to the ethics, morals, and legality of AI. 

That said, some governments and regulators have started to take notice – for example, the FTC in the US and the Italian Data Protection Authority. Now’s as good a time as ever for tech organizations and generative AI makers to pay attention and be proactive. We know they already do this for their corporate-oriented, paying customers, as those AI products very explicitly don’t retain data. Right now, tech companies don’t feel they need to do this for free individual users (or at least to give them the option to opt out), so until they do, they’ll probably continue to scoop up all the conversational data they can.


Not spending enough on Amazon already? Its new AI chatbot is here to help

If there's one tech innovation that our bank accounts didn't need in 2024, it's an Amazon chatbot with infinite knowledge of the site's array of potential impulse buys. But unfortunately for our savings, that's exactly what we've just been given in the form of Rufus.

Amazon says its Rufus chatbot has now launched in the US in beta form to “a small subset of customers” who use its mobile app, but that it'll “progressively roll out to additional US customers in the coming weeks”. Rufus is apparently “an expert shopping assistant” who's been trained on Amazon's product catalog and will help answer your questions in a conversational way.

Rather than Googling for extra advice on the differences between trail and road running shoes, the idea is that you can instead search for pointers in the Amazon app and Rufus will pop up with the answers. 

Quite how good those answers are remains to be seen, as Amazon says they come from “a combination of extensive product catalog, customer reviews, community Q&As, and information from across the web”. Considering the variable quality of Amazon's reviews, and the tendency of AI chatbots to hallucinate, you may still want to cross-reference your research with some external sources. 

Still, it's an early glimpse at the future of shopping, with retailers looking to arm you with all of the information you need so you can, well, spend more money with them. Amazon says that the questions can be as broad as “what are good gifts for Valentine’s Day?”, but also as specific as “is this cordless drill easy to hold?” if you're on a product page.

How to find and use Rufus

Right now, Rufus is only being made available to “select customers when they next update their Amazon Shopping app”. But if you live in the US and are keen to take it for a spin, it's worth updating your iOS or Android app to see if you're one of the early chosen ones.

If you are, the bar at the top of the app should now say “search or ask a question”. That's where you can fire conversational questions at Rufus, like “what to consider when buying headphones?”, or prompts like “best dinosaur toys for a 5-year-old” or “I want to start an indoor garden”.

The ability to ask specific questions about products on their product pages also sounds handy, although this will effectively only be a summary of the page's Q&As and reviews. Given our experience with AI shopping chatbots so far, we'd be reluctant to take everything at face value without double-checking with another source.

Still, with Rufus getting a wider US rollout in “the coming weeks”, it is a pretty major change to the Amazon app – and could change how we shop with the retail giant. Amazon will no doubt be hoping it convinces us to spend more – maybe we need two chatbots, with the other one warning us about our overdraft.
