Elon Musk brings controversial AI chatbot Grok to more X users in bid to halt exodus

Premium subscribers of all tiers on the X social media platform will soon gain access to its generative AI chatbot, Grok. Previously, the chatbot was only accessible to users who subscribed to the most expensive tier, Premium+, for $16 a month (approximately £12 or AU$25). That’s set to change, with X’s owner Elon Musk announcing in a post that availability of the large language model (LLM) is expanding to Basic and Premium tier X users.

Grok has been made open source, reportedly to allow researchers and developers to leverage its capabilities in their own projects and research. If you’re interested in the code, you can check out the Grok-1 repository on GitHub. Grok is the first major offering from Musk’s own AI venture, xAI.

As Dev Technosys, a mobile app and web development company, explains, Grok is Musk’s head-on challenge to ChatGPT, with the billionaire boasting that it beat GPT-3.5 on multiple benchmarks. Musk describes the chatbot as having “a focus on deep understanding and humor,” replying to questions with a “rebellious streak.” The model is trained on a massive dataset of text and code, including real-time text from X posts (which Musk points to as the bot’s unique advantage) and text scraped from across the web, such as Wikipedia articles and academic papers.

Some industry observers think that this could be a push to boost X subscriber numbers, as analysis performed by Sensor Tower and reported by NBC indicates that visitors to the platform and user retention have been dropping. This has seemingly spooked many advertisers and hit the platform’s revenues, with apparently 75 of the top 100 US advertisers cutting X from their ad budgets entirely from October 2022 onwards. 

It does look like Musk is hoping that an exclusive perk like access to such a well-informed and entertaining chatbot as Grok will convince people to become subscribers, and to keep those who are already subscribed. 


The Elon Musk-led ChatGPT that never was

Earlier this year, Musk leveled a lawsuit against what is undoubtedly Grok’s largest competitor and the current industry leader in generative AI: OpenAI. He was an early investor in the company but departed after disagreements over several issues, including OpenAI’s mission and vision, as well as control and equity in the company. Now, Musk asserts that OpenAI has deviated from its non-profit goals and is prioritizing corporate profits, particularly for Microsoft (a key investor and collaborator), above its other objectives – violating a contract called the ‘Founding Agreement.’

According to Musk, the Founding Agreement laid down specific principles and commitments that OpenAI had agreed to follow. OpenAI has responded to this accusation by denying such a contract, or any similar agreement, existed with Musk at all. Its overall response to the lawsuit so far has been dismissive, characterizing it as ‘frivolous’ and alleging that Musk is driven by his own business interests. 

Apparently, OpenAI established early on that it would transition into a for-profit organization, as it wouldn’t otherwise be able to raise the funds necessary to build the sorts of things it was planning as a non-profit. OpenAI claims Musk was not only aware of these plans and consulted when they were being made, but that he sought majority equity in OpenAI, wanted to control the board of directors at the time, and wanted to assume the position of CEO.


Elon Musk's Grok gambit

Musk didn’t give an exact date for Grok’s wider rollout, but according to TechCrunch, it’s due sometime at the end of this week. Having seen what Musk considers funny, many people are morbidly curious about what sort of artificial intelligence Grok offers. One other aspect of Grok that might concern (or please, depending on your point of view) people is that it will respond to queries and topics that are largely off-limits for other chatbots, including controversial political ideas and conspiracy theories.

The sourcing from X in real-time is one unique advantage that Grok has, although before Musk’s takeover, this would have arguably been a much bigger prize.

Despite my misgivings, Grok does give users another chatbot option to choose from, and more competition in this emerging field could spur further innovation as companies battle to win users.

YOU MIGHT ALSO LIKE…

TechRadar – All the latest technology news

Read More

WhatsApp is testing an all-knowing AI chatbot that will live in your search bar

WhatsApp is slated to receive a pair of AI-powered upgrades aiming to help people answer tough questions on the fly, as well as edit images on the platform.

Starting with answering questions, the upgrade integrates one of Meta’s AI models into the WhatsApp search bar. Doing so, according to WABetaInfo, would allow users to directly input queries without having to create a separate chat room for the AI. You'd be able to hold a quick conversation right on the same page. 

It appears this is an extension of the in-app assistants that originally came out back in November 2023. A screenshot in the report reveals WhatsApp will provide a handful of prompts to get a conversation flowing.

It’s unknown just how capable the search bar AI will be. The assistants are available in different personas specializing in certain topics. But looking at the aforementioned screenshot, it appears the search bar will house the basic Meta AI model. It would be really fun if we could assign the Snoop Dogg persona as the main assistant.


AI image editing

The second update is a collection of image editing features discovered by industry expert AssembleDebug after diving into a recent WhatsApp beta: three possibly upcoming tools called Backdrop, Restyle, and Expand. It’s unknown exactly what they do, as none of them works yet. However, the first two share names with features currently available on Instagram, so they may, in fact, function the same way.

Backdrop could let users change the background of an image into something different via text prompt, while Restyle could completely alter the art style of an uploaded picture. Think of these as filters, but more capable: you could turn a photograph into a watercolor painting or pixel art, or even create wholly unique content through a text prompt.

WhatsApp's new image editing tools

(Image credit: AssembleDebug/TheSPAndroid)

Expand is the new kid on the block. Judging by the name, AssembleDebug theorizes it’ll harness the power of AI “to expand images beyond their visible area”. Technology like this already exists on other platforms: Photoshop, for example, has Generative Expand, and Samsung’s Galaxy S24 series can generatively fill in the edges of images after they’ve been rotated or straightened.

WhatsApp gaining such an ability would be a great inclusion as it’ll give users a robust editing tool that is free. Most versions of this tech are locked behind a subscription or tied to a specific device.

Do keep in mind that neither feature is available to early testers at the time of this writing. They’re still in the works, and as stated earlier, we don’t know the full capabilities of either set. Regardless of their current status, it’s great to see that WhatsApp may one day come equipped with AI tech on the same level as what you’d find on Instagram, especially when it comes to the search bar assistant. The update will make accessing that side of Meta’s software more convenient for everyone.

If you prefer to tweak images on a desktop, check out TechRadar's list of the best free photo editors for PC and Mac.


Gemma, Google’s new open-source AI model, could make your next chatbot safer and more responsible

Google has unveiled Gemma, an open-source AI model that will allow people to create their own artificial intelligence chatbots and tools based on the same technology behind Google Gemini (the suite of AI tools formerly known as Bard and Duet AI).

Gemma is a collection of open-source models built from the same technology and research as Gemini, developed by the team at Google DeepMind. Alongside the new open-source model, Google has also put out a ‘Responsible Generative AI Toolkit’ to support developers looking to get to work and experiment with Gemma, according to an official blog post.

The open-source model comes in two variations, Gemma 2B and Gemma 7B, both pre-trained on data filtered to remove sensitive or personal information. Both versions of the model have also been tuned with reinforcement learning from human feedback, to significantly reduce the chances of any chatbot based on Gemma spitting out harmful content.
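To make the data-filtering idea concrete: stripping personal identifiers out of training text is one part of what such pre-processing involves. Here’s a deliberately minimal Python sketch – real pipelines rely on trained classifiers rather than the two illustrative regex patterns below:

```python
# Toy sketch of PII filtering applied to training text: replace obvious
# personal identifiers with placeholder tags before the text is used.
# Real filtering pipelines use far more robust detection than these regexes.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact jane.doe@example.com or 555-123-4567 for details."))
# → Contact [EMAIL] or [PHONE] for details.
```

The point isn’t the regexes themselves but the principle: scrub the data before training, so the model never memorizes the identifiers in the first place.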

A step in the right direction

While it may be tempting to think of Gemma as just another model that can spawn chatbots (you wouldn’t be entirely wrong), it’s interesting to see that the company seems to have genuinely developed Gemma to “[make] AI helpful for everyone” as stated in the announcement. It looks like Google’s approach with its latest model is to encourage more responsible use of artificial intelligence. 

Gemma’s release comes right after OpenAI unveiled the impressive video generator Sora, and while we may have to wait and see what developers can produce using Gemma, it’s comforting to see Google attempt to approach artificial intelligence with some level of responsibility. OpenAI has a track record of pumping features and products out and then cleaning up the mess and implementing safeguards later on (in the spirit of Mark Zuckerberg’s ‘Move fast and break things’ one-liner). 

One other interesting feature of Gemma is that it’s designed to be run on local hardware (a single CPU or GPU, although Google Cloud is still an option), meaning that something as simple as a laptop could be used to program the next hit AI personality. Given the increasing prevalence of neural processing units in upcoming laptops, it’ll soon be easier than ever for anyone to take a stab at building their own AI.


DeepMind and Meta staff plan to launch a new AI chatbot that could have the edge over ChatGPT and Bard

Since the explosion in popularity of large language model (LLM) chatbots like ChatGPT, Google Gemini, and Microsoft Copilot, many smaller companies have tried to wiggle their way into the scene. Reka, a new AI startup, is gearing up to take on artificial intelligence chatbot giants like Gemini (formerly known as Google Bard) and OpenAI’s ChatGPT – and it may have a fighting chance of actually doing so.

The company is spearheaded by Singaporean scientist Yi Tay and is working towards Reka Flash, a multilingual language model trained in over 32 languages. Reka Flash also boasts 21 billion parameters, with the company stating that the model is competitive with Google Gemini Pro and OpenAI’s GPT-3.5 across multiple AI benchmarks.

According to TechInAsia, the company has also released a more compact version of the model called Reka Edge, which offers 7 billion parameters and is aimed at specific use cases like on-device deployment. It’s worth noting that ChatGPT and Google Gemini have significantly more training parameters (approximately 175 billion and 137 billion respectively), but those bots have been around for longer, and there are benefits to more ‘compact’ AI models; for example, Google has Gemini Nano, an AI model designed to run on edge devices like smartphones that uses just 1.8 billion parameters – so Reka Edge has it beat there.
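For a rough sense of scale on those parameter counts, the weights alone dominate a model’s memory footprint. A back-of-envelope estimate, assuming 16-bit weights (two bytes per parameter – real deployments vary with quantization and runtime overhead):

```python
# Approximate memory needed just to hold model weights at fp16
# (2 bytes per parameter); quantization and runtime overhead change this.
def fp16_weight_gb(params_billions: float) -> float:
    return params_billions * 1e9 * 2 / 1024**3

for name, size in [("Reka Flash", 21), ("Reka Edge", 7), ("Gemini Nano", 1.8)]:
    print(f"{name} ({size}B params): ~{fp16_weight_gb(size):.1f} GB")
```

By this estimate a 7-billion-parameter model needs around 13GB for weights alone, which is why models under 2 billion parameters are the ones that realistically fit on phones.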

So, who’s Yasa?

The model is available to the public in beta on the official Reka site. I’ve had a go at using it and can confirm that it's got a familiar ChatGPT-esque feel to the user interface and the way the bot responds. 

The bot introduced itself as Yasa, developed by Reka, and gave me an instant rundown of all the things it could do for me. It had the usual AI tasks down, like general knowledge, sharing jokes or stories, and solving problems.

Interestingly, Yasa noted that it can also assist in translation, and listed 28 languages it can swap between. While my understanding of written Hindi is rudimentary, I did ask Yasa to translate some words and phrases from English to Hindi and from Hindi to English. 

I was incredibly impressed not just by the accuracy of the translation, but also by the way Yasa explained how it got there, breaking down each word in the phrase or sentence and translating it individually before giving the complete result. The response time for each prompt, no matter how long, was also very quick. Considering that non-English-language prompts have proven limited in the past with other popular AI chatbots, it’s a solid showing – although it’s not the only multilingual bot out there.


Reka translating

(Image credit: Future)

Reka AI Barbie

(Image credit: Future)

I tried to figure out how up to date the bot was with current events and general knowledge, and eventually pinned it down: it must have been trained on information that predates the release of the Barbie movie. I know, a weird litmus test, but when I asked it for some facts about the pink-tinted Margot Robbie feature, it spoke about it as an ‘upcoming movie’ and gave me the release date of July 28, 2023. So, we appear to have the same case as seen with ChatGPT, whose knowledge was previously limited to world events before 2022.

Of all the ChatGPT alternatives I’ve tried since the AI boom, Reka (or should I say, Yasa) is probably the most immediately impressive. While other AI betas feel clunky, and sometimes like poor man’s knockoffs, Reka holds its own not just with its visually pleasing user interface and easy setup, but with its multilingual capabilities and helpful, less robotic personality.


Nvidia’s new app will let you run an AI chatbot on your RTX-powered PC

Nvidia is offering users the opportunity to try out its new AI chatbot, Chat with RTX, which runs natively on your PC.

Well, it’s not a chatbot in the traditional sense like ChatGPT. It’s more of an AI summarizer, as Chat with RTX doesn’t come with its own knowledge base; the data it references has to come from documents that you provide. According to the company, the software supports multiple file formats, including .txt, .pdf, .doc, and .xml. The way it works is that you upload a single file or an entire folder into Chat with RTX’s library, then ask questions relating to the uploaded content or have it sum up the text into a bite-sized paragraph.

As TheVerge points out, the tool can help professionals analyze lengthy documents. And because it runs locally on your PC, you can ask Chat with RTX to process sensitive data without worrying about any potential leaks.

The software can also summarize YouTube videos. To do this, you’ll need to first change the dataset from Folder Path to YouTube URL, then paste the clip’s URL into Chat with RTX. It’ll then transcribe the entire video for the app to use as its knowledge base. URLs for YouTube playlists can be pasted as well, and you can specify how many of the playlist’s videos to pull in; either way, the software will transcribe everything as normal.

Rough around the edges

Keep in mind Chat with RTX is far from perfect. As TheVerge puts it, “the app is a little rough around the edges”. 

It’s reportedly pretty good at summarizing documents, since it had “no problem pulling out all the key information.” However, it falters with YouTube videos: the publication uploaded one of its clips for Chat with RTX to transcribe, and after looking through the summary, discovered it was for a completely different video. Additionally, it doesn’t understand context. If you ask a follow-up question “based on the context of a previous question,” the AI won’t know what you’re talking about; each prompt must be treated as a completely new one.

Requirements

The demo is free for everyone to try out, although your computer must meet certain requirements. It needs to have a GeForce RTX 30 Series or higher graphics card running on the latest Nvidia drivers, at least 8GB of VRAM, and Windows 10 or Windows 11 on board. 

TheVerge claims it took about 30 minutes for Chat with RTX to finish installing on its test PC, which housed an Intel Core i9-14900K CPU and a GeForce RTX 4090 GPU. So even if you have a powerful machine, it’ll still take a while. The app is nearly 40GB in size and eats up around 3GB of RAM when active.

We reached out to Nvidia asking if there are plans to expand support to RTX 20 Series graphics cards and when the final version will launch (assuming there is one). This story will be updated at a later time. 

Until then, check out TechRadar's list of the best graphics cards for 2024.


From chatterbox to archive: Google’s Gemini chatbot will hold on to your conversations for years

If you were thinking of sharing your deepest, darkest secrets with Google's freshly-rebranded family of generative AI apps, Gemini, just keep in mind that someone else might also see them. Google has made this explicitly clear in a lengthy Gemini support document where it elaborates on its data collection practices for Gemini chatbot apps across platforms like Android and iOS, as well as directly in-browser.

Google explained that it’s standard practice for human annotators to read, label, and process conversations that users have with Gemini. This data is used to improve Gemini so it performs better in future conversations. Google does clarify that conversations are “disconnected” from specific Google accounts before being seen by reviewers, but also that they’re stored for up to three years, along with “related data” like user devices, languages, and location. According to TechCrunch, Google doesn’t make it clear whether these are in-house annotators or outsourced workers.

If you’re feeling some discomfort about relinquishing this sort of data in order to use Gemini, Google does give users some control over which Gemini-related data is retained. You can turn off Gemini Apps Activity in the My Activity dashboard (it’s on by default). Turning off this setting will stop Gemini from saving conversations in the long term, starting from when you disable it.

However, even if you do this, Google will save conversations associated with your account for up to 72 hours. You can also go in and delete individual prompts and conversations in the Gemini Apps Activity screen (although again, it’s unclear if this fully scrubs them from Google's records). 

A direct warning that's worth heeding

Google puts the following in bold for this reason – your conversations with Gemini are not just your own:

Please don’t enter confidential information in your conversations or any data you wouldn’t want a reviewer to see or Google to use to improve our products, services, and machine-learning technologies.

Google’s AI policies regarding data collection and retention are in line with its AI competitors like OpenAI. OpenAI’s policy for the standard, free tier of ChatGPT is to save all conversations for 30 days unless a user is subscribed to the enterprise-tier plan and chooses a custom data retention policy.

Google and its competitors are navigating what is one of the most contentious aspects of generative AI – the issues raised and the necessity of user data that comes with the nature of developing and training AI models. So far, it’s been something of a Wild West when it comes to the ethics, morals, and legality of AI. 

That said, some governments and regulators have started to take notice – for example, the FTC in the US and the Italian Data Protection Authority. Now’s as good a time as ever for tech organizations and generative AI makers to pay attention and be proactive. We know they already do this for their corporate-oriented, paying customers, as those AI products very explicitly don’t retain data. Right now, tech companies don’t feel they need to do the same for free individual users (or at least give them the option to opt out), so until they do, they’ll probably continue to scoop up all of the conversational data they can.


Not spending enough on Amazon already? Its new AI chatbot is here to help

If there's one tech innovation that our bank accounts didn't need in 2024, it's an Amazon chatbot with infinite knowledge of the site's array of potential impulse buys. But unfortunately for our savings, that's exactly what we've just been given in the form of Rufus.

Amazon says its Rufus chatbot has now launched in the US in beta form to “a small subset of customers” who use its mobile app, but that it'll “progressively roll out to additional US customers in the coming weeks”. Rufus is apparently “an expert shopping assistant” who's been trained on Amazon's product catalog and will help answer your questions in a conversational way.

Rather than Googling for extra advice on the differences between trail and road running shoes, the idea is that you can instead search for pointers in the Amazon app and Rufus will pop up with the answers. 

Quite how good those answers are remains to be seen, as Amazon says they come from “a combination of extensive product catalog, customer reviews, community Q&As, and information from across the web”. Considering the variable quality of Amazon's reviews, and the tendency of AI chatbots to hallucinate, you may still want to cross-reference your research with some external sources. 

Still, it's an early glimpse at the future of shopping, with retailers looking to arm you with all of the information you need so you can, well, spend more money with them. Amazon says that the questions can be as broad as “what are good gifts for Valentine’s Day?”, but also as specific as “is this cordless drill easy to hold?” if you're on a product page.

How to find and use Rufus

Right now, Rufus is only being made available to “select customers when they next update their Amazon Shopping app”. But if you live in the US and are keen to take it for a spin, it's worth updating your iOS or Android app to see if you're one of the early chosen ones.

If you are, the bar at the top of the app should now say “search or ask a question”. That’s where you can fire conversational questions at Rufus, like “what to consider when buying headphones?”, or prompts like “best dinosaur toys for a 5-year-old” or “I want to start an indoor garden”.

The ability to ask specific questions about products on their product pages also sounds handy, although this will effectively only be a summary of the page's Q&As and reviews. Given our experience with AI shopping chatbots so far, we'd be reluctant to take everything at face value without double-checking with another source.

Still, with Rufus getting a wider US rollout in “the coming weeks”, it is a pretty major change to the Amazon app – and could change how we shop with the retail giant. Amazon will no doubt be hoping it convinces us to spend more – maybe we need two chatbots, with the other one warning us about our overdraft.


Hate taxes? H&R Block’s new AI chatbot aims to reduce your tax frustrations

Like a spoonful of mustard or mayonnaise — or perhaps you prefer sriracha? — it seems no software recipe is complete these days without a dollop of generative AI. The groundbreaking technology plays games better than you, writes songs for you … heck, it can even be your girlfriend (to each their own). And now it can help you with life’s most vexing challenge: taxes.

In mid-December, H&R Block quietly announced AI Tax Assist, a generative AI tool powered by Microsoft’s Azure OpenAI Service. The new offering leverages the tax giant’s decades of experience in tax prep and leans on its stable of more than 60,000 tax pros to answer your thorniest questions about U.S. and state tax laws: Can I deduct this new laptop? Is this a personal expense or a business one? How many roads must a man walk down, before travel is officially part of his job?

We see AI as one of the defining technologies of our time.

Sarah Bird, Responsible AI, Microsoft

In a press preview on Thursday, January 25, in New York City, H&R Block announced another new tool designed to simplify importing last year’s tax returns from the competition. And it offered TechRadar the opportunity to try out the new AI assistant software and talk about the future of tax prep.

“We see AI as one of the defining technologies of our time … but only if we do it responsibly,” explained Sarah Bird, who serves as global lead for responsible AI at Microsoft. (Bird appeared virtually at the event, owing to possible exposure to Covid.) Microsoft’s Azure OpenAI Service lets developers easily build generative AI experiences using a set of prebuilt and curated models from OpenAI, Meta, and beyond. Her team helps companies like H&R Block ensure that their tools use the highest quality training data, guide you with clever prompts, and so on. “It’s really a best practices implementation in terms of responsible AI,” she said.

H&R Block notes that the tech will not file or even fill out your forms; it merely answers questions. But if you’re hesitant to ask AI for tax advice, you have good reason. The tech is notorious for hallucinations, where it simply invents answers when it can’t find the right one. Some experts worry that problem may never be solved. “This isn’t fixable,” Emily Bender, a linguistics professor at UW’s Computational Linguistics Laboratory, told the Associated Press last fall. “It’s inherent in the mismatch between the technology and the proposed use cases.”

One answer to the problem is starting with the right training data. If the AI has reliable, trustworthy sources of data to pore through, it can find the right answer, saving you from hunting through the Kafka-esque bowels of the IRS to find the instructions for Schedule B or whatever. And off-the-shelf large language models (LLMs) simply don’t have that data, explained Aditya Thadani, VP of Artificial Intelligence Platforms for H&R Block. Ask GPT-4 a question, he noted, and you risk missing out on what’s new: the cut-off date for that LLM’s data sources is April 2023.

“The IRS has released a number of changes since then,” he told attendees at the H&R Block event. “They’re making changes well into December and January, well into the tax season. We’re making sure you get all that information.”

To try out the new system, TechRadar sat down with some sample data and asked a few test prompts: Am I missing any deductions? Can I deduct a car as a business expense? And so on. The chatbot offered reasonable responses: a few paragraphs of information culled from H&R Block’s deep catalog of data, links to find more information, and so on. The company says it can answer tax theory questions, clarify tax terms, and give guidance on specific tax rules. And crucial to the entire experience: live human beings — CPAs even! — are always just a click away.

“When we are not absolutely sure? Don’t guess. Give a response that we are actually confident in,” Thadani said. “And if you don’t get the response you are looking for from the AI, you can get it from the tax pro.”

Privacy matters: Who will see your data?

It’s hard to discuss any emerging technology without touching on privacy, and both Microsoft and H&R Block are very aware of the risks. After all, a person’s tax returns are highly personal and confidential – one reason they became such a hot-button issue in US presidential elections. Should a company be allowed to train an LLM on your data?

“We’re sitting on a lot of really personal, private information,” Thadani admitted. “As much as we want to use that to answer questions effectively, we have to continue to find the balance.” So the assistant won’t remember you. It won’t ingest your tax forms to answer the questions you pose. And by design, other people won’t benefit from your questions down the road.

The new software also taps into one of the quirks of our modern software assistants. We don’t necessarily talk to them like adults. We’ve been trained to ask Alexa or the Google Assistant halting half-words and phrases. Meanwhile, chatbots can converse in natural language. H&R Block’s tool works fine in either space, Bird explained.

“It’s incredibly enabling because it allows people to speak in words they’re comfortable with,” she said. There’s the real power of technology in a nutshell: “Make a complex thing more accessible to people because the technology meets them where they’re at.”

Now if it could only help us deduct a few holiday pounds from the waistline.


Microsoft’s Copilot chatbot will get 6 big upgrades soon – including ChatGPT’s new brain

Microsoft has announced that Copilot – the AI chatbot formerly known as Bing Chat – is soon to get half a dozen impressive upgrades.

This batch of improvements should make the Copilot chatbot considerably more powerful in numerous respects, both outside of and inside Windows 11.

So, first off, let’s break down the upgrades themselves (listed in a Microsoft blog post) before getting into a discussion of what difference they’re likely to make.

Firstly, and most importantly, Copilot is getting a new brain – or rather, an upgraded one – in the form of GPT-4 Turbo. That’s the latest GPT model from OpenAI, which makes various advances in terms of general capability and accuracy.

Another beefy upgrade is an updated engine for Dall-E 3, the chatbot’s image creation feature, which produces higher quality results that are more closely aligned with what’s requested by the user. This is actually in Copilot right now.

Thirdly, Microsoft promises that Copilot will do better with image searches, returning better results when you sling a picture at the AI in order to learn more about it.

Another addition is Deep Search, which uses GPT-4 to “deliver optimized search results for complex topics”, as Microsoft puts it. In practice, if you have a query for Copilot, it can expand it into a more in-depth search request that returns better results. Furthermore, if the terms of your query are vague and could relate to multiple topics, Deep Search will follow up on what those topics might be and offer suggestions to help you refine the query.
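The disambiguation step can be pictured as a tiny lookup: the real Deep Search uses GPT-4 to infer possible meanings, but a hypothetical hard-coded table is enough to show the shape of the idea:

```python
# Illustrative-only sketch of query disambiguation: when a term is ambiguous,
# offer candidate refinements. (Deep Search does this with GPT-4, not a table.)
AMBIGUOUS_TERMS = {
    "jaguar": ["the animal", "the car brand", "the classic Mac OS release"],
    "python": ["the programming language", "the snake"],
}

def refinements(query: str) -> list[str]:
    suggestions = []
    for word in query.lower().split():
        for meaning in AMBIGUOUS_TERMS.get(word, []):
            suggestions.append(f"{word} ({meaning})")
    return suggestions

print(refinements("jaguar maintenance"))
```

The user picks the intended sense, and the refined query is what actually gets searched – the LLM’s job is generating those candidate senses on the fly instead of reading them from a table.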

The fifth upgrade Microsoft has planned is Code Interpreter, which as the name suggests will help perform complex tasks including coding, data analysis and so forth. That’s not something the average user will benefit from, but there are those who will, of course.

Finally, Copilot in Microsoft’s Edge browser has a rewrite feature (for inline text composition) coming soon. This allows you to select text on a website and get the AI to rewrite it for you.


Analysis: Something for Google to worry about


There are some really useful changes inbound here. Getting GPT-4 Turbo is an upgrade (from GPT-4) that a lot of Copilot users have been clamoring for, and Microsoft tells us that it’s now being tested with select users. (We previously heard it still had a few kinks to iron out, so presumably that’s what’s currently going on).

GPT-4 Turbo will be rolling out in the “coming weeks” so it should be here soon enough, with any luck, and you’ll be able to see the difference it makes for you in terms of a greater level of accuracy for the chatbot when responding to your queries.

It’s great to see Dall-E 3 getting an upgrade, too, as it’s already an excellent image creation engine, frankly. (Recall the rush to use the feature when it was first launched, due to the impressive results being shared online).

The search query improvements, both the Deep Search capabilities and refined image searching, will also combine with the above upgrades to make Copilot a whole lot better across multiple fronts. (Although we do worry somewhat about the potential for abuse with that inline rewrite feature for Edge).

All of this forward momentum for Copilot comes as we just heard news of Google delaying its advances on the AI front, pushing some major launches back to the start of 2024. Microsoft isn’t hanging around when it comes to Copilot, that’s for sure, and Google has to balance keeping up, without pushing so hard that mistakes are made.


WhatsApp’s built-in AI chatbot looks like it’s rolling out to more people

AI bots are rapidly being added to just about every app and platform you can think of – with more on the way – and WhatsApp is stepping up its testing of a chatbot of its own, with easier access to the feature now on the way.

Back in September, WhatsApp owner Meta announced a variety of AI upgrades that would be coming to its products. Since then, a small number of users have been able to play around with an AI chatbot inside WhatsApp, capable of answering questions, generating text, and creating art like stickers.

Now, as spotted by WABetaInfo (via Android Police), a shortcut to the AI chat functionality has been added to the conversations screen in the beta version of WhatsApp for Android. If you're running the early beta version of the app, you may see it soon.

It also means that it shouldn't be too long before the rest of us get the same feature, and we can see how WhatsApp's AI helper compares against the likes of ChatGPT and Google Bard when it comes to providing useful and accurate information.

WhatsApp and AI

From what Meta has said so far, the purpose of the AI chatbot inside WhatsApp is to help with daily activities, offering advice and suggestions: how to entertain the kids at the weekend perhaps, or what to look for when upgrading a smartphone.

WhatsApp is by no means the first messaging app to give this a try – Snapchat introduced a similar feature back in February, and chats with its AI buddy appear alongside the rest of your conversations in the app.

Such are the capabilities of generative AI now, you can really ask these bots anything you like – from relationship advice to questions about complex technical topics. The point of them being built into apps is that you're less likely to leave the app and go somewhere else to get your AI-produced responses.

WhatsApp continues to be one of the most regularly updated apps out there: we've recently seen AI-made chat stickers, newsletter tools, and features to fight spammers introduced for users of the instant messenger.

Join TechRadar’s WhatsApp Channel to get the latest tech news, reviews, opinion and Black Friday deals directly from our editors. We’ll make sure you don’t miss a thing!
