ChatGPT gets a big new rival as Anthropic claims its Claude 3 AIs beat it

AI company Anthropic is previewing its new “family” of Claude 3 models it claims can outperform Google’s Gemini and OpenAI’s ChatGPT across multiple benchmarks.

This group consists of three AIs with varying degrees of “capability”. You have Claude 3 Haiku down at the bottom, followed by Claude 3 Sonnet, and then there’s Claude 3 Opus as the top dog. Anthropic claims the trio delivers “powerful performance” across the board due to their multimodality, improved level of accuracy, better understanding of context, and speed. What’s also notable about the trio is that they’ll be more willing to answer tough questions.

Anthropic explains that older versions of Claude would sometimes refuse to answer prompts that pushed the boundaries of the safety guardrails. Now, the Claude 3 family takes a more nuanced approach to its responses, allowing the models to answer those tricky questions.

Despite the all-around performance boost, much of the announcement focuses on Opus as the best in all of these areas. Anthropic goes so far as to say the model “exhibits near-human levels of comprehension… [for] complex tasks”.

Specialized AIs

To test it, Anthropic put Opus through a “Needle In a Haystack” (NIAH) evaluation to see how well it can recall data. As it turns out, it’s pretty good: the AI could remember information with almost perfect detail. The company goes on to claim that Opus is quite the smart cookie, able to solve math problems, generate computer code, and display better reasoning than GPT-4.

The technology isn’t without its quirks. Even though Anthropic states its AIs have improved accuracy, there is still the problem of hallucinations. The responses the models churn out may contain wrong information, although hallucinations are greatly reduced compared to Claude 2.1. Plus, Opus is a little slow to answer a question, with speeds comparable to Claude 2.

Of course, this isn’t to say Haiku or Sonnet are lesser than Opus, as they have specific use cases. Haiku, for example, is great at giving quick replies and grabbing information “from unstructured data”, though it’s not as good at answering math questions as Opus. Sonnet is a larger-scale model meant to help people save time on menial tasks and even parse lines of “text from images”, while Opus is ideal for large-scale operations.

Changing the internet

Both Sonnet and Opus are currently available for purchase, although there is a free version of Claude on the company website. A launch date was not given for Haiku, but Anthropic states it’ll be released soon.

As you can probably guess, the Claude 3 trio is meant more for businesses looking to automate certain workloads. Your experience with the group will likely come in the form of an online chatbot. Amazon recently announced it’s going to be implementing Anthropic’s new AIs into AWS (Amazon Web Services) giving websites on the platform a way to create a customized Claude 3 model to suit the needs of brands and their customers.

If you're looking for a model suited for everyday use, check out TechRadar's list of the best AI content generators for 2024.


TechRadar – All the latest technology news


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the free Google Gemini app on Android, the Google app on iOS, and the Gemini website on your desktop browser.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative and less accurate; it can't handle multi-step questions, can't really code, and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, that means the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


LG plans to launch an Apple Vision Pro rival in 2025

LG has announced that it’s bringing its OLED TV expertise to the XR (extended reality) space, and plans to launch some kind of device next year.

The report comes via South Korean news outlet The Guru (translated to English) in which LG Electronics CEO Cho Joo-wan told a reporter at CES 2024 that “[LG] will launch an XR device as early as next year.”

Not much else is known about the device; however, there were rumors late last year (via ET News, also translated from Korean) that LG was working on a VR headset, which would be powered by the then-unannounced Snapdragon XR2+ Gen 2 from Qualcomm. The same rumor correctly predicted that the Samsung XR/VR headset would use this new chipset, so while we should still take this latest leak with a pinch of salt, there appears to be a decent chance that it's accurate.

If the LG headset does indeed boast a Snapdragon XR2+ Gen 2 then we should expect it to be a high-end XR headset that’ll rival the likes of the soon-to-release Apple Vision Pro. This would also mean the headset is likely to be pricey – we'd expect somewhere in the region of $2,000 / £2,000 / AU$3,000.

An Apple Store staff member shows a customer how to use a Vision Pro headset.

The Apple Vision Pro might soon get another rival (Image credit: Apple)

As noted by an Upload VR report on the LG XR device announcement, the Korean language doesn’t always differentiate between singular and plural. As such, it’s possible that LG wants to release more than one XR device in 2025 – and if that’s the case we’d expect to see a high-end Apple Vision Pro rival, and a more affordable option that competes with the Meta Quest 3 or the recently announced Xreal Air 2 Ultra.

Is LG working alone? 

Alongside Qualcomm, LG might be working with another XR veteran to bring its headset to life: Meta – though reports suggest it would be LG helping Meta, not the other way around.

In February 2023 we reported that Meta was looking to partner with LG to create OLED displays for its XR headsets – most likely the Meta Quest Pro 2, but possibly the Meta Quest 4 as well.

If a Quest Pro 2 is on the way we’ve hypothesized that 2025 is most likely the earliest we’d see one; Meta likes to tease new XR hardware a year in advance, and typically makes announcements at its Meta Connect event that happens in September / October, so we’d likely get a 2024 teaser and a 2025 release.

This schedule fits with LG’s plan to launch a device next year, raising the possibility that the next Meta headset and the LG headset are one and the same.

The Meta Quest Pro and its controllers on a grey cushion

The Meta Quest Pro is good, but not all that popular (Image credit: Future)

That said, there’s a chance that LG is working on an XR project without Meta.

A few months after the February leak, in July 2023, there were reports that Meta had cancelled the Quest Pro 2. Meta CTO Andrew Bosworth took to Instagram Stories to deny the rumor, though his argument was based on a technicality – saying a canceled prototype isn’t the Quest Pro 2 until Meta names it the Quest Pro 2.

This confusing explanation leaves the door open to the Quest Pro 2 being delayed – likely due to the original’s seemingly very lackluster sales (we estimate the Quest Pro is around 86 times less popular than the Quest 2 using Steam Hardware Survey data), and the arrival of the Vision Pro. 

If LG is sitting on an excellent OLED display design – while Meta is back at the drawing board stage – why wouldn’t the South Korean display company leverage this screen, and its home entertainment expertise, to create a killer headset of its own?


Google Bard Advanced leak hints at imminent launch for ChatGPT rival

The release of Google Bard’s Advanced tier may be coming sooner than people expected, according to a recent leak, and what's more, it won’t be free.

Well, it’s not a “leak” per se; the company left a bunch of clues on its website that anybody could find if they knew where to look. That’s how developer Bedros Pamboukian on X (the platform formerly known as Twitter) found lines of code hinting at the imminent launch of Bard Advanced. What’s interesting is the discovery reveals the souped-up AI will be bundled with Google One, and if you buy a subscription, you can try it out as part of a three-month trial.

There is a bit of hype surrounding Bard Advanced because it will be powered by Google’s top-of-the-line Gemini Ultra model. In an announcement post from this past December, the company states Gemini Ultra has been designed to deal with “highly complex tasks and accept multimodal inputs”. This possibility is backed up by another leak from user Dylan Roussel on X claiming the chatbot will be capable of “advanced math and reasoning skills.”

It’s unknown which Google One tier people will have to buy to gain access or if there will be a new one for Bard Advanced. Neither leak reveals a price tag. But if we had to take a wild guess, you may have to opt for the $10 a month Premium plan. Considering the amount of interest surrounding the AI, it would make sense for Google to put up a high barrier to entry.

Potential features

Going back to the Roussel leak, it reveals a lot of other features that may or may not be coming to Google Bard. Things might change or “they may never land at all.”

First, it may be possible to create customized bots using the AI’s tool. There is very little information about them. We don’t know what they do or if they’re shareable. The only thing we do know is the bots are collectively codenamed Motoko.

Next, it appears Bard will receive a couple of extra tools. You have Gallery, a set of publicly viewable prompts on a variety of topics users can check out for brainstorming ideas. Then there’s Tasks. Roussel admits he couldn’t find many details about it, but to his understanding, it’ll be “used to manage long-running tasks such as” image generation.


Speaking of generating images, the third feature gives users a way to create backgrounds and foregrounds for smartphones and website banners. The last one, called Power Up, is said to be able to improve text prompts. Once again, there’s little information to go on: we don’t know how the backgrounds can be made (if that’s what’s going on) or what powering up a text prompt even looks like.

Users probably won’t have to wait for very long to get the full picture. Given that these clues were hidden on Google’s website, the official rollout must be just around the corner.

2024 is shaping up to be a big year for artificial intelligence, especially when it comes to the likes of Google Bard and its rival ChatGPT. If you want to know which one we think will come out on top, check out TechRadar's ChatGPT vs Google Bard analysis.


What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race earlier this year, launching a ChatGPT rival called Bard – an “experimental conversational AI service”. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like the model behind ChatGPT, it's a type of machine learning called a 'large language model' that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious to release Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot built on deep learning algorithms called 'large language models' – in this case, initially one called LaMDA.

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn”?

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on in leaps and bounds.

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will be supported by the Content Authenticity Initiative's open-source Content Credentials technology, which will bring transparency to images that are generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally, the chatbot is based on similar technology to ChatGPT, with even more tools and features coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has very limited knowledge of facts or events after January 2022.

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google's Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality. This allows the chatbot to answer user prompts and questions with both text and images.

Users can also do the same, with Bard able to work with Google Lens to have images uploaded into Bard and Bard responding in text. Multimodal functionality is a feature that was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, we also have Google Bard's Adobe-powered AI image generator, which will be powered by Adobe Firefly.


Google’s AI plans hit a snag as it reportedly delays next-gen ChatGPT rival

Development on Google’s Gemini AI is apparently going through a rough patch as the LLM (large language model) has reportedly been delayed to next year.

This comes from tech news site The Information, whose sources claim the project will not see a November launch as originally planned. Now it may not arrive until sometime in the first quarter of 2024, barring another delay. The report doesn’t explain exactly why the AI is being pushed back. Google CEO Sundar Pichai did lightly confirm the decision by stating the company is “focused on getting Gemini 1.0 out as soon as possible [making] sure it’s competitive [and] state of the art”. That said, The Information does suggest this situation is due to ChatGPT's strength as a rival.

Since its launch, ChatGPT has skyrocketed in popularity, effectively becoming a leading force in 2023’s generative AI wave. Besides being a content generator for the everyday user, corporations are using it for fast summarization of lengthy reports and even building new apps to handle internal processes and projections. It’s been so successful that OpenAI has had to pause sign-ups for ChatGPT Plus as servers have hit full capacity.

Plan of attack

So what is Google’s plan moving forward? According to The Information, the Gemini team wants to ensure “the primary model is as good as or better than” GPT-4, OpenAI’s latest model. That is a tall order. GPT-4 is multimodal, meaning it can accept video, speech, and text to launch a query and generate new content. What’s more, it boasts overall better performance than the older GPT-3.5 model, and is capable of performing more than one task at a time.

For Gemini, Google has several use cases in mind. The tech giant plans on using the AI to power new YouTube creator tools, upgrade Bard, plus improve Google Assistant. So far, it has managed to create mini versions of Gemini “to handle different tasks”, but right now, the primary focus is getting the main model up and running. 

It also plans to court advertisers with its AI, as advertising is “Google’s main moneymaker.” Company executives have reportedly talked about using Gemini to generate ad campaigns, including text and images. Videos could come later, too.

Bard upgrade

Google is far from out of the game, and while the company is putting a lot of work into Gemini, it's still building out and updating Bard.

First, if you’re stuck on your math homework, Bard will now provide step-by-step instructions on how to solve the problem, similar to Google Search. All you have to do is ask the AI or upload a picture of the question. Additionally, the platform can create charts for you by using the data you enter into the text prompts. Or you can ask it to make a smiley face like we did.

Google Bard's new chart plot feature

(Image credit: Future)

If you want to know more about this technology, we recommend learning about the five ways that ChatGPT is better than Google Bard (and three ways it isn't).



Samsung’s Apple Vision Pro rival tipped to land alongside the Galaxy Z Flip 6

The Apple Vision Pro has become a massive talking point in the tech world, and it promises to become one of the best virtual reality headsets when it's released next year. Now, Samsung wants to get in on the action with a headset of its own, and it could be revealed alongside the Galaxy Z Flip 6 in 2024.

We already know that Samsung is working with Google and Qualcomm to launch an extended reality (XR) headset at some point in the future (extended reality is a catch-all term that covers VR, AR, and MR or mixed reality). While Samsung hasn’t given any indication of a launch timeframe, Korean outlet JoongAng (translated version) claims it will launch by the end of 2024.

Specifically, it says the headset, supposedly codenamed ‘Infinite,’ will be produced by December of next year, and we’ll get our first peek at it during one of Samsung’s Unpacked events. Samsung usually hosts two of these shows every year, but JoongAng’s source says the headset will be revealed at the event held during “the second half of next year,” which is when the Galaxy Z Flip 6 is widely tipped to make an appearance.

The headset might have launched sooner, JoongAng says, but for delays caused by “product completeness” issues. Now, though, it looks like Samsung is closing in on a firm release date.

Seriously limited production

A VR headset clad in black plastic with a simple strap and six visible cameras on its face (Image credit: Vrtuoluo / Samsung)

Numerous reports have suggested that Apple has seriously cut back production of its Vision Pro, from around one million units to just 400,000 headsets a year. Yet even that dwarfs the number of XR headsets Samsung is set to produce.

According to JoongAng, Samsung will initially limit production of the device to just 30,000 units. This is due to the company wanting to gauge the response to its device, and assess how the industry looks after launch. In other words, Samsung wants to play it extremely safe without having to dedicate itself to a niche device in a fluctuating market.

Part of the reason for Samsung’s uncertainty might be the price. JoongAng’s report didn’t quote an expected launch price, but stated that Samsung aims to engage in a “fierce battle for leadership” in the XR space. If so, it might be planning a high-end device with a costly price tag to match, and it may want to see how the industry develops before committing too heavily to its headset.

Either way, it looks as though the XR headset battle might be about to heat up, with both Samsung and Apple working on challengers to incumbents like the Meta Quest Pro. Whether these devices will break through into the mainstream, though, is anyone’s guess.

Elon Musk says xAI is launching its first model and it could be a ChatGPT rival

Elon Musk’s artificial intelligence startup company, xAI, will debut its first long-awaited AI model on Saturday, November 4.

The billionaire made the announcement on X (the platform formerly known as Twitter) stating the tech will be released to a “select group” of people. He even boasts that “in some important respects, it is the best that currently exists.”

It’s been a while since we last heard anything from xAI. The startup hit the scene back in July, revealing it’s run by a team of former engineers from Microsoft, Google, and even OpenAI. Shortly after the debut on July 14, Musk held a 90-minute-long Twitter Spaces chat where he talked about his vision for the company. During the chat, Musk stated his startup would seek to create “a good AGI with the overarching purpose of just trying to understand the universe”. He wants it to run contrary to what he believes is problematic tech from the likes of Microsoft and Google.

Yet another chatbot

AGI stands for artificial general intelligence, and it’s the concept of an AI having “intelligence” comparable to or beyond that of a normal human being. The problem is that it’s more of an idea of what AI could be rather than a literal piece of technology. Even Wired, in its coverage of AGI, states there’s “no concrete definition of the term”.

So does this mean xAI will reveal some kind of super-smart model that will help humanity as well as be able to hold conversations like a sci-fi movie? No, but that could be the lofty end goal for Elon Musk and his team. We believe all we’ll see on November 4 is a simple chatbot like ChatGPT. Let’s call it “ChatX”, since the billionaire has an obsession with the letter “X”.

Does “ChatX” even stand a chance against the likes of Google Bard or ChatGPT? The latter has been around for almost a year now and has seen multiple updates, becoming more refined each time. Maybe xAI has solved the hallucination problem; that would be great to see. Unfortunately, it’s possible ChatX could just be another vehicle for Musk to spread his ideas and beliefs.

Analysis: A personal truth spinner

Musk has talked about wanting to have an alternative to ChatGPT that focuses on providing the “truth”, whatever that means. He has been a vocal critic of how fast companies have been developing their own generative AI models with seemingly reckless abandon, and even called for a six-month pause on AI training in March. Obviously, that didn’t happen, and the technology has advanced by leaps and bounds since then.

It's worth mentioning that Twitter, under Musk's management, has been known to comply with censorship requests by governments from around the world, so Musk's definition of truth seems dubious at best. Either way, we’ll know soon enough what the team's intentions are. Just don’t get your hopes up.

While we have you, be sure to check out TechRadar's list of the best AI writers for 2023.

Apple is secretly spending big on its ChatGPT rival to reinvent Siri and AppleCare

Apple is apparently going hard on developing AI, according to a new report that says it’s investing millions of dollars every day in multiple AI projects to rival the likes of ChatGPT.

According to those in the know (via The Verge, citing a paywalled report at The Information), Apple has teams working on conversational AI (read: chatbots), image-generating AIs, and “multimodal AI”, which would be a hybrid of the others, able to create video, image, and text responses to queries.

These AI models would have a variety of uses, including supporting Apple Care users as well as boosting Siri’s capabilities.

Currently, the most sophisticated large language model (LLM) Apple has produced is known as Ajax GPT. It reportedly has more than 200 billion parameters, and is claimed to be more powerful than OpenAI’s GPT-3.5; this was what ChatGPT used when it first became available to the general public in 2022, though OpenAI has since updated its service to GPT-4.
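For a sense of scale, here is a rough back-of-the-envelope calculation of what 200 billion parameters implies for memory. The precision figures below are common industry choices, not anything Apple or The Information has confirmed; the point is simply that a model this size is a data-center workload:

```python
def model_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to store the model weights."""
    return num_params * bytes_per_param / 1e9

params = 200e9  # the reported 200 billion parameters

# At 16-bit precision (2 bytes per weight), a common choice for LLMs:
print(model_memory_gb(params, 2.0))   # 400.0 (GB)

# Even aggressively quantized to 4 bits (0.5 bytes per weight), the
# weights alone would still occupy roughly 100 GB.
print(model_memory_gb(params, 0.5))   # 100.0 (GB)
```

Either figure is orders of magnitude beyond what a phone could hold, which is why models in this class run on remote servers rather than on-device.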

As with all rumors, we should take these reports with a pinch of salt. For now, Apple is remaining tight-lipped about its AI plans, and much like we saw with its Vision Pro VR headset plans, it won’t reveal anything official until it’s ready – if it even has anything to reveal.

The idea of Apple developing its own alternative to ChatGPT isn’t exactly far-fetched though – everyone and their dog in the tech space is working on AI at the moment, with Google, Microsoft, X (formerly Twitter), and Meta just a few of those with public AI aspirations.

Close-up of the Siri interface: Siri can reportedly expect a few upgrades, but when? (Image credit: Shutterstock / Tada Images)

Don't expect to see Apple AI soon

We should bear in mind that polish is everything for Apple; it doesn’t release new products until it feels it has got everything right, and chatbots are notoriously the antithesis of this philosophy. So much so that AI developers have a term – ‘to hallucinate’ – to describe when AI chatbots are incorrect, incoherent, or make information up, because they do it embarrassingly frequently. Even ChatGPT and the best ChatGPT alternatives are prone to hallucinating multiple times in a session, even when you aren’t purposefully trying to befuddle them.

We wouldn’t be too surprised if some Apple bots started to trickle out soon, though – even as early as next month. Something like its Apple Care AI assistant would presumably have the fairly simple task of matching up user complaints with a set of common troubleshooting solutions, patching you through to a human or sending you to a real-world Apple store if it gets stumped. But something like its Ajax GPT? We’ll be lucky to see it in 2024, and even then probably not without training wheels.

If given as much freedom as ChatGPT, Ajax could embarrass Apple and erode the brand’s reputation for delivering finely tuned, glitch-free products out of the box. The only way we’ll see Ajax soon is if AI takes a serious leap forward in terms of reliability – which is unlikely to happen quickly – or if Apple puts a boatload of limitations on its AI to ensure that it avoids making errors or wading into controversial topics. Such a chatbot would likely still be fine, but depending on how restricted it is, Ajax may struggle to be taken seriously as a ChatGPT rival.

Given that Apple has an event on September 12 – the Apple September event, at which we're expecting it to reveal the iPhone 15 handsets, among other products – there’s a slim chance we could hear something about its AI soon. But we wouldn’t recommend holding your breath for anything more than a few Siri updates.

Instead, we’d recommend keeping your eye on WWDC over the next few years (the company’s annual developer’s conference) to find out what AI chatbot plans Apple has up its sleeves. Just don’t be disappointed if we’re waiting until 2025 or beyond for an official update.

Apple is reportedly working on a ChatGPT rival – but you won’t see it anytime soon

Of course, Apple is working on its own generative AI, Large Language Model (LLM) and possible ChatGPT rival called, naturally, AppleGPT. Sure, the news is based on a Bloomberg report and Apple is predictably mum on the matter but, seriously, how could the Cupertino tech giant not be working on its own AI?

According to the Bloomberg report, Apple is basing its ultra-secret project on a learning framework known as Ajax, from rival and sometimes friend Google.

The effort to build some sort of chatbot and maybe other generative AI systems has been going on since late last year but, as someone who attended Apple's WWDC 2023 can tell you, Apple made no mention of chatbots of any kind at the June developer's conference.

Privacy roadblock

Apple's hyper-focus on user privacy has, as I see it, somewhat hamstrung its efforts to bring any kind of LLM-based chatbot to consumers. ChatGPT, Google Bard, and Microsoft Bing are all cloud-connected and send queries out to distant servers for rapid interpretation and response (based on the LLM's vast knowledge of how actual humans might respond under similar circumstances).

That, of course, is not the Apple way. The Neural Engine in its A16 Bionic chip is local; it does machine learning on the iPhone itself. Sending queries with all those possibly personal details is anathema to Apple’s privacy principles.
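The cloud flow described above can be sketched in a few lines. This is an illustrative example only: the endpoint and field names are hypothetical, not any real provider’s API, but it shows the shape of what a cloud-connected chatbot sends off-device, and what an on-device model would never need to transmit:

```python
import json

# Hypothetical sketch of a cloud chatbot request: the user's query -
# personal details and all - is packaged up and shipped to a remote
# server for the LLM to answer. Endpoint and field names are made up.

CLOUD_ENDPOINT = "https://llm.example.com/v1/chat"  # placeholder URL

def build_cloud_request(user_query: str, history: list[str]) -> str:
    """Serialize a query the way a cloud-connected chatbot would
    before POSTing it to the provider's servers."""
    payload = {
        "model": "example-llm",
        "messages": history + [user_query],  # everything leaves the device
    }
    return json.dumps(payload)

# An on-device approach (Apple's stated preference) would feed the query
# straight to a local model and never construct this request at all.
print(build_cloud_request("Directions home from 1 Infinite Loop?", []))
```

Everything in that `messages` field leaves the user’s hands, which is exactly the tension between cloud LLMs and Apple’s privacy stance.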

And yet, Apple clearly cannot afford to stay away from the siren call of generative AI. It is a revolution that is consuming the tech industry and the interests of average consumers and businesses. Even with the intense scrutiny AI development is under and the lawsuits some of it is facing, no one believes AI development is suddenly going to stop or go away. 

Apple has even gone so far, according to Bloomberg, as to create its own chatbot, AppleGPT. But that’s basically a highly limited internal test, and apparently not one that’s ever headed to consumer desktops.

What about Siri?

Where does Siri sit in all this? 

Bloomberg claims that the Ajax work has already been used to improve Siri. That may be so, but the only Siri improvement we're getting with iOS 17 (currently in public beta) is the ability to stop starting each voice assistant prompt with “Hey.”

I have no doubt that Apple is hard at work figuring out its place in the LLM AI sphere, but it's also clear from the report that these are early days. There is no overarching strategy, and I doubt the existential question of whether or not Siri could ever host AppleGPT (or whatever it's called) has been answered.

Ultimately, this is confirmation that Apple is just as aware of what's going on around it and with competitors as ever. It will sample and test, develop and test, scrap and develop, and then test some more. I don't expect Apple to tell us anything about this during the expected September launch of the iPhone 15. However, by the time WWDC 2024 rolls around, Apple might be ready to unveil a new platform. Maybe it'll be AppleGPT-kit, AppleLLM-Kit, or even AppleGPT. 

This all assumes that Apple can solve its big privacy question. If not, AppleGPT could remain a skunkworks project indefinitely.
