Claude’s big update makes it the best ChatGPT rival so far – and you can try it for free

Anthropic's Claude AI chatbot has long been one of the best ChatGPT alternatives, and now a big update has taken it to another level – including beating OpenAI's GPT-4o model in some industry-standard benchmark tests.

Like Google Gemini, Claude is a family of three different AI models. The new Claude 3.5 Sonnet (which takes the baton from Claude 3 Sonnet) is the company's mid-tier AI model, sitting between Claude 3 Haiku (for smaller tasks) and the larger Claude 3 Opus model, which is more like GPT-4.

This new Sonnet model now powers the browser-based Claude.ai and the Claude iOS app, both of which you can use right now for free. Like ChatGPT, there are Pro and Team subscriptions available for Claude that let you use it more intensely, but the free version gives you a taste of what it can do.

So what's new in Claude 3.5 Sonnet? The big improvements are its ability to handle vision-based tasks – for example, creating charts or transcribing handwritten notes – with Anthropic calling it “our strongest vision model yet”. The company also says that Sonnet “shows a marked improvement in grasping nuance, humor, and complex instructions”.

The upgraded Claude is also simply faster and smarter than before, edging out ChatGPT's latest GPT-4o model across many benchmarks, according to Anthropic. That includes setting new benchmark high scores for “graduate-level reasoning”, “undergraduate-level knowledge” and “coding proficiency”.

This means Claude could be a powerful new sidekick if you need help with creative writing, creating presentations and coding – particularly as it now has a new 'Artifacts' side window to help with refining its creations.

Ultimate homework assistant?

Another handy new feature in Claude 3.5 Sonnet is its so-called 'Artifacts' side window, which lets you see and tweak its visual creations without having to scroll back and forth through the chat.

For example, if you ask it to create a text document, graph or website design, these will appear in a separate window alongside your conversation. You can see an example of that in action in the video above, which shows off Claude's potential for creating graphs and presentations.

So how does this all compare to ChatGPT? One thing Claude doesn't have is a voice or audio powers – it's purely a text-based AI assistant. So if you're looking to chat casually with an AI assistant to brainstorm ideas, then ChatGPT remains the best AI tool around.

But Claude 3.5 Sonnet is undoubtedly a powerful new rival, edging out GPT-4o in benchmarks and giving us an increasingly well-rounded new option for both creative tasks and coding.

The headline AI battle might be ChatGPT vs Google Gemini vs Meta AI, but if you want a fast, smart AI sidekick to help with a variety of tasks, then it's well worth taking Claude 3.5 Sonnet for a spin in its browser version or iOS app.


Runway’s new OpenAI Sora rival shows that AI video is getting frighteningly realistic

Just a week on from the arrival of Luma AI's Dream Machine, another big OpenAI Sora rival has just landed – and Runway's latest AI video generator might be the most impressive one yet.

Runway was one of the original text-to-video pioneers, launching its Gen-2 model back in March 2023. But its new Gen-3 Alpha model, which will apparently be “available for everyone over the coming days”, takes things up several notches with new photo-realistic powers and promises of real-world physics.

The demo videos (which you can see below) showcase how versatile Runway's new AI model is, with the clips including realistic human faces, drone shots, simulations of handheld cameras and atmospheric dreamscapes. Runway says that all of them were generated with Gen-3 Alpha “with no modifications”.

Apparently, Gen-3 Alpha is also “the first of an upcoming series of models” that have been trained “on a new infrastructure built for large-scale multimodal training”. Interestingly, Runway added that the new AI tool “represents a significant step towards our goal of building General World Models”, which could create possibilities for gaming and more.

A 'General World Model' is one that effectively simulates an environment, including its physics – which is why one of the sample videos shows the reflections on a woman's face as she looks through a train window.

These tools won't just be for us to level up our GIF games either – Runway says it's “been collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3 Alpha”, which means tailored versions of the model for specific looks and styles. So expect to see this tech powering adverts, shorts and more very soon.

When can you try it?

A middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head

(Image credit: Runway)

Last week, Luma AI's Dream Machine arrived to give us a free AI video generator to dabble with, but Runway's Gen-3 Alpha model is more targeted towards the other end of the AI video scale. 

It's been developed in collaboration with pro video creators with that audience in mind, although Runway says it'll be “available for everyone over the coming days”. You can create a free account to try Runway's AI tools, though you'll need to pay a monthly subscription (starting from $12 per month, or around £10 / AU$18 a month) to get more credits.

You can create videos using text prompts – the clip above, for example, was made using the prompt “a middle-aged sad bald man becomes happy as a wig of curly hair and sunglasses fall suddenly on his head”. Alternatively, you can use still images or videos as a starting point.

The realism on show is simultaneously impressive and slightly terrifying, but Runway states that the model will be released with a new set of safeguards against misuse, including an “in-house visual moderation system” and C2PA (Coalition for Content Provenance and Authenticity) provenance standards. Let the AI video battles commence.


A new OpenAI Sora rival just landed for AI videos – and you can use it right now for free

The text-to-video AI boom has really kicked off in the past few months, with the only downside being that the likes of OpenAI Sora still aren't available for us to try. If you're tired of waiting, a new rival called Dream Machine just landed – and you can take it for a spin right now.

Dream Machine is made by Luma AI, which has previously released an app that helps you shoot 3D photos with your iPhone. Well, now it's turned its attention to generative video with Dream Machine, which has a free tier that you can use right now with a Google account – albeit with some caveats.

The main one is that Dream Machine seems to be slightly overwhelmed at the time of writing. There's currently a banner on the site stating that “generations take 120 seconds” and that “due to high demand, requests will be queued”. Our text prompt took over 20 minutes to be processed, but the results (below) are pretty impressive.

Dream Machine's outputs are more limited in length and resolution compared to the likes of OpenAI's Sora and Kling AI, but it's a good taster of how these services will work. The clips it produces are five seconds long and in 1360×752 resolution. You just type a prompt into its search bar and wait for it to appear in your account, after which you can download a watermarked version. 

While there was a lengthy wait for the results (which should hopefully improve once initial demand has dropped), our prompt of 'a close-up of a dog in sunglasses driving a car through Las Vegas at night' produced a clip that was very close to what we envisaged. 

Dream Machine's free plan is capped at 30 generations a month, but if you need more there are Standard (120 generations, $29.99 a month, about £24 / AU$45), Pro (400 generations, $99.99 a month, about £80 / AU$150) and Premier (2,000 generations, $499.99 a month, about £390 / AU$750) plans.

A taste of AI videos to come

Like most generative AI video tools, questions remain about exactly what data Luma AI's model was trained on – which means that its potential outside of personal use or improving your GIF game could be limited. It also isn't the first free text-to-video tool we've seen, with Runway's Gen 2 model coming out of beta last year.

The Dream Machine website also states that the tool does have technical limitations when it comes to handling text and motion, so there's plenty of trial-and-error involved. But as a taster of the more advanced (and no doubt more expensive) AI video generators to come, it's certainly a fun tool to test drive.

That's particularly the case, given that other alternatives like Google Veo currently have lengthy waitlists. Meanwhile, more powerful models like OpenAI's Sora (which can generate videos that are 60-seconds long) won't be available until later this year, while Kling AI is currently China-only.

This will certainly change as text-to-video generation becomes mainstream, but until then, Dream Machine is a good place to practice (if you don't mind waiting a while for the results).


The TikTok of AI video? Kling AI is a scarily impressive new OpenAI Sora rival

It feels like we're at a tipping point for AI video generators, and just a few months on from OpenAI's Sora taking social media by storm with its text-to-video skills, a new Chinese rival is now doing the same.

Called Kling AI, the new “video generation model” is made by the Chinese TikTok rival Kuaishou, and it's currently only available as a public demo in China via a waitlist. But that hasn't stopped it from quickly going viral, with some impressive clips that suggest it's at least as capable as Sora.

You can see some of the early demo videos (like the one below) on the Kling AI website, while a number of threads on X (formerly Twitter) from the likes of Min Choi (below) have rounded up what are claimed to be some impressive early creations made by the tool (with some help from editing apps).

A blue parrot turning its head

(Image credit: Kling AI)

As always, some caution needs to be applied with these early AI-generated clips, as they're cherry-picked examples, and we don't yet know anything about the hardware or other software that's been used to create them. 

For example, we later found that an impressive Air Head video seemingly made by OpenAI's Sora needed a lot of extra editing in post-production.


Still, those caveats aside, Kling AI certainly looks like another powerful AI video generator. It lets early testers create 1080/30p videos that are up to two minutes in length. The results, while still carrying some AI giveaways like smoothing and minor artifacts, are impressively varied, with a promising amount of coherence.

Exactly how long it'll be before Kling AI is opened up to users outside China remains to be seen. But with OpenAI suggesting that Sora will get a public release “later this year”, Kling AI had best not wait too long if it wants to become the TikTok of AI-generated video.

The AI video war heats up

Now that AI photo tools like Midjourney and Adobe Firefly are hitting the mainstream, it's clear that video generators are the next big AI battleground – and that has big implications for social media, the movie industry, and our ability to trust what we see during, say, major election campaigns.

Other examples of AI generators include Google Veo, Microsoft's VASA-1 (which can make lifelike talking avatars from a single photo), Runway Gen-2, and Pika Labs. Adobe has even shown how it could soon integrate many of these tools into Premiere Pro, which would give the space another big boost.

None of them are yet perfect, and it isn't clear how long it takes to produce a clip using the likes of Sora or Kling AI, nor what kind of computing power is needed. But the leaps being made towards photorealism and simulating real-world physics have been massive in the past year, so it clearly won't be long before these tools hit the mainstream.

That battle will become an international one, too – with the US still threatening a TikTok ban, expect there to be a few more twists and turns before the likes of Kling AI roll out worldwide. 


OpenAI’s big Google Search rival could launch within days – and kickstart a new era for search

When OpenAI launched ChatGPT in 2022, it set off alarm bells at Google HQ about what OpenAI’s artificial intelligence (AI) tool could mean for Google’s lucrative search business. Now, those fears seem to be coming true, as OpenAI is set for a surprise announcement next week that could upend the search world forever.

According to Reuters, OpenAI plans to launch a Google search competitor that would be underpinned by its large language model (LLM) tech. The big scoop here is the date that OpenAI has apparently set for the unveiling: Monday, May 13.

Intriguingly, that’s just one day before the mammoth Google I/O 2024 show, which is usually one of the biggest Google events of the year. Google often uses the event to promote its latest advances in search and AI, so it will have little time to react to whatever OpenAI decides to reveal the day before.

The timing suggests that OpenAI is really gunning for Google’s crown and aims to upstage the search giant on its home turf. The stakes, therefore, could not be higher for both firms.

OpenAI vs Google

OpenAI logo on wall

(Image credit: Shutterstock.com / rafapress)

We’ve heard rumors before that OpenAI has an AI-based search engine up its sleeve. Bloomberg, for example, recently reported that OpenAI’s search engine will be able to pull in data from the web and include citations in its results. News outlet The Information, meanwhile, has made similar claims that OpenAI is “developing a web search product”, and there has been a near-constant stream of whispers to this effect for months.
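
For the curious, here's a rough, purely conceptual sketch of the kind of retrieval-and-citation flow those rumors describe: fetch fresh web results for a query, number them as sources, then ask a language model to answer only from those sources and cite them inline. To be clear, nothing here reflects OpenAI's actual design – search_web and ask_llm are hypothetical placeholders standing in for whatever search index and model would really be used.

```python
# Conceptual sketch only: an LLM-powered "search with citations" flow.
# search_web and ask_llm are hypothetical placeholders, not real APIs.

def search_web(query: str) -> list[dict]:
    """Hypothetical: return ranked results as {'url': ..., 'snippet': ...} dicts."""
    raise NotImplementedError

def ask_llm(prompt: str) -> str:
    """Hypothetical: send a prompt to a large language model, return its reply."""
    raise NotImplementedError

def answer_with_citations(query: str, max_sources: int = 3) -> str:
    # 1. Pull in fresh data from the web for this query.
    results = search_web(query)[:max_sources]

    # 2. Number each source so the model can point back to it.
    sources = "\n".join(
        f"[{i + 1}] {r['url']}\n{r['snippet']}" for i, r in enumerate(results)
    )

    # 3. Ask the model to answer only from those sources, citing them like [1].
    prompt = (
        "Answer the question using only the numbered sources below, "
        f"and cite them inline like [1].\n\nSources:\n{sources}\n\nQuestion: {query}"
    )
    return ask_llm(prompt)
```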

But even without the direct leaks and rumors, it has been clear for a while that tools like ChatGPT present an alternative way of sourcing information to the more traditional search engines. You can ask ChatGPT to fetch information on almost any topic you can think of and it will bring up the answers in seconds (albeit sometimes with factual inaccuracies). ChatGPT Plus can access information on the web if you’re a paid subscriber, and it looks like this will soon be joined by OpenAI’s dedicated search engine.

Of course, Google isn’t going to go down without a fight. The company has been pumping out updates to its Gemini chatbot, as well as incorporating various AI features into its existing search engine, including AI-generated answers in a box on the results page.

Whether OpenAI’s search engine will be enough to knock Google off its perch is anyone’s guess, but it’s clear that the company’s success with ChatGPT has prompted Google to radically rethink its search offering. Come next week, we might get a clearer picture of how the future of search will look.


ChatGPT gets a big new rival as Anthropic claims its Claude 3 AIs beat it

AI company Anthropic is previewing its new “family” of Claude 3 models it claims can outperform Google’s Gemini and OpenAI’s ChatGPT across multiple benchmarks.

This group consists of three AIs with varying degrees of “capability”. You have Claude 3 Haiku down at the bottom, followed by Claude 3 Sonnet, and then there’s Claude 3 Opus as the top dog. Anthropic claims the trio delivers “powerful performance” across the board due to their multimodality, improved level of accuracy, better understanding of context, and speed. What’s also notable about the trio is they’ll be more willing to answer tough questions. 

Anthropic explains that older versions of Claude would sometimes refuse to answer prompts that pushed the boundaries of the safety guardrails. Now, the Claude 3 family takes a more nuanced approach to its responses, allowing the models to answer those tricky questions.

Despite the all-around performance boost, much of the announcement is focused on Opus as being the best in all of these areas. Anthropic goes so far as to say the model “exhibits near-human levels of comprehension… [for] complex tasks”.

Specialized AIs

To test it, Anthropic put Opus through a “Needle In a Haystack” or NIAH evaluation to see how well it's able to recall data. As it turns out, it's pretty good, since the AI could remember information with almost perfect detail. The company goes on to claim that Opus is quite the smart cookie, able to solve math problems, generate computer code, and display better reasoning than GPT-4.
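
As a bit of background, a “Needle In A Haystack” test simply buries one specific fact inside a large amount of unrelated filler text, then checks whether the model can fish it back out when asked. Here's a minimal, illustrative sketch of that idea – the query_model call is a hypothetical placeholder, and Anthropic's actual evaluation is considerably more rigorous:

```python
# Minimal sketch of a "Needle In A Haystack" (NIAH) recall test.
# query_model is a hypothetical placeholder for whatever model API you'd call.

def build_haystack(filler: str, needle: str, total_sentences: int, depth: float) -> str:
    """Bury one 'needle' fact at a given relative depth inside repetitive filler."""
    sentences = [filler] * total_sentences
    sentences.insert(int(depth * total_sentences), needle)
    return " ".join(sentences)

def recalled(response: str, expected_fact: str) -> bool:
    """Crude pass/fail: did the model's answer contain the buried fact?"""
    return expected_fact.lower() in response.lower()

needle = "The secret ingredient in the recipe is cardamom."
haystack = build_haystack(
    filler="The quick brown fox jumps over the lazy dog.",
    needle=needle,
    total_sentences=2000,
    depth=0.5,  # bury the fact halfway through the context
)
prompt = haystack + "\n\nBased only on the text above, what is the secret ingredient?"

# response = query_model(prompt)           # hypothetical model call
# print(recalled(response, "cardamom"))    # True if the needle was retrieved
```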

The technology isn’t without its quirks. Even though Anthropic states its AIs have improved accuracy, there is still the problem of hallucinations. The responses the models churn out may contain wrong information, although Anthropic says hallucinations are greatly reduced compared to Claude 2.1. Plus, Opus is a little slow when it comes to answering a question, with speeds comparable to Claude 2.

Of course, this isn’t to say Haiku or Sonnet are lesser than Opus, as they have specific use cases. Haiku, for example, is great at giving quick replies and grabbing information “from unstructured data”, though it’s not as good at answering math questions as Opus. Sonnet is a larger-scale model meant to help people save time at menial tasks and even parse lines of “text from images”, while Opus is ideal for large-scale operations.

Changing the internet

Both Sonnet and Opus are currently available for purchase although there is a free version of Claude on the company website. A launch date was not given for Haiku, but Anthropic states it’ll be released soon. 

As you can probably guess, the Claude 3 trio is meant more for businesses looking to automate certain workloads. Your experience with the group will likely come in the form of an online chatbot. Amazon recently announced it’s going to be implementing Anthropic’s new AIs into AWS (Amazon Web Services), giving businesses on the platform a way to create a customized Claude 3 model to suit the needs of brands and their customers.

If you're looking for a model suited for everyday use, check out TechRadar's list of the best AI content generators for 2024.


Google Gemini explained: 7 things you need to know about the new Copilot and ChatGPT rival

Google has been a sleeping AI giant, but this week it finally woke up. Google Gemini is here and it's the tech giant's most powerful range of AI tools so far. But Gemini is also, in true Google style, really confusing, so we're here to quickly break it all down for you.

Gemini is the new umbrella name for all of Google's AI tools, from chatbots to voice assistants and full-blown coding assistants. It replaces both Google Bard – the previous name for Google's AI chatbot – and Duet AI, the name for Google's Workspace-oriented rival to Copilot Pro and ChatGPT Plus.

But this is also way more than just a rebrand. As part of the launch, Google has released a new free Google Gemini app for Android (in the US, for now). For the first time, Google is also releasing its most powerful large language model (LLM) so far, called Gemini Ultra 1.0. You can play with that now as well, if you sign up for its new Google One AI Premium subscription (more on that below).

This is all pretty head-spinning stuff, and we haven't even scratched the surface of what you can actually do with these AI tools yet. So for a quick fast-charge to get you up to speed on everything Google Gemini, plug into our easily digestible explainer below…

1. Gemini replaces Google Bard and Duet AI

In some ways, Google Gemini makes things simpler. It's the new umbrella name for all of Google's AI tools, whether you're on a smartphone or desktop, or using the free or paid versions.

Gemini replaces Google Bard (the previous name for Google's “experimental” AI chatbot) and Duet AI, the collection of work-oriented tools for Google Workspace. Looking for a free AI helper to make you images or redraft emails? You can now go to Google Gemini and start using it with a standard Google account.

But if you want the more powerful Gemini Advanced AI tools – and access to Google's newest Gemini Ultra LLM – you'll need to pay a monthly subscription. That comes as part of a Google One AI Premium Plan, which you can read more about below.

To sum up, there are three main ways to access Google Gemini: the Gemini app on Android, the Google app on iOS, and the Gemini website in your browser.

2. Gemini is also replacing Google Assistant

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

As we mentioned above, Google has launched a new free Gemini app for Android. This is rolling out in the US now and Google says it'll be “fully available in the coming weeks”, with more locations “coming soon”. Google is known for having a broad definition of “soon”, so the UK and EU may need to be patient.

There's going to be a similar rollout for iOS and iPhones, but with a different approach. Rather than a separate standalone app, Gemini will be available in the Google app.

The Android app is a big deal in particular because it'll let you set Gemini as your default voice assistant, replacing the existing Google Assistant. You can set this during the app's setup process, where you can tap “I agree” for Gemini to “handle tasks on your phone”.

Do this and it'll mean that whenever you summon a voice assistant on your Android phone – either by long-pressing your home button or saying “Hey Google” – you'll speak to Gemini rather than Google Assistant. That said, there is evidence that you may not want to do that just yet…

3. You may want to stick with Google Assistant (for now)

An Android phone on an orange background showing the Google Gemini app

(Image credit: Google)

The Google Gemini app has only been out for a matter of days – and there are early signs of teething issues and limitations when it comes to using Gemini as your voice assistant.

The Play Store is filling up with complaints stating that Gemini asks you to tap 'submit' even when using voice commands and that it lacks functionality compared to Assistant, including being unable to handle hands-free reminders, home device control and more. We've also found some bugs during our early tests with the app.

Fortunately, you can switch back to the old Google Assistant. To do that, just go to the Gemini app, tap your Profile in the top-right corner, then go to Settings > Digital assistants from Google. In here you'll be able to choose between Gemini and Google Assistant.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) claims that Gemini is “an important first step in building a true AI assistant – one that is conversational, multimodal and helpful”. But right now, it seems that “first step” is doing a lot of heavy lifting.

4. Gemini is a new way to quiz Google's other apps

Two phones on an orange background showing the Google Gemini app

(Image credit: Google)

Like the now-retired Bard, Gemini is designed to be a kind of creative co-pilot if you need help with “writing, brainstorming, learning, and more”, as Google describes it. So like before, you can ask it to tell you a joke, rewrite an email, help with research and more. 

As always, the usual caveats remain. Google is still quite clear that “Gemini will make mistakes” and that, even though it's improving by the day, Gemini “can provide inaccurate information, or it can even make offensive statements”.

This means its other use case is potentially more interesting. Gemini is also a new way to interact with Google's other services like YouTube, Google Maps and Gmail. Ask it to “suggest some popular tourist sites in Seattle” and it'll show them in Google Maps. 

Another example is asking it to “find videos of how to quickly get grape juice out of a wool rug”. This means Gemini is effectively a more conversational way to interact with the likes of YouTube and Google Drive. It can also now generate images, which was a skill Bard learnt last week before it was renamed.

5. The free version of Gemini has limitations

Two phones on an orange background showing the Google Gemini Android app

(Image credit: Future)

The free version of Gemini (which you access in the Google Gemini app on Android, in the Google app on iOS, or on the Gemini website) has quite a few limitations compared to the subscription-only Gemini Advanced. 

This is partly because it's based on a simpler large language model (LLM) called Gemini Pro, rather than Google's new Gemini Ultra 1.0. Broadly speaking, the free version is less creative, less accurate, unable to handle multi-step questions, can't really code and has more limited data-handling powers.

This means the free version is best for basic things like answering simple questions, summarizing emails, making images, and (as we discussed above) quizzing Google's other services using natural language.

Looking for an AI assistant that can help with advanced coding, complex creative projects, and also work directly within Gmail and Google Docs? Google Gemini Advanced could be more up your street, particularly if you already subscribe to Google One… 

6. Gemini Advanced is tempting for Google One users

The subscription-only Gemini Advanced costs $19.99 / £18.99 / AU$32.99 per month, although you can currently get a two-month free trial. Confusingly, you get Advanced by paying for a new Google One AI Premium Plan, which includes 2TB of cloud storage.

This means Gemini Advanced is particularly tempting if you already pay for a Google One cloud storage plan (or are looking to sign up for it anyway). With a 2TB Google One plan already costing $9.99 / £7.99 / AU$12.49 per month, that means the AI features are effectively setting you back an extra $10 / £11 / AU$20 a month.

There's even better news for those who already have a Google One subscription with 5TB of storage or more. Google says you can “enjoy AI Premium features until July 21, 2024, at no extra charge”.

This means that Google, in a similar style to Amazon Prime, is combining its subscription offerings (cloud storage and its most powerful AI assistant) in order to make them both more appealing (and, most likely, more sticky too).

7. The Gemini app could take a little while to reach the UK and EU

Two phones on an orange background showing the Google Gemini app

(Image credit: Future)

While Google has stated that the Gemini Android app is “coming soon” to “more countries and languages”, it hasn't given any timescale for when that'll happen – and a possible reason for the delay is that it's waiting for the EU AI Act to become clearer.

Sissie Hsiao (Google's VP and General Manager of Gemini experiences) told the MIT Technology Review “we’re working with local regulators to make sure that we’re abiding by local regime requirements before we can expand.”

While that sounds a bit ominous, Hsiao added that “rest assured, we are absolutely working on it and I hope we’ll be able to announce expansion very, very soon.” So if you're in the UK or EU, you'll need to settle for tinkering with the website version for now.

Given the early reviews of the Google Gemini Android app, and its inconsistencies as a Google Assistant replacement, that might well be for the best anyway.


LG plans to launch an Apple Vision Pro rival in 2025

LG has announced that it’s bringing its OLED TV expertise to the XR (extended reality) space, and plans to launch some kind of device next year.

The report comes via South Korean news outlet The Guru (translated to English) in which LG Electronics CEO Cho Joo-wan told a reporter at CES 2024 that “[LG] will launch an XR device as early as next year.”

Not much else is known about the device; however, there were rumors late last year (via ET News, also translated from Korean) that LG was working on a VR headset, which would be powered by the then-unannounced Snapdragon XR2+ Gen 2 from Qualcomm. The same rumor correctly predicted that the Samsung XR/VR headset would use this new chipset, so while we should still take this latest leak with a pinch of salt there appears to be a decent chance that it's accurate.

If the LG headset does indeed boast a Snapdragon XR2+ Gen 2 then we should expect it to be a high-end XR headset that’ll rival the likes of the soon-to-release Apple Vision Pro. This would also mean the headset is likely to be pricey – we'd expect somewhere in the region of $2,000 / £2,000 / AU$3,000.

An Apple Store staff member shows a customer how to use a Vision Pro headset.

The Apple Vision Pro might soon get another rival (Image credit: Apple)

As noted by an Upload VR report on the LG XR device announcement, the Korean language doesn’t always differentiate between singular and plural. As such, it’s possible that LG wants to release more than one XR device in 2025 – and if that’s the case we’d expect to see a high-end Apple Vision Pro rival, and a more affordable option that competes with the Meta Quest 3 or the recently announced Xreal Air 2 Ultra.

Is LG working alone? 

Alongside Qualcomm, LG might be working with another XR veteran – Meta – to bring its headset to life, though reports suggest it would be LG helping Meta, not the other way around.

In February 2023 we reported that Meta was looking to partner with LG to create OLED displays for its XR headsets – most likely the Meta Quest Pro 2, but possibly the Meta Quest 4 as well.

If a Quest Pro 2 is on the way we’ve hypothesized that 2025 is most likely the earliest we’d see one; Meta likes to tease new XR hardware a year in advance, and typically makes announcements at its Meta Connect event that happens in September / October, so we’d likely get a 2024 teaser and a 2025 release.

This schedule fits with LG’s plan to launch a device next year, suggesting that the next Meta headset and the LG headset are one and the same.

The Meta Quest Pro and its controllers on a grey cushion

The Meta Quest Pro is good, but not all that popular (Image credit: Future)

That said, there’s a chance that LG is working on an XR project without Meta.

A few months after the February leak, in July 2023, there were reports that Meta had cancelled the Quest Pro 2. Meta CTO Andrew Bosworth took to Instagram Stories to deny the rumor, though his argument was based on a technicality – saying a canceled prototype isn’t the Quest Pro 2 until Meta names it the Quest Pro 2.

This confusing explanation leaves the door open to the Quest Pro 2 being delayed – likely due to the original’s seemingly very lackluster sales (we estimate the Quest Pro is around 86 times less popular than the Quest 2 using Steam Hardware Survey data), and the arrival of the Vision Pro. 

If LG is sitting on an excellent OLED display design – while Meta is back at the drawing board stage – why wouldn’t the South Korean display company leverage this screen, and its home entertainment expertise, to create a killer headset of its own?


Google Bard Advanced leak hints at imminent launch for ChatGPT rival

The release of Google Bard’s Advanced tier may be coming sooner than people expected, according to a recent leak, and what's more, it won’t be free.

Well, it’s not a “leak” per se; the company left a bunch of clues on its website that anybody could find if they knew where to look. That’s how developer Bedros Pamboukian on X (the platform formerly known as Twitter) found lines of code hinting at the imminent launch of Bard Advanced. What’s interesting is the discovery reveals the souped-up AI will be bundled with Google One, and if you buy a subscription, you can try it out as part of a three-month trial.

There is a bit of hype surrounding Bard Advanced because it will be powered by Google’s top-of-the-line Gemini Ultra model. In an announcement post from this past December, the company states Gemini Ultra has been designed to deal with “highly complex tasks and accept multimodal inputs”. This possibility is backed up by another leak from user Dylan Roussel on X claiming the chatbot will be capable of “advanced math and reasoning skills.”

It’s unknown which Google One tier people will have to buy to gain access or if there will be a new one for Bard Advanced. Neither leak reveals a price tag. But if we had to take a wild guess, you may have to opt for the $10-a-month Premium plan. Considering the amount of interest surrounding the AI, it would make sense for Google to put up a high barrier to entry.

Potential features

Going back to the Roussel leak, it reveals a lot of other features that may or may not be coming to Google Bard. Things might change or “they may never land at all.”

First, it may be possible to create customized bots using the AI’s tool. There is very little information about them. We don’t know what they do or if they’re shareable. The only thing we do know is the bots are collectively codenamed Motoko.

Next, it appears Bard will receive a couple of extra tools. You have Gallery, a set of publicly viewable prompts on a variety of topics users can check out for brainstorming ideas. Then there’s Tasks. Roussel admits he couldn’t find many details about it, but to his understanding, it’ll be “used to manage long-running tasks such as” image generation.


Speaking of generating images, the third feature gives users a way to create backgrounds and foregrounds for smartphones and website banners. The last one, called Power Up, is said to be able to improve text prompts. Once again, there’s little information to go on. We don’t know how the backgrounds can be made (if that’s what’s going on) or what powering up a text prompt even looks like. It's hard to say for sure.

Users probably won’t have to wait for very long to get the full picture. Given the fact these were hidden on Google’s website, the official rollout must be just around the corner.

2024 is shaping up to be a big year for artificial intelligence, especially when it comes to the likes of Google Bard and ChatGPT. If you want to know which one we think will come out on top, check out TechRadar's ChatGPT vs Google Bard analysis.


What is Google Bard? Everything you need to know about the ChatGPT rival

Google finally joined the AI race and launched a ChatGPT rival called Bard – an “experimental conversational AI service” – earlier this year. Google Bard is an AI-powered chatbot that acts as a poet, a crude mathematician and even a decent conversationalist.

The chatbot is similar to ChatGPT in many ways. It's able to answer complex questions about the universe and give you a deep dive into a range of topics in a conversational, easygoing way. The bot, however, differs from its rival in one crucial respect: it's connected to the web for free, so – according to Google – it gives “fresh, high-quality responses”.

Google Bard is powered by PaLM 2. Like ChatGPT, it's a type of machine learning called a 'large language model' that's been trained on a vast dataset and is capable of understanding human language as it's written.

Who can access Google Bard?

Bard was announced in February 2023 and rolled out for early access the following month. Initially, a limited number of users in the UK and US were granted access from a waitlist. However, at Google I/O – an event where the tech giant dives into updates across its product lines – Bard was made open to the public.

It’s now available in more than 180 countries around the world, including the US and all member states of the European Union. As of July 2023, Bard works with more than 40 languages. You need a Google account to use it, but access to all of Bard’s features is entirely free. Unlike OpenAI’s ChatGPT, there is no paid tier.

The Google Bard chatbot answering a question on a computer screen

(Image credit: Google)

Opening up chatbots for public testing brings great benefits that Google says it's “excited” about, but also risks that explain why the search giant has been so cautious to release Bard into the wild. The meteoric rise of ChatGPT has, though, seemingly forced its hand and expedited the public launch of Bard.

So what exactly will Google's Bard do for you and how will it compare with ChatGPT, which Microsoft appears to be building into its own search engine, Bing? Here's everything you need to know about it.

What is Google Bard?

Like ChatGPT, Bard is an experimental AI chatbot that's built on deep learning algorithms called 'large language models', in this case one called LaMDA. 

To begin with, Bard was released on a “lightweight model version” of LaMDA. Google says this allowed it to scale the chatbot to more people, as this “much smaller model requires significantly less computing power”.

The Google Bard chatbot answering a question on a phone screen

(Image credit: Google)

At I/O 2023, Google launched PaLM 2, its next-gen language model trained on a wider dataset spanning multiple languages. The model is faster and more efficient than LaMDA, and comes in four sizes to suit the needs of different devices and functions.

Google is already training its next language model, Gemini, which we think is one of its most exciting projects of the next 25 years. Built to be multi-modal, Gemini is touted to deliver yet more advancements in the arena of generative chatbots, including features such as memory.

What can Google Bard do?

In short, Bard is a next-gen development of Google Search that could change the way we use search engines and look for information on the web.

Google says that Bard can be “an outlet for creativity” or “a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills”.

Unlike traditional Google Search, Bard draws on information from the web to help it answer more open-ended questions in impressive depth. For example, rather than standard questions like “how many keys does a piano have?”, Bard will be able to give lengthy answers to a more general query like “is the piano or guitar easier to learn?”

The Google Bard chatbot answering a question on a computer screen

An example of the kind of prompt that Google’s Bard will give you an in-depth answer to. (Image credit: Google)

We initially found Bard to fall short in terms of features and performance compared to its competitors. But since its public deployment earlier this year, Google Bard’s toolkit has come on leaps and bounds. 

It can generate code in more than 20 programming languages, help you solve text-based math equations and visualize information by generating charts, either from information you provide or tables it includes in its responses. It’s not foolproof, but it’s certainly a lot more versatile than it was at launch.

Further updates have introduced the ability to listen to Bard’s responses, change their tone using five options (simple, long, short, professional or casual), pin and rename conversations, and even share conversations via a public link. Like ChatGPT, Bard’s responses now appear in real-time, too, so you don’t have to wait for the complete answer to start reading it.

Google Bard marketing image

(Image credit: Google)

Improved citations are meant to address the issue of misinformation and plagiarism. Bard will annotate a line of code or text that needs a citation, then underline the cited part and link to the source material. You can also easily double-check its answers by hitting the ‘Google It’ shortcut.

It works with images as well: you can upload pictures with Google Lens and see Google Search image results in Bard’s responses.

Bard has also been integrated into a range of Google apps and services, allowing you to deploy its abilities without leaving what you’re working on. It can work directly with English text in Gmail, Docs and Drive, for example, allowing you to summarize your writing in situ.

Similarly, it can interact with info from the likes of Maps and even YouTube. As of November, Bard now has the limited ability to understand the contents of certain YouTube videos, making it quicker and easier for you to extract the information you need.

What will Google Bard do in future?

A huge new feature coming soon is the ability for Google Bard to create generative images from text. This feature, a collaborative effort between Google and Adobe, will be supported by the Content Authenticity Initiative's open-source Content Credentials technology, which will bring transparency to images that are generated through this integration.

The whole project is made possible by Adobe Firefly, a family of creative generative AI models that will make use of Bard's conversational AI service to power text-to-image capabilities. Users can then take these AI-generated images and further edit them in Adobe Express.

Otherwise, expect to see Bard support more languages and integrations with greater accuracy and efficiency, as Google continues to train its ability to generate responses.

Google Bard vs ChatGPT: what’s the difference?

Fundamentally the chatbot is based on similar technology to ChatGPT, with even more tools and features coming that will close the gap between Google Bard and ChatGPT.

Both Bard and ChatGPT are chatbots that are built on 'large language models', which are machine learning algorithms that have a wide range of talents including text generation, translation, and answering prompts based on the vast datasets that they've been trained on.
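
To make the “text generation” part a little more concrete, here's a deliberately tiny toy example of the underlying principle: predicting the next word from the words that came before. Real large language models like the ones behind Bard and ChatGPT use enormous neural networks trained on vast datasets rather than simple word counts, but the predict-and-sample loop below captures the basic shape of the idea.

```python
# Toy illustration of next-word prediction (not how Bard or ChatGPT actually work).
# A real LLM replaces these word counts with a neural network over tokens.

import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word tends to follow which.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def generate(start: str, length: int = 6) -> str:
    """Repeatedly sample a likely next word, one step at a time."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and"
```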

A laptop screen showing the landing page for ChatGPT Plus

(Image credit: OpenAI)

The two chatbots, or “experimental conversational AI service” as Google calls Bard, are also fine-tuned using human interactions to guide them towards desirable responses. 

One difference between the two, though, is that the free version of ChatGPT isn't connected to the internet – unless you use a third-party plugin. That means it has a very limited knowledge of facts or events after January 2022. 

If you want ChatGPT to search the web for answers in real time, you currently need to join the waitlist for ChatGPT Plus, a paid tier which costs $20 a month. Besides the more advanced GPT-4 model, subscribers can use Browse with Bing. OpenAI has said that all users will get access “soon”, but hasn't indicated a specific date.

Bard, on the other hand, is free to use and features web connectivity as standard. As well as the product integrations mentioned above, Google is also working on Search Generative Experience, which builds Bard directly into Google Search.

Does Google Bard only do text answers?

Google's Bard initially only answered text prompts with its own written replies, similar to ChatGPT. But one of the biggest recent changes to Bard is its multimodal functionality, which allows the chatbot to answer user prompts and questions with both text and images.

Users can also go the other way: Bard works with Google Lens, so you can upload images and have the chatbot respond in text. Multimodal functionality is a feature that was hinted at for both GPT-4 and Bing Chat, and now Google Bard users can actually use it. And of course, there's also Google Bard's upcoming AI image generator, which will be powered by Adobe Firefly.
