Elon Musk says xAI is launching its first model and it could be a ChatGPT rival

Elon Musk’s artificial intelligence startup, xAI, will debut its long-awaited first AI model on Saturday, November 4.

The billionaire made the announcement on X (the platform formerly known as Twitter), stating that the tech will be released to a “select group” of people. He even boasted that “in some important respects, it is the best that currently exists.”

It’s been a while since we last heard anything from xAI. The startup hit the scene back in July, revealing it was run by a team of former engineers from Microsoft, Google, and even OpenAI. Shortly after the July 14 debut, Musk held a 90-minute Twitter Spaces chat in which he laid out his vision for the company, stating that his startup would seek to create “a good AGI with the overarching purpose of just trying to understand the universe”. He wants it to run contrary to what he sees as problematic tech from the likes of Microsoft and Google.

Yet another chatbot

AGI stands for artificial general intelligence: the concept of an AI with “intelligence” comparable to, or beyond, that of a normal human being. The problem is that it’s more an idea of what AI could become than a literal piece of technology. Even Wired, in its coverage of AGI, states there’s “no concrete definition of the term”.

So does this mean xAI will reveal some kind of super-smart model that will help humanity and hold conversations like something out of a sci-fi movie? No, but that could be the lofty end goal for Elon Musk and his team. We believe all we’ll see on November 4 is a simple chatbot like ChatGPT. Let’s call it “ChatX”, since the billionaire has an obsession with the letter “X”.

Does “ChatX” even stand a chance against the likes of Google Bard or ChatGPT? The latter has been around for almost a year now and has seen multiple updates, becoming more refined each time. Maybe xAI has solved the hallucination problem; that would be great to see. Unfortunately, it’s equally possible ChatX will just be another vehicle for Musk to spread his ideas and beliefs.

Analysis: A personal truth spinner

Musk has talked about wanting an alternative to ChatGPT that focuses on providing the “truth”, whatever that means. He has been a vocal critic of how fast companies have been developing their generative AI models with seemingly reckless abandon, even calling for a six-month pause on AI training in March. Obviously, that didn’t happen, and the technology has advanced by leaps and bounds since then.

It’s worth mentioning that Twitter, under Musk’s management, has been known to comply with government censorship requests from around the world, so Musk’s definition of “truth” seems dubious at best. Either way, we’ll know soon enough what the team’s intentions are. Just don’t get your hopes up.

While we have you, be sure to check out TechRadar's list of the best AI writers for 2023.

You might also like

TechRadar – All the latest technology news

Read More

Amazon says you might have to pay for Alexa’s AI features in the future

Amazon might be mulling a subscription charge for Alexa’s AI features at some point down the road – though that may not be for some time yet, by the sound of things.

This nugget of info emerged from a Bloomberg interview with Dave Limp, who is SVP of Amazon Devices & Services currently, though he is leaving the company later this year. (Whispers on the grapevine are that Panos Panay, a big-hitting exec who just left Microsoft, will replace Limp).

Bloomberg’s Dave Lee broadly observed that the future of Alexa could involve a more sophisticated AI assistant, but one that device owners would need to fork out to subscribe to.

This would be an avenue of monetization, given that the previous hope for spinning out some extra cash – having folks order more stuff online using Alexa, bolstering revenue that way – just hasn’t worked out for Amazon (not in any meaningful fashion, at least).

After Limp talked about Amazon pushing forward using generative AI to build out Alexa’s features, Lee fired out a question about whether there’ll come a time when those Alexa AI capabilities won’t be free – and are offered via a subscription instead.

Limp replied in no uncertain terms: “Yes, we absolutely think that,” noting the costs of training the AI model (properly), and then adding: “But before we would start charging customers for this – and I believe we will – it has to be remarkable.”

Analysis: Superhuman assistance?

Dave Limp (above) is currently SVP of Amazon Devices & Services, but is leaving the company later this year. (Image credit: Future / Lance Ulanoff)

So, there’s your weighty caveat. Limp makes it clear, in fact, that expectations would be built around the realization of a ‘superhuman’ assistant if Amazon was to charge for Alexa’s AI chops as outlined.

Limp clarified that Alexa, as it is now, almost certainly won’t be charged for, and that the contemporary Alexa will remain free. He also suggested that Amazon has no pricing scheme in mind yet for any future super-smart, AI-powered Alexa.

This means the paid-for Alexa AI skills we’re talking about would be highly prized and a long way down the road in development for Amazon’s assistant. This isn’t anything that will happen remotely soon, but it is, nonetheless, a clear enough signal that this is a path of monetization Amazon is fully considering traveling down. Eventually.

As to exactly what timeframe we might be talking about, Limp couldn’t be drawn to commit beyond it not being “decades” or “years” away, with the latter perhaps hinting that maybe this could happen sooner than we may imagine.

We think it’ll be a difficult sell for Amazon in the nearer-term, though. Especially as plans are currently being pushed through to shove adverts into Prime Video early next year, and you’ll have to pay to avoid watching those ads. (As a subscriber to Prime, even though you’re paying for the video streaming service – and other benefits – you’ll still get adverts unless you stump up an extra fee).

If Amazon is seen to be watering down the value proposition of its services too much, or trying to force a burden of monetization in too many different ways, that’ll always run the risk of provoking a negative reaction from customers. In short, if the future of a super-sophisticated Alexa is indeed paying for AI skills, we’re betting this won’t be anytime soon – and the results better be darn impressive.

We must admit, we have trouble visualizing the latter, too, especially when, as things currently stand, we can’t get Alexa to understand half the internet radio stations we want to listen to, a pretty basic duty for the assistant.

Okay, so Amazon did have some interesting new stuff to show off with Alexa’s development last week, but we remain skeptical on how that’ll pan out in the real-world, and obviously more so on how this new ‘superhuman’ assistant will be in the further future. In other words, we’ll keep heaping the salt on for the time being…


Snapchat AI shenanigans caused by glitch not sentience, says company

The introduction of Snapchat’s AI – called ‘My AI’ – has already been met with resistance and controversy, and new reports from multiple users about the chatbot taking videos without permission could add fuel to the fire.

Several users took to social media to share stories of the Snapchat AI taking videos of their ceilings and walls without any input from them, and then posting the clips as a live Story for all their followers to see. These are actions that should only be available to human users, not the AI, which is what set alarm bells ringing.

Users on X, formerly known as Twitter, shared their thoughts on the issue; some seemed worried, while others made light of the turn of events. CNN reported on other users sharing their concerns as well. “Why does My AI have a video of the wall and ceiling in their house as their story?” wrote one user. “This is very weird and honestly unsettling,” said another.

A spokesperson from Snap Inc. responded, confirming that the issue, which they said was quickly addressed, was just a glitch. “My AI experienced a temporary outage that’s now resolved,” according to the statement Snap gave to TechCrunch.

Snapchat announced its ChatGPT-powered AI chatbot back in February 2023, and it proved to be an unpopular addition once it launched to all users on April 20 of the same year. For instance, worldwide Google searches for how to delete Snapchat had increased by a staggering 488% by April 26. And the company itself warned users not to trust the feature with private and sensitive information, nor to expect it to provide accurate information in return, according to its own Snapchat Support page.

The AI wall 

While AI chat can be a fun and sometimes useful tool when used correctly, incidents like this are a reminder that the tech behind it is still in its early stages. And even more troublesome is the fact that companies are releasing this tech into the wild, knowing that it still has plenty of kinks to work out.

This isn’t the first time that a company has rushed out AI features that weren't ready for prime time – Google Bard’s launch saw employees mocking it, with even Google CEO Sundar Pichai admitting that it was like a “souped-up Civic” taking on “more powerful cars.”

And this definitely isn’t the first time that suddenly implemented AI features have garnered backlash from a user base – Discord had to backtrack on its reworded privacy policy regarding AI implementation and data collection.

It seems to be a wall that companies keep hitting in their race to integrate AI features into their websites and services. And as long as the AI craze is still going strong, we’ll likely keep seeing this same scenario repeat.


Google says its secret AI weapon could eventually outsmart ChatGPT

Google’s DeepMind laboratory is currently developing a new AI system called Gemini with claims it’ll rival, if not surpass, ChatGPT, according to a report from Wired.

In order to surpass ChatGPT, the developers plan on integrating techniques from an older “artificial intelligence program called AlphaGo” into the upcoming large language model (LLM). What’s special about AlphaGo is that it’s “based on a technique” known as reinforcement learning, in which the software tackles tough problems through sheer trial and error. As it makes “repeated attempts”, the AI uses the feedback it receives from each failure to improve its performance. DeepMind seeks to outfit Google’s future LLM with the ability to plan, or at the very least, solve complex problems.
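The trial-and-error loop described above can be sketched in a few lines of code. This is a generic toy illustration (a multi-armed bandit with epsilon-greedy exploration), not anything from DeepMind; the payout probabilities and parameters are invented for the example:

```python
import random

def run_bandit(payouts, episodes=2000, epsilon=0.1, seed=0):
    """Learn which slot-machine arm pays best via trial and error.

    payouts: true win probability of each arm (unknown to the agent).
    Returns the agent's learned value estimate for each arm.
    """
    rng = random.Random(seed)
    estimates = [0.0] * len(payouts)  # learned value per arm
    counts = [0] * len(payouts)       # pulls per arm

    for _ in range(episodes):
        # Occasionally explore a random arm; otherwise exploit the best so far
        if rng.random() < epsilon:
            arm = rng.randrange(len(payouts))
        else:
            arm = max(range(len(payouts)), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < payouts[arm] else 0.0  # feedback
        counts[arm] += 1
        # Nudge the estimate toward the observed reward (incremental average)
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

if __name__ == "__main__":
    values = run_bandit([0.2, 0.5, 0.8])
    print("best arm:", values.index(max(values)))
```

Over many attempts, the agent’s value estimates converge toward each arm’s true payout rate, so the highest estimate lands on the best arm – learned purely from trial, error, and reward feedback, with no one ever telling the agent the right answer.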

If you combine that with a generative AI’s ability to grab information from the internet and then reformat it into natural-sounding text, Gemini has the potential to be more intelligent than any other artificial intelligence in the world. At least, that’s the idea. DeepMind co-founder and CEO Demis Hassabis claims that “if done correctly, [Gemini] will be the most beneficial technology for humanity ever”. Bold words.

The AI is deep in development at the moment – “a process that will take a number of months”, according to Hassabis. It will also cost Google a ton of money, with the project’s price tag ranging from tens to hundreds of millions of dollars. For the sake of comparison, ChatGPT cost over $100 million to make.

Analysis: Too good to be true?

Gemini certainly sounds interesting, but at this stage, we’ll remain skeptical. Our chief concern is with AlphaGo itself.

If you don’t know, AlphaGo first came to prominence back in 2016 when it defeated a champion player at the board game Go, which is notorious for being incredibly complex and difficult despite its apparent simplicity. The AI was able to win thanks to the reinforcement learning technique mentioned earlier, as it was able to “explore and remember [all] possible moves”.

As interesting as that is, how does AlphaGo being good at a board game make it good at solving complex problems or generating content? A set of skills honed for one specific scenario won’t necessarily translate well into another field. Plus, is it a good idea to have a generative AI trial-and-error its way to an answer? AI hallucinations are already a problem. AlphaGo may well help Gemini improve faster; we just hope the growing pains aren’t made public.

Secondly, Hassabis’ statement of development taking mere months is concerning. When ChatGPT rose to prominence back in early 2023, Google quickly pumped out its own AI-powered chatbot Bard, a move that drew a lot of criticism from employees. Some labeled Bard as “a pathological liar” due to its sheer amount of misinformation. It was even referred to as “worse than useless.” Perhaps it would be a good idea for Google or DeepMind to extend the development cycle from months to years. Train Gemini for a while longer. After all, what’s the rush?

In the meantime, check out TechRadar's recently updated list of the best AI writers for 2023.


Apple Vision Pro might be lacking some features at launch, says leak

Apple's Vision Pro headset hasn't even gone on sale yet, and it might not do so for almost another year. But that was never going to stop Apple from working on what will follow it, and a recent report suggests there isn't just one, but two new headsets in the works.

Unfortunately for Vision Pro hopefuls, that same report also suggests that Apple will hold back some visionOS features until those successors are shared with the public – and worst of all, they're features that were originally penciled in for the Vision Pro's launch.

However, Apple appears to have chosen to delay those software features until the next round of hardware is ready, and that, among other things, could be enough to give potential buyers a reason to consider hanging fire – not that we imagine people are lining up to buy this insanely expensive device, even if it does turn out to be the best VR headset ever made.

Two is better than one

Writing in his weekly Power On newsletter, Bloomberg's Mark Gurman reports that Apple has not one, but two new versions of the Vision Pro headset in development already – one of which will be a lot cheaper. Apple only announced the Vision Pro at WWDC on June 5, but it has already moved some employees off that project and onto teams working on what comes next in Apple's AR/VR lineup.

We noted the two new Vision Pro models previously, but the latest report from Gurman suggests that new software features will debut with those updated models, rather than the first headset – even though that one isn't even releasing until 2024.

Gurman says that Apple is working on “The ability to show multiple Mac desktop screens when connected wirelessly to a Vision Pro,” whereas the first Vision Pro will only connect to a single desktop at launch. There's also the suggestion that Apple Fitness Plus will be integrated somehow, allowing headset wearers to work out while in an AR/VR world.

Finally, Gurman says that Apple also wants to offer “the ability for multiple Vision Pro users in a several-person FaceTime conference to use Personas.” The Vision Pro due to go on sale in the first half of 2024 will only allow one-on-one calls with Apple's haunting 3D avatars.

It's still too early to know when Apple will announce these new headsets, of course, and we don't yet know how much that cheaper model will cost. We can hopefully expect to learn more as the leaks roll in over the coming months.

It's a bit disappointing that Apple will apparently be holding back some features – it's particularly odd to be hearing about it now, when the first iteration of the headset is still more than six months away from release. We'd imagine there probably is enough time for Apple to implement those features, in fact, which makes the whole thing all the more disheartening.

In other words, we're probably going to hold off on dropping $3,499 on the Vision Pro next year – at least, unless Apple confirms these features will eventually make their way to the first-generation headset too.


Meta says its new speech-generating AI tool is too dangerous to release

Meta has unveiled a new AI tool, dubbed ‘Voicebox’, which it claims represents a breakthrough in AI-powered speech generation. However, the company won’t be unleashing it on the public just yet – because doing so could be disastrous.

Voicebox is currently able to produce audio clips of speech in six languages (all of them European in origin), and – according to a blog post from Meta – is the first AI model of its kind capable of completing tasks beyond what it was ‘specifically trained to accomplish’. Meta claims that Voicebox handily outperforms competing speech-generation AIs in virtually every area.

So what exactly is it capable of? Well, for starters, it can spew out reasonably accurate text-to-speech replications of a person’s voice using a sample audio file as short as two seconds, a seemingly innocuous ability that holds a huge amount of destructive potential in the wrong hands.

The dubious power of AI

Even setting aside the dodgy stuff that creeps on the internet have been doing with ChatGPT and other AI tools (Voicebox certainly sounds like it could be a boon for anyone making fake revenge porn), this is the sort of technology that could quite literally start a war.

After all, most major public figures, including politicians, have plenty of audio recordings floating around the internet. It wouldn’t be hard to collate some speech clips of an incumbent political leader and use Voicebox to produce a startlingly realistic replication of their voice – something that could then be used for nefarious purposes.

Big Zuck (sorry, ‘Meta CEO Mark Zuckerberg’) has been investing heavily in AI development at Meta for years now. (Image credit: Facebook)

Such tools exist already, of course, but they’re less convincing; you may have seen amusing videos on social media featuring the likes of Joe Biden, Donald Trump, and Barack Obama supposedly playing Fortnite together. It’s good for a laugh, but the audio is hardly convincing. It mimics the mannerisms of each presidential gamer enough that they’re recognizable, but not so well that anyone with a brain would actually believe it’s them.

Meta clearly believes its new tool is good enough to fool at least the majority of people, though – hence it’s explicitly not releasing Voicebox to the public, instead publishing a research paper and detailing a classifier tool that can distinguish Voicebox-generated speech from real human speech. Meta describes the classifier as “highly effective” – though notably not perfectly effective.

Speaking machines

Of course, while Meta is keen to stress that it recognizes the “potential for misuse and unintended harm” surrounding tools like Voicebox, it’s important not to lose sight of the potential benefits AI speech generation could have in the future.

Voicebox – befitting its name – could provide far more naturalistic speech to people who are mute or otherwise unable to communicate, removing some of the barriers to interaction caused by the existing text-to-speech ‘robot voice’ made famous by physicist Stephen Hawking. It could also perform real-time translation, bringing us one step closer to the sort of ‘universal translator’ devices that currently exist only in science fiction.

Instagram – which is owned by Meta – could prove to be a successful home for Voicebox, improving and translating videos for a wider audience. (Image credit: Shutterstock)

There are other applications too; smaller, but no less useful. Meta explains in its blog post that Voicebox can be used to edit and improve recorded speech. If you’ve recorded some audio but you mispronounced a word or were interrupted by background noise, Voicebox can isolate the offending segment and ‘re-record’ a snippet of speech using your voice. Impressive, and only slightly terrifying.

In any case, it’s good to see Meta taking a serious, considered approach here. Microsoft’s frantic eagerness to shove Bing AI into everything has landed it in hot water more than once, and OpenAI unleashing ChatGPT on the world has led to all sorts of weirdness over the past year. We’re in an AI gold rush, and these tools are making their way into every part of our lives.

A little caution, patience, and respect for the magnitude of this technology is a welcome sight – although I doubt Meta will sit on Voicebox for too long, since the shareholders will no doubt be wondering how much money it can make them…


Are you backing up your photos? This survey says most of us aren’t

Not enough of us are backing up our photos and videos – that’s the message from Mixbook following its survey on the photography habits of Americans. 

While there’s no shortage of photo cloud storage and cloud backup services, the popular online photo printing service revealed that just 35% of surveyed respondents regularly back up the photos on their camera roll.

The report also showed just how fleeting some photography is. Of those surveyed, 80% said they have pictures or videos on their phone that they haven't looked at since the day they took them. 

Gathering digital dust

We’re all taking more and more pictures and videos. High-resolution camera phones and a steady stream of photo editors and video editing apps have made it easier than ever before. Yet so much media is left gathering digital dust.  

Mixbook calls it “phlushing”, which is probably the ugliest word to be written this week. It’s the act of taking photos, then flushing them down the memory hole like they didn’t happen. Moments captured in time, and never seen again. It all sounds suspiciously like the time before everyone had a camera in their pocket. 

The data revealed that users store an average of 3,139 pictures and videos on their phones. But 55% of respondents admitted to not looking at their camera roll in the last year, and despite the best cloud storage providers storing years’ worth of media, users said they rarely went back to anything taken more than twelve months ago. The same proportion confessed to feeling overwhelmed by how many photos and videos were stored on their device. Perhaps it’s a problem more easily ignored – at least until the likes of Apple iCloud and Google One come knocking for a storage space subscription.

What actually happens to all those photos and videos? In 50% of cases, nothing at all. A further 30% of people share them with family and friends, while 17% post to social media. In a sign of the times, just 3% print them, either online or with a photo printer.

But the real concern is the 65% who aren’t regularly and securely storing their media – especially with so many ways to back up photos – whether they’re “phlushing” those images or not.


Amazon says even AI isn’t powerful enough to stop fake reviews

Amazon has renewed its war on fake reviews by developing new AI-powered tools to help tackle the problem, but the retail giant admits they aren't enough to solve the issue on their own.

In a new blog post, Dharmesh Mehta, who's Amazon's VP of Worldwide Selling Partner Services, writes “we must work together to stop the fake review brokers that are the source of most fake reviews”, calling on “private sector, consumer groups, and governments” to work together to stop the brokers.

What are these so-called 'fake review brokers'? Amazon says brokering has become an industry in recent years, and has “evolved in an attempt to evade detection”. Brokers work by approaching average consumers through websites, social media or encrypted messaging services and getting them to write fake reviews “in exchange for money, free products, or other incentives”.

Amazon says it's using increasingly sophisticated AI tools and machine learning to stem the tide. These fraud-detection programs apparently analyze thousands of data points, including sign-in activity and review history, to help spot fake reviews. The figures involved are pretty staggering; Amazon says it blocked over 200 million suspected fake reviews in 2022, and sued over 10,000 Facebook group administrators.

But Amazon's financial might and its increasingly sophisticated AI tools seemingly aren't enough to stop fake reviews. The retail giant says that because much of the misconduct happens outside of Amazon’s store “it can be more challenging to detect, prevent, and enforce these bad actors if we are acting alone”.

A hand holding an iPhone showing Amazon reviews (Image credit: Amazon)

So Amazon has made a three-point plan to get some extra help. Firstly, it wants there to be more cross-industry sharing about fake review brokers and their various tactics and techniques. Secondly, it wants governments and regulators to use their authority more to take action against bad actors. 

And lastly, in a veiled nudge at Meta and other social media giants, it's asked that “all sites that could be used to facilitate this illicit activity should have robust notice and takedown processes”. Amazon wants to work with “these companies” (read Facebook, WhatsApp, Signal and more) to help improve their detection methods.

Whether or not these three steps are realistic remains to be seen, but the message from Amazon is clear – it doesn't think it can stem the tide of fake reviews on its own, and that's a problem for all of us. Until that improves, it's more important than ever to follow advice on how to spot fake Amazon reviews during Prime Day and other big shopping events.

How to spot fake Amazon reviews

Sites like ReviewMeta (above) can help you weed out suspicious reviews from an Amazon product’s rating (Image credit: Future)

We've been highlighting the problem of fake Amazon reviews for over a decade, and it's clear that the issue has become a game of whack-a-mole – while Amazon's tools have improved, the retail giant admits that the “tactics of fake review brokers have also evolved in an attempt to try to evade detection”.

This is a big problem for the average online shopper – in the UK, the consumer group Which? says that around one in seven reviews are fake. And that means you can be misled into buying poor-quality products.

Mehta's blog post is a reminder that even the world's biggest tech giants, and the latest AI technology, aren't powerful enough to stop fake reviews. And that means we all need to be increasingly savvy when shopping online.

As our in-depth guide to spotting fake Amazon reviews highlights, there are some simple red flags to look out for in product reviews, including “overly promotional language, repeated reviews, and reviews for an entirely different product”.
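Some of those red flags can even be checked mechanically, at least in a crude way. As a purely illustrative sketch (this is not how ReviewMeta, FakeSpot, or Amazon's own tools work, and the buzzword list is invented), here's a naive filter that flags word-for-word duplicate reviews and reviews stuffed with promotional language:

```python
from collections import Counter

# Hypothetical promotional buzzwords; real tools use far richer signals
PROMO_WORDS = {"amazing", "best", "perfect", "must-buy", "life-changing"}

def flag_suspicious(reviews, promo_threshold=2):
    """Return indices of reviews that trip simple red-flag heuristics.

    Flags (a) word-for-word duplicate reviews and (b) reviews containing
    several promotional buzzwords. Crude, for illustration only.
    """
    texts = [r.strip().lower() for r in reviews]
    dupes = {t for t, n in Counter(texts).items() if n > 1}
    flagged = []
    for i, t in enumerate(texts):
        promo_hits = sum(1 for w in PROMO_WORDS if w in t)
        if t in dupes or promo_hits >= promo_threshold:
            flagged.append(i)
    return flagged

if __name__ == "__main__":
    sample = [
        "Amazing product, simply the best, a must-buy!",
        "Works fine, battery life is average.",
        "Great value for the price.",
        "Great value for the price.",
    ]
    # Flags index 0 (promotional language) plus 2 and 3 (duplicates)
    print(flag_suspicious(sample))
```

Real detection systems lean on far richer signals – reviewer history, timing patterns, and the sign-in data Amazon mentions – but the basic principle of scoring reviews against known red flags is the same.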

But there are also handy third-party tools like ReviewMeta and FakeSpot (which was recently bought by Firefox owner Mozilla) that use AI to help detect fake reviews and scams. These let you paste in an Amazon product URL to get an analysis of its reviews, or use a Chrome extension for a quick check.

While Amazon's three-point call-out for outside help is understandable, recent history suggests that progress is going to be slow – which means we'll all need to remain on guard when doing our online shopping, particularly during big events like Amazon Prime Day 2023.


Microsoft says it’s curtains for Cortana in Windows 11 (and 10) – but that’s no surprise

Microsoft has announced that it’s killing off Cortana, at least in Windows, where the assistant will be dropped in the not-so-distant future.

Windows Central reported on Microsoft’s revelation that the Cortana app will no longer be supported in Windows late in 2023 (as tipped by @Perbylund on Twitter).

However, the aged assistant will still remain in other Microsoft services, including various bits of Microsoft Teams and Outlook mobile, so Cortana hasn’t entirely been binned.

As for Windows 11, though, you won’t be needing Cortana anyway, because as Microsoft reminds us, the operating system already has elements in place to replace the digital assistant.

For voice controls, there’s now a comprehensive Voice Access feature, which Microsoft has been beavering away at honing of late.

And for queries and assistance, naturally there’s the new Bing AI (ChatGPT-powered bot) on tap, plus there’s something bigger in the pipeline – Copilot.

In case you missed it (somehow), Microsoft recently revealed it’s bringing Copilot to Windows 11 to sit at the heart of the operating system, offering help with whatever you’re doing.

Analysis: An inevitable move from Microsoft

While Copilot isn’t here yet, the AI will be far more extensive in terms of its scope than Cortana. When it comes to assistance with doing stuff in Windows 11, it’ll not just tell you about useful features for any given task, but offer to automatically enable them if needed. Copilot can also summarize a Word document, for example, Bing AI-style, and its far more wide-ranging skills and utility make Cortana irrelevant as a result.

Just on that basis, it’s no surprise to see Microsoft giving Cortana the elbow from Windows. Indeed, with Cortana getting cut off late in 2023, that’s perhaps a further hint that this is when Copilot will step up to be incorporated into Windows 11 – with the 23H2 update, maybe? We know Copilot will arrive in Windows 11 preview builds this month (or at least that’s what Microsoft has told us), so everything seems to be lining up.

Although, there’s always a chance that Copilot is such a big feature addition that Microsoft may want to save it for next-gen Windows (Windows 12, maybe), which is rumored to arrive next year.

Whatever the case, Cortana is not exactly going to be missed outside a niche of users, at least in Windows. Rather than making it an all-rounder, Microsoft had already angled Cortana more toward business-related use (hence Cortana remaining in Teams and so forth after being ditched from Windows). So none of this is a shock, and it just makes sense for Microsoft to eject Cortana with Copilot now incoming for Windows 11.
