The ChatGPT ‘Sky’ assistant wasn’t a deliberate copy of Scarlett Johansson’s voice, OpenAI claims

OpenAI's high-profile run-in with Scarlett Johansson is turning into a sci-fi story to rival the movie Her, and now it's taken another turn, with OpenAI sharing documents and an updated blog post suggesting that the 'Sky' voice in the ChatGPT app wasn't a deliberate attempt to copy the actress's voice.

OpenAI preemptively pulled its 'Sky' voice option in the ChatGPT app on May 19, just before Scarlett Johansson publicly expressed her “disbelief” at how “eerily similar” it sounded to her own (in a statement shared with NPR). The actress also revealed that OpenAI CEO Sam Altman had previously approached her twice to license her voice for the app, and that she'd declined on both occasions. 

But now OpenAI is on the defensive, sharing documents with The Washington Post suggesting that its casting process for the various voices in the ChatGPT app was kept entirely separate from its reported approaches to Johansson.

The documents, recordings and interviews with people involved in the process suggest that “an actress was hired to create the Sky voice months before Altman contacted Johansson”, according to The Washington Post. 

The agent of the actress chosen for the Sky voice also apparently confirmed that "neither Johansson nor the movie 'Her' were ever mentioned by OpenAI" during the process, nor was the actress's natural speaking voice tweaked to sound more like Johansson.

OpenAI's lead for AI model behavior, Joanne Jang, also shared more details with The Washington Post on how the voices were cast. Jang stated that she “kept a tight tent” around the AI voices project and that Altman was “not intimately involved” in the decision-making process, as he was “on his world tour during much of the casting process”.

Clearly, this case is likely to rumble on, but one thing's for sure – we won't be seeing ChatGPT's 'Sky' voice reappear for some time, if at all, despite the vocal protestations and petitions of its many fans.

What happens next?

OpenAI logo on a wall (Image credit: Shutterstock.com / rafapress)

With Johansson now reportedly lawyering up in her battle with OpenAI, the case looks likely to continue for some time.

Interestingly, the case isn't completely without precedent, despite the involvement of new tech. As noted by Mitch Glazier (chief executive of the Recording Industry Association of America), there was a similar case in the 1980s involving Bette Midler and the Ford Motor Company.

After Midler declined Ford's request to use her voice in a series of ads, Ford hired an impersonator instead – which resulted in a legal battle that Midler ultimately won, after a US court found that her voice was distinctive and should be protected against unauthorized use.

OpenAI is now seemingly distancing itself from suggestions that it deliberately did something similar with Johansson in its ChatGPT app, highlighting that its casting process started before Altman's apparent approaches to the actress. 

This all follows an update to OpenAI's blog post, which included a statement from CEO Sam Altman claiming: “The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”

But Altman's post on X (formerly Twitter) just before OpenAI's launch of GPT-4o, which simply stated “her”, doesn't help distance the company from suggestions that it was attempting to recreate the famous movie in some form, regardless of how explicit that was in its casting process. 


Google reveals new video-generation AI tool, Veo, which it claims is the ‘most capable’ yet – and even Donald Glover loves it

Google has unveiled its latest video-generation AI tool, named Veo, at its Google I/O live event. Veo is described as offering “improved consistency, quality, and output resolution” compared to previous models.

Generating video content with AI is nothing new; tools like Synthesia, Colossyan, and Lumiere have been around for a little while now, riding the wave of generative AI's current popularity. Veo is only the latest offering, but it promises to deliver a more advanced video-generation experience than ever before.

Donald Glover invited Google to his creative studio at Gilga Farm, California, to make a short film together. (Image credit: Google)

To showcase Veo, Google recruited a gang of software engineers and film creatives, led by actor, musician, writer, and director Donald Glover (of Community and Atlanta fame) to produce a short film together. The film wasn't actually shown at I/O, but Google promises that it's “coming soon”.

As someone who is simultaneously dubious of generative AI in the arts and also a big fan of Glover's work (Awaken, My Love! is in my personal top five albums of all time), I'm cautiously excited to see it.

Eye spy

Glover praises Veo's capabilities on the basis of speed: the tool isn't a replacement for human ideas, but rather something creatives can use to "make mistakes faster", as he puts it.

The flexibility of Veo's prompt reading is a key point here. It's capable of understanding prompts in text, image, or video format, paying attention to important details like cinematic style, camera positioning (for example, a bird's-eye-view shot or a fast tracking shot), time elapsed on camera, and lighting types. It also has an improved capability to accurately and consistently render objects and how they interact with their surroundings.

Google DeepMind CEO Demis Hassabis demonstrated this with a clip of a car speeding through a dystopian cyberpunk city.

The more detail you provide in your prompt material, the better the output becomes. (Image credit: Google)

It can also be used for things like storyboarding and editing, potentially augmenting the work of existing filmmakers. While working with Glover, Google DeepMind research scientist Kory Mathewson explains how Veo allows creatives to “visualize things on a timescale that's ten or a hundred times faster than before”, accelerating the creative process by using generative AI for planning purposes.

Veo will be debuting as part of a new experimental tool called VideoFX, which will be available soon for beta testers in Google Labs.


Meta is on the brink of releasing AI models it claims have "human-level cognition" – hinting at new models capable of more than simple conversations

We could be on the cusp of a whole new realm of AI large language models and chatbots thanks to Meta’s Llama 3 and OpenAI’s GPT-5, as both companies emphasize the hard work going into making these bots more human. 

At an event earlier this week, Meta reiterated that Llama 3 will be rolling out to the public in the coming weeks, with Meta's president of global affairs Nick Clegg stating: "Within the next month, actually less, hopefully in a very short period, we hope to start rolling out our new suite of next-generation foundation models, Llama 3."

Meta’s large language models are publicly available, allowing developers and researchers free and open access to the tech to create their bots or conduct research on various aspects of artificial intelligence. The models are trained on a plethora of text-based information, and Llama 3 promises much more impressive capabilities than the current model. 

No official date for Meta’s Llama 3 or OpenAI’s GPT-5 has been announced just yet, but we can safely assume the models will make an appearance in the coming weeks. 

Smarten Up 

Joelle Pineau, the vice president of AI research at Meta, noted that "We are hard at work in figuring out how to get these models not just to talk, but actually to reason, to plan . . . to have memory." OpenAI's chief operating officer Brad Lightcap told the Financial Times in an interview that the next GPT version would show progress in solving difficult queries with reasoning.

So, it seems the next big push with these AI bots will be introducing the human element of reasoning and, for lack of a better term, 'thinking'. Lightcap also said: "We're going to start to see AI that can take on more complex tasks in a more sophisticated way," adding, "We're just starting to scratch the surface on the ability that these models have to reason."

As tech companies like OpenAI and Meta continue working on more sophisticated and 'lifelike' AI interfaces, it's both exciting and somewhat unnerving to think about a chatbot that can 'think' with reason and memory. Tools like Midjourney and Sora have demonstrated just how good AI can be in terms of output quality, and Google Gemini and ChatGPT are great examples of how helpful text-based bots can be in everyday life.

With so many ethical and moral concerns around the current crop of tools still unaddressed, I dread to think what kind of nefarious things could be done with more human-like AI models. Plus, you must admit it's all starting to feel a little bit like the start of a sci-fi horror story.


ChatGPT gets a big new rival as Anthropic claims its Claude 3 AIs beat it

AI company Anthropic is previewing its new “family” of Claude 3 models it claims can outperform Google’s Gemini and OpenAI’s ChatGPT across multiple benchmarks.

This group consists of three AIs with varying degrees of “capability”. You have Claude 3 Haiku down at the bottom, followed by Claude 3 Sonnet, and then there’s Claude 3 Opus as the top dog. Anthropic claims the trio delivers “powerful performance” across the board due to their multimodality, improved level of accuracy, better understanding of context, and speed. What’s also notable about the trio is they’ll be more willing to answer tough questions. 

Anthropic explains that older versions of Claude would sometimes refuse to answer prompts that pushed the boundaries of its safety guardrails. Now, the Claude 3 family takes a more nuanced approach to its responses, allowing the models to answer those trickier questions.

Despite the all-around performance boost, much of the announcement focuses on Opus as the best of the three in all of these areas. Anthropic goes so far as to say the model "exhibits near-human levels of comprehension… [for] complex tasks".

Specialized AIs

To test it, Anthropic put Opus through a "Needle in a Haystack" (NIAH) evaluation to see how well it's able to recall data. As it turns out, it's pretty good, since the AI could remember information with almost perfect detail. The company goes on to claim that Opus is quite the smart cookie, able to solve math problems, generate computer code, and display better reasoning than GPT-4.
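
For readers curious about what a needle-in-a-haystack test actually involves, here's a minimal Python sketch of the general technique, written against Anthropic's publicly available anthropic SDK. The model ID, filler text, secret 'needle' phrase, and the run_niah_check helper are our own illustrative assumptions, not Anthropic's internal evaluation harness.

# Minimal needle-in-a-haystack (NIAH) style check using the anthropic SDK.
# Assumes ANTHROPIC_API_KEY is set in the environment; the model ID and
# prompt wording are illustrative, not Anthropic's actual test harness.
import anthropic

FILLER = "The quick brown fox jumps over the lazy dog. " * 2000  # the 'haystack'
NEEDLE = "The secret passphrase for the vault is cobalt-giraffe-42."

def run_niah_check(needle_position: float = 0.5) -> bool:
    """Bury the needle at a relative depth in the filler text, then ask the
    model to retrieve it; return True if the answer contains the needle."""
    cut = int(len(FILLER) * needle_position)
    haystack = FILLER[:cut] + "\n" + NEEDLE + "\n" + FILLER[cut:]

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-opus-20240229",  # assumed model identifier
        max_tokens=100,
        messages=[{
            "role": "user",
            "content": haystack + "\n\nWhat is the secret passphrase for the vault?",
        }],
    )
    return "cobalt-giraffe-42" in response.content[0].text

# Probing several depths shows whether recall holds across the whole context.
for pos in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"needle at {pos:.0%}: {'recalled' if run_niah_check(pos) else 'missed'}")

The point of sliding the needle to different depths is to check that recall doesn't fall away at particular positions in a very long prompt, which is the kind of behavior Anthropic says Opus handles with near-perfect accuracy.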

The technology isn't without its quirks. Even though Anthropic states its AIs have improved accuracy, there's still the problem of hallucinations: the responses the models churn out may contain wrong information, although Anthropic says these errors are greatly reduced compared to Claude 2.1. Plus, Opus is a little slow when it comes to answering a question, with speeds comparable to Claude 2.

Of course, this isn't to say Haiku or Sonnet are lesser than Opus, as each has its own use cases. Haiku, for example, is great at giving quick replies and grabbing information "from unstructured data", although it's not as good at answering math questions as Opus. Sonnet is a larger-scale model meant to help people save time on menial tasks and even parse lines of "text from images", while Opus is ideal for large-scale operations.

Changing the internet

Both Sonnet and Opus are currently available for purchase, although there is a free version of Claude on the company's website. A launch date wasn't given for Haiku, but Anthropic says it'll be released soon.

As you can probably guess, the Claude 3 trio is aimed more at businesses looking to automate certain workloads, so your experience with the group will likely come in the form of an online chatbot. Amazon recently announced that it's bringing Anthropic's new AIs to AWS (Amazon Web Services), giving businesses on the platform a way to create a customized Claude 3 experience to suit the needs of their brands and customers.
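
To give a flavor of what that integration looks like for developers, here's a rough sketch of calling a Claude 3 model through Amazon Bedrock using boto3. The region, model ID, and request shape below follow Bedrock's Anthropic messages format as we understand it, so treat them as assumptions to verify against AWS's documentation rather than a definitive recipe.

# Rough sketch: invoking Claude 3 Sonnet via Amazon Bedrock with boto3.
# Region and model ID are example values; your AWS account needs to have
# been granted access to Anthropic's models in Bedrock beforehand.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

request_body = {
    "anthropic_version": "bedrock-2023-05-31",  # version string Bedrock expects for Claude
    "max_tokens": 256,
    "messages": [
        {"role": "user", "content": "Summarize our returns policy in two sentences."}
    ],
}

response = client.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # assumed Bedrock model ID
    body=json.dumps(request_body),
)

payload = json.loads(response["body"].read())
print(payload["content"][0]["text"])

In practice, a brand would wrap a call like this behind its own chat front end, layering its product data and guardrails on top of the base model.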

If you're looking for a model suited for everyday use, check out TechRadar's list of the best AI content generators for 2024.


Microsoft claims ChatGPT 4 will be able to make videos, and this won’t end well

ChatGPT 4 is coming as early as next week, and will likely arrive with a new and potentially dreadful feature: video.

Currently, ChatGPT and Microsoft's updated Bing search engine are powered by the GPT-3.5 family of large language models, which allows them to respond to questions in a human-like way. But both AI implementations have had their fair share of problems so far, so what can we expect, or at least hope to see, with a new version on the horizon?

According to Microsoft Germany’s CTO, Andreas Braun (as reported by Neowin), the company “will introduce GPT 4 next week, where we will have multimodal models that will offer completely different possibilities – for example, videos.” Braun made the comments during an event titled ‘AI in Focus – Digital Kickoff’. 

Essentially, AI definitely isn't going away anytime soon. In its current state, we can interact with OpenAI's chatbot strictly through text, typing in prompts and getting conversational, mostly helpful, answers in return.

So the idea of ChatGPT-powered chatbots, like the one in Bing, being able to reply in mediums other than plain text is certainly exciting – but it also fills me with a bit of dread.

As I mentioned earlier, ChatGPT's early days were marked by some strange and controversial responses. The chatbot in Bing, for example, not only gave out incorrect information, but then argued with the user who pointed out its mistakes, causing Microsoft to hastily intervene and limit the number of responses it could provide in a single chat (a limit that Microsoft is only now slowly raising again).

If we start seeing a similar streak of weirdness with videos, there could be even more concerning repercussions.

Ethics of AI

In a world where AI-generated 'deepfake' videos are an increasing concern for many people, especially those who unwittingly find themselves starring in them, the idea of ChatGPT dipping its toes into video creation is a bit worrying.

If people could ask ChatGPT to create a video starring a famous person, that celebrity would likely feel violated. While I'm sure many companies using ChatGPT 4, such as Microsoft, will try to limit or ban pornographic or violent requests, the fact that the underlying technology is so widely accessible could mean unscrupulous users still find ways to abuse it.

There's also the matter of copyright infringement. AI-generated art has come under close scrutiny over where its training samples come from, and this will likely be the case with videos as well. Content creators, directors and streamers will likely take a dim view of their work being used in AI-generated videos, especially if those videos are controversial or harmful.

AI, especially ChatGPT, which only launched a few months ago, is still in its infancy, and while its potential has yet to be fully realized, neither have the moral implications of what it can achieve. So, while Microsoft's boast about video coming soon to ChatGPT is impressive and exciting, the company also needs to be careful and make sure both users and original content creators are looked after.


Microsoft’s faster Windows 11 Update speed claims just don’t add up

As part of Microsoft's attempts to get people to upgrade to Windows 11, the company claimed that one of the benefits of the new operating system would be faster Windows updates – but many users are complaining that those promised speed increases have failed to materialize.

As an article on Windows Report explains, many users have found that Windows 11 updates still take too long despite Microsoft's claims, and they're complaining publicly on sites such as Reddit.

In our own experience of using Windows 11, we’ve not noticed updates downloading or installing any faster, and along with these user complaints, it seems like Microsoft may have overstated the improvements to Windows update speeds.


Analysis: Come on, Microsoft

There's a lot to like about the new operating system – check out our Windows 11 review to see what we think – but Microsoft also has its work cut out to convince people to upgrade. The promise of faster updates was certainly alluring (no one likes to sit around waiting while their PC installs an update), but Microsoft also needs to be careful about over-hyping improvements.

If it talks about faster update speeds, then Microsoft needs to deliver noticeable improvements. If many users feel like they aren’t getting what they were promised, they won’t be happy – and they’ll make their unhappiness known in public.

The good news is that it's still early days for Windows 11 (even though we've already begun hearing rumors about Windows 12), so we expect Microsoft to continue updating and improving the operating system.

That means we could see those promised update speeds arrive later, or at least current speeds improve. Having Windows 11 installed on modern hardware, such as an NVMe SSD, also seems to help speed up the update process.

But Microsoft needs to ensure that it doesn't over-promise and under-deliver, no matter what hardware people are using. If it does, then Windows 11's reputation could suffer serious damage.


Barely anyone has upgraded to Windows 11, survey claims

It's now been over a month since Microsoft released the latest version of Windows, but a new survey suggests that less than one percent of PC users have upgraded to Windows 11.

According to new research from the IT asset management firm Lansweeper, just 0.21% of PC users are currently running Windows 11, despite the fact that it's available as a free upgrade for Windows 10 users.

The company's recent investigation used data from more than 10 million Windows devices running on business and home networks, and found that Windows 11 is only the fifth most popular Windows operating system. In fact, more PCs are running Windows XP (3.62%) and even Windows 8 (0.95%) than are running Windows 11.

One of the reasons could be Microsoft's TPM requirements, as many systems lack the necessary hardware to run Windows 11.

End of Life operating systems

Lansweeper's report also shows that almost 1 in 10 (9.93%) of the Windows devices it scanned are running End of Life operating systems, including Windows XP and Windows 7, which Microsoft stopped supporting back in 2014 and 2020 respectively.

Lansweeper's chief marketing officer, Roel Decneut, provided further insight into the dangers and security risks of running End of Life operating systems in a press release, saying:

“The situation poses a significant cybersecurity risk as Microsoft no longer provides bug-fixes or security patches for Windows Vista, 2000, XP, and 7. Although the majority of users are on newer operating systems, the billions of active Windows devices worldwide means there could still be millions of people using devices that are insecure and open to attack. Plus, a large number of these outdated systems are predicted to be running on enterprise devices, which means it’s not just personal information that’s on the line.” 

While some individuals and businesses may not be ready to upgrade to Windows 11 just yet, running an older version of Windows that is no longer receiving security updates from Microsoft can put your PC at a much higher risk of falling victim to malware and other cyberattacks.

Looking to upgrade your systems for Windows 11? Check out our roundup of the best business computers, as well as our lists of the best business laptops, best workstations, and best mobile workstations.
