OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics

OpenAI, the tech company behind ChatGPT, has announced that it’s formed a ‘Safety and Security Committee’ intended to make the firm’s approach to AI more responsible and more consistent on safety and security.

It’s no secret that OpenAI and CEO Sam Altman – who will be on the committee – want to be the first to reach AGI (Artificial General Intelligence), broadly defined as artificial intelligence that resembles human intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing in multiple ‘modes’) generative AI model, able to accept and respond with audio, text, and images. It was met with a generally positive reception, but discussion has since grown around its actual capabilities, its implications, and the ethics of technologies like it.
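For those curious what ‘multimodal’ means in practice, here’s a minimal sketch of how a developer might send mixed text-and-image input to GPT-4o through OpenAI’s official Python SDK – the model name is real, but the prompt and image URL are hypothetical placeholders, and audio isn’t shown:

```python
# Minimal sketch: mixed text + image input to GPT-4o via OpenAI's Python SDK.
# The prompt and image URL below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe what's happening in this photo."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)

print(response.choices[0].message.content)  # the model's text reply
```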

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures: OpenAI co-founder and chief scientist Ilya Sutskever, and Jan Leike, co-lead of the AI safety ‘superalignment’ team. Their departures were reportedly related to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly, and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it has formed the oversight committee in response. In the announcement post about the committee, OpenAI also states that it welcomes a ‘robust debate at this important moment.’ The committee’s first job will be to “evaluate and further develop OpenAI’s processes and safeguards” over the next 90 days, and then to share its recommendations with the company’s board.

(Image credit: Free Photos on Pixabay)

What happens after the 90 days? 

The recommendations that the company subsequently agrees to adopt will be shared publicly “in a manner that is consistent with safety and security.”

The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D’Angelo, and Nicole Seligman, a former Sony Entertainment executive, alongside six OpenAI employees, including Sam Altman, as mentioned, and John Schulman, a researcher and OpenAI co-founder. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.

I’ll reserve my judgment for when OpenAI’s adopted recommendations are published and I can see how they’re implemented, but intuitively, I don’t have much confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as highly as winning the AI race.

That’s a shame. Generally speaking, those who strive to be the best no matter what are often slow to consider the costs and effects of their actions, and how they might impact others in a very real way – even when large numbers of people could be affected.

I’ll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies – whether they’re in the AI race or not – would prioritize the ethics and safety of what they’re doing at the same level that they strive for innovation. So far in the realm of AI, that doesn’t appear to be the case from where I’m standing, and unless there are real consequences, I don’t see companies like OpenAI being swayed much to change their overall ethos or behavior.


Google has a plan to save us from AI deepfakes during the US presidential race

Amid the rise in AI’s popularity, Google has decided that political ads that make use of artificial intelligence must clearly disclose when imagery or audio has been synthetically manipulated.

Campaigns that put out AI-generated ads on YouTube and any other Google platforms will have to show an obvious disclaimer that users are unlikely to miss, as reported by the Associated Press.

Experts have already been sounding off about the need for widespread regulation and for raising awareness among the wider public ahead of elections, and it seems they’re not the only ones with concerns.

When and where the new policy will kick in

Google made this political ads policy update last week, and it officially kicks into effect in mid-November. Google also announced that it will adopt similar policies for campaign ads ahead of elections in the European Union, India, South Africa, and other regions where it has a verification process in place.

Falsified media clips have become an everyday occurrence in political media, and generative AI tools are the newest way to produce them. Not only do these tools make it easier and faster to create misinformation, they also enable bad actors to mimic a person’s speech or appearance in photos and videos far more realistically.

AI-generated video has already been used by the political campaign of one of the leading contenders for the Republican nomination in the US, Gov. Ron DeSantis of Florida.

DeSantis’ campaign put out an ad that depicted his GOP opponent and Republican frontrunner, Donald Trump, warmly embracing Dr. Anthony Fauci, one of the chief medical experts who advised Trump during the COVID-19 pandemic. In a similar vein, the Republican National Committee (RNC) released a wholly AI-generated ad depicting an imagined future under Joe Biden.

Looking at AI and deepfakes on a federal level

In an effort echoing Google’s new policies, the Federal Election Commission (FEC) has begun looking at implementing regulation to moderate AI-generated ads such as ‘deepfakes’ (doctored videos and images of real people). Advocates on the issue say this should help steer voters away from misinformation. It’s easy to see how regulation of this sort could help – deepfakes can come in the form of political figures saying or doing things they never expressed in real life.

Democratic senator Amy Klobuchar is a co-sponsor of legislation that would impose requirements similar to Google’s policy: potentially deceptive AI-generated political ads would have to include disclaimers saying so. Sen. Klobuchar commented on Google’s policy in a statement praising the company’s move, but also said that “we can’t solely rely on voluntary commitments.”

Multiple states have already passed or have begun discussing legislation to address deepfake technology. 

This new policy does not mean all use of AI by political campaigns is banned – there are notable exceptions for alterations that don’t change the substance of the ad, such as using AI tools for media editing and quality improvement. The policy will apply largely to YouTube, along with the rest of Google’s platforms and the third-party sites within Google’s ad display network.

What are other tech giants' policies?

As of this week, Google is still the only platform to put a policy like this in place, in what looks like a proactive effort. I expect other social media platforms will have to follow if their existing policies are insufficient, especially if more widespread legislation comes into force.

Meta, parent company of Instagram and Facebook, doesn’t have an AI-specific policy, but it does have a general blanket policy against “faked, manipulated or transformed” audio and imagery used for misinformation. TikTok bans political ads altogether. The Associated Press reached out to X (formerly Twitter) last week for comment on the issue, but it seems the X team is a little busy just keeping the platform from falling apart, and didn’t issue a comment.

This is concerning. Right now, it’s still very much a wild west of sorts when it comes to the use of AI for political gain. I very much appreciate any proactive efforts, even from tech companies, because to me, it shows they’re thinking about the future – and not just about capturing audiences in the present.
