OpenAI, the tech company behind ChatGPT, has announced the formation of a ‘Safety and Security Committee’ intended to make the firm’s approach to AI development more responsible and consistent on safety and security.

It’s no secret that OpenAI and CEO Sam Altman – who will be on the committee – want to be the first to reach AGI (Artificial General Intelligence), broadly understood as AI that resembles human-level intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal (capable of processing in multiple ‘modes’) generative AI model, able to accept and respond with audio, text, and images. It was met with a generally positive reception, but discussion has since arisen around its actual capabilities, its implications, and the ethics of technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures like OpenAI co-founder and chief scientist Ilya Sutskever, and co-lead of the AI safety ‘superalignment’ team Jan Leike. Their departures were reportedly driven by concerns that OpenAI, and Altman in particular, was not developing its technologies responsibly and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on, and it has formed the oversight committee in response. In the announcement post about the committee, OpenAI also states that it welcomes a ‘robust debate at this important moment.’ The committee’s first job will be to “evaluate and further develop OpenAI’s processes and safeguards” over the next 90 days, and then share recommendations with the company’s board.

(Image credit: Bild von Free Photos auf Pixabay)

What happens after the 90 days? 

The recommendations that are subsequently agreed upon to be adopted will be shared publicly “in a manner that is consistent with safety and security.”

The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D’Angelo, and Nicole Seligman, a former Sony Entertainment executive, alongside six OpenAI employees, including Sam Altman (as mentioned) and John Schulman, a researcher and co-founder of OpenAI. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.

I’ll reserve my judgment until OpenAI’s adopted recommendations are published and I can see how they’re implemented, but intuitively, I don’t have the greatest confidence that OpenAI (or any major tech firm) prioritizes safety and ethics as highly as winning the AI race.

That’s a shame. Generally speaking, those striving to be the best no matter what are often slow to consider the costs and effects of their actions, and how they might impact others in a very real way – even when large numbers of people are potentially going to be affected.

I’ll be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they’re in the AI race or not, would prioritize the ethics and safety of what they’re doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I’m standing, and unless there are real consequences, I don’t see companies like OpenAI changing their overall ethos or behavior much.

TechRadar – All the latest technology news