OpenAI announces new Safety and Security Committee as the AI race hots up and concerns grow around ethics

OpenAI, the tech company behind ChatGPT, has announced the formation of a ‘Safety and Security Committee’ intended to make the firm’s approach to AI safety and security more responsible and consistent.

It’s no secret that OpenAI and CEO Sam Altman – who will sit on the committee – want to be the first to reach AGI (Artificial General Intelligence), broadly defined as artificial intelligence that resembles human-level intelligence and can teach itself. Having recently debuted GPT-4o to the public, OpenAI is already training the next-generation GPT model, which it expects to be one step closer to AGI.

GPT-4o debuted to the public on May 13 as a next-level multimodal generative AI model – one capable of processing input and responding in multiple ‘modes’, including audio, text, and images. It was met with a generally positive reception, but discussion has since grown around its actual capabilities and implications, and the ethics of technologies like it.

Just over a week ago, OpenAI confirmed to Wired that its previous team responsible for overseeing the safety of its AI models had been disbanded and reabsorbed into other existing teams. This followed the notable departures of key company figures like OpenAI co-founder and chief scientist Ilya Sutskever, and Jan Leike, co-lead of the AI safety ‘superalignment’ team. Their departures were reportedly tied to concerns that OpenAI, and Altman in particular, was not doing enough to develop its technologies responsibly and was forgoing due diligence.

This has seemingly given OpenAI a lot to reflect on and it’s formed the oversight committee in response. In the announcement post about the committee being formed, OpenAI also states that it welcomes a ‘robust debate at this important moment.’ The first job of the committee will be to “evaluate and further develop OpenAI’s processes and safeguards” over the next 90 days, and then share recommendations with the company’s board.


(Image credit: Bild von Free Photos auf Pixabay)

What happens after the 90 days? 

The recommendations that are subsequently agreed upon to be adopted will be shared publicly “in a manner that is consistent with safety and security.”

The committee will be made up of Chairman Bret Taylor, Quora CEO Adam D’Angelo, and Nicole Seligman, a former executive at Sony Entertainment, alongside six OpenAI employees, including Sam Altman (as mentioned) and John Schulman, an OpenAI researcher and cofounder. According to Bloomberg, OpenAI stated that it will also consult external experts as part of this process.

I’ll reserve my judgment for when OpenAI’s adopted recommendations are published, and I can see how they’re implemented, but intuitively, I don’t have the greatest confidence that OpenAI (or any major tech firm) is prioritizing safety and ethics as much as they are trying to win the AI race.

That’s a shame. Generally speaking, those striving to be the best at any cost are often slow to consider the effects of their actions and how they might impact others in a very real way, even when large numbers of people could be affected.

I’d be happy to be proven wrong, and I hope I am. In an ideal world, all tech companies, whether they’re in the AI race or not, would prioritize the ethics and safety of what they’re doing at the same level that they strive for innovation. So far in the realm of AI, that does not appear to be the case from where I’m standing, and unless there are real consequences, I don’t see companies like OpenAI being swayed to change their overall ethos or behavior.


TechRadar – All the latest technology news


It looks like Android 13 might include an important Apple AirTag safety feature

Google might be working on its own Bluetooth tracker detection software for Android smartphones, according to the latest reports.

Bluetooth trackers like the Tile Mate and Apple AirTag have become increasingly popular over the last couple of years. By using Bluetooth connections and an army of phones hooked up to the Tile or Apple Find My network, these tags can help users find lost objects at home and out in the wider world.

Unfortunately, plenty of bad actors use these same devices to stalk unsuspecting individuals.

Tile and Apple have introduced various safety measures to reduce the risk their devices pose, but there are still issues with the current system. The main problem is that, for Android phone users, the free Tile and Apple Tracker Detect apps don’t offer automatic detection – you have to manually initiate searches in each individual app.

Now it looks like Google is taking matters into its own hands, according to a 9to5Google report. The site details lines of code found in a Google APK recently uploaded to the Play Store that reference tag detection for devices named ‘Tile tag’ and ‘ATag’ (likely referring to AirTags).

The code is still fairly bare-bones right now, but it strongly suggests that Google is working on in-built tracker detection for Android. 

It’s not clear if this detection can be set to run automatically – though this should absolutely be an option – but this feature would at least give Android device users a pre-installed one-stop-shop to check if they're being stalked by unknown trackers.

We don't know when this feature will be available, but it could drop fairly soon. There’s a chance that this tracker detection will be available later this year when Android 13 launches, and it might even launch as part of the next Android 13 beta. We’ll have to wait and see what Google announces. 

Even though there are early signs that the feature is being developed, there’s no guarantee that it will ever see the light of day.
