Monday, October 21, 2024

AI companies cannot be relied on to willingly disclose risk information

Current and former employees of artificial intelligence (AI) companies are calling for stronger whistleblower protections, arguing that the companies cannot be trusted to voluntarily share information about their systems' capabilities and risks.

At the second global AI summit, held in Seoul, 16 companies signed the Frontier AI Safety Commitments, a set of voluntary measures aimed at developing AI technology safely. However, employees of signatory companies, including OpenAI, Anthropic, and Google DeepMind, argue that these voluntary arrangements are insufficient for effective oversight.

The employees point to broad confidentiality agreements that prevent them from speaking out, and to the absence of protections for disclosing risks not yet covered by regulation. They are calling for greater transparency and accountability from AI companies, including the ability to raise concerns anonymously and without fear of retaliation.

Prominent AI researchers, including Stuart Russell, Geoffrey Hinton, and Yoshua Bengio, have backed these calls for increased transparency and accountability. Bengio, who was selected to lead the first international State of the Science report on frontier AI, emphasized that formal regulatory measures are needed to supplement the companies' voluntary commitments.

While OpenAI reiterated its commitment to building safe AI systems and engaging with stakeholders, the employees stressed that confidential business information can be protected while still allowing risk-related concerns to be reported. Google DeepMind and Anthropic did not respond to requests for comment.

Overall, there is a growing call for stronger whistleblower protections and formal regulatory measures to ensure that AI companies prioritize safety and accountability as they develop these systems.