Tuesday, October 22, 2024

16 AI Companies at the AI Seoul Summit Voluntarily Commit to Ensuring Safety

During the AI Seoul Summit, the governments of the UK and South Korea secured voluntary commitments from 16 global artificial intelligence (AI) companies. Under these commitments, the companies pledge to develop AI technology safely and responsibly. The signatories include companies from the US, China, and the UAE.

The Frontier AI Safety Commitments outline the measures the companies will take to ensure transparency and accountability in their AI development. These include assessing risks throughout the entire AI lifecycle, setting risk thresholds for severe threats, implementing mitigations to prevent those thresholds from being breached, and investing in safety evaluation capabilities.

Furthermore, the companies have committed to involving external actors from government, civil society, and the public in the risk assessment process. They have also pledged to provide public transparency, although they may withhold certain information if it poses a disproportionate risk or reveals sensitive commercial information.

The signatories have affirmed their commitment to implementing industry best practices on AI safety. This includes conducting internal and external red-teaming of AI models, investing in cyber security and insider threat safeguards, incentivizing third-party discovery of vulnerabilities, prioritizing research on societal risks, and using AI to address global challenges.

All 16 companies have agreed to publish their safety frameworks ahead of the next AI Summit in France. UK Prime Minister Rishi Sunak called these commitments a “world first” and emphasized the importance of global standards on AI safety.

The voluntary commitments made in Seoul build upon those made during the UK government’s AI Safety Summit at Bletchley Park. The Bletchley Declaration, signed by the 28 governments in attendance, aimed to deepen cooperation on AI risks. Additionally, several AI companies agreed to undergo pre-deployment testing by the UK’s AI Safety Institute.

Yoshua Bengio, an AI researcher and member of the UN’s Scientific Advisory Board, welcomed the commitments made by the companies but believes they should be reinforced by regulatory measures. Beth Barnes, founder of the non-profit METR, emphasized the importance of establishing international agreements on AI safety.

While progress has been made, Politico reported that three major AI model developers have yet to provide the agreed pre-release access to the AI Safety Institute. The UK’s Department for Science, Innovation and Technology views the voluntary commitments made in Seoul as a historic first and believes they set a precedent for global standards. It also highlights the Seoul Statement of Intent toward International Cooperation on AI Safety Science, which will establish an international network to advance the science of AI safety.