Tuesday, October 22, 2024

AI Seoul Summit: 27 Nations and the EU to Establish Boundaries for AI Risk

More than two dozen countries and the European Union have pledged to establish shared risk thresholds for frontier artificial intelligence (AI) models. The commitment, made at the AI Seoul Summit, aims to limit the potentially harmful impacts of AI while promoting safety, innovation, and inclusivity.

Signatories of the Seoul ministerial statement agreed to deepen international cooperation on AI safety, including collectively setting thresholds for severe AI model risks, establishing interoperable risk management frameworks, and promoting external evaluations of AI models. The statement highlights the danger of AI models evading human oversight and of AI assisting non-state actors in developing chemical or biological weapons. It also calls for AI safety institutes to share best practices and evaluation data sets, and for adherence to relevant international laws and resolutions.

The UK digital secretary, Michelle Donelan, described the agreement as the start of "phase two of the AI safety agenda," in which countries will take concrete steps to become more resilient to AI risks.

The statement further stresses the importance of innovation, inclusivity, and sustainability in AI development, calling for measures to address AI's environmental footprint and to promote workforce upskilling and reskilling. The governments involved committed to promoting AI-related education and to using AI to tackle global challenges.

Separately, the EU and a group of 10 countries signed the Seoul Declaration, emphasizing multi-stakeholder collaboration, and committed to signing the Seoul Statement of Intent to ensure interoperability between AI safety research institutes. In addition, 16 global AI firms voluntarily signed the Frontier AI Safety Commitments, pledging to assess risks, set risk thresholds, implement mitigations, and invest in safety evaluation capabilities. The aim is to ensure that AI development does not pose unacceptable risks to public safety.