Tuesday, October 22, 2024

AI Seoul Summit: 10 Nations and EU Renew Their Pledge for Safe and Inclusive AI

Ten governments and the European Union (EU) convened at the AI Seoul Summit in South Korea and have signed a joint declaration emphasizing their commitment to international cooperation on artificial intelligence (AI). The declaration highlights the importance of including a diverse range of voices in ongoing discussions about AI governance.

The Seoul Declaration, signed on May 21, 2024, builds upon the Bletchley Declaration signed six months prior by 28 governments and the EU at the UK’s AI Safety Summit. The Bletchley Declaration emphasized the need for an inclusive and human-centric approach to ensure the trustworthiness and safety of AI. It outlined the focus of international cooperation on identifying shared AI safety risks, developing a scientific understanding of these risks, and implementing risk-based governance policies.

While the Bletchley Declaration acknowledged the significance of inclusive action on AI safety, the Seoul Declaration, signed by 10 countries (Australia, Canada, EU, France, Germany, Italy, Japan, Republic of Korea, Republic of Singapore, UK, and US), explicitly affirms the importance of active collaboration among stakeholders in this field. The governments involved have committed to actively include a wide range of stakeholders in AI-related discussions.

Although government officials and representatives from the tech industry expressed positivity following the previous AI Summit, civil society and trade unions raised concerns about the exclusion of workers and other individuals directly impacted by AI. Over 100 organizations signed an open letter labeling the event “a missed opportunity.”

The latest Seoul Declaration reiterates many of the commitments made at Bletchley, primarily emphasizing the importance of enhancing international cooperation and ensuring responsible use of AI to protect human rights and the environment. It also reaffirms the commitment to developing risk-based governance approaches that are interoperable with each other. Additionally, it aims to expand the international network of scientific research bodies, such as the UK’s and US’s AI Safety Institutes, established during the previous Summit.

In line with this, the same 10 countries and the EU signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science. This agreement aims to foster complementarity and interoperability between publicly backed research institutes that have already been established, building upon the existing collaboration between the US and UK institutes.

The UK, which has been at the forefront of the global movement on AI safety since the Bletchley Summit, plans to establish new offices in San Francisco to collaborate with leading AI companies and tap into the Bay Area’s tech talent pool. The UK AI Safety Institute (AISI) recently released its first set of safety testing results, which highlighted the limitations of large language models (LLMs) and their vulnerability to breaches in safeguards.

However, the Ada Lovelace Institute (ALI) questioned the effectiveness of the AISI and the dominant approach to model evaluations in the AI safety field. ALI also raised concerns about the voluntary testing framework that restricts the Institute’s access to models without the agreement of companies. It suggested that current evaluation practices predominantly cater to the interests of companies rather than the public or regulators, with a focus on performance and safety issues that pose reputational risks rather than those with more significant societal impacts.