Dozens of governments and companies reiterated their commitments to the safe and inclusive development of artificial intelligence (AI) at the second global AI summit, held in South Korea. However, critics raised concerns about the dominance of narrow corporate interests in the AI safety field and called for broader participation by the public, workers, and other affected parties. While the summit produced some concrete outcomes, further progress is needed in areas such as mandatory AI safety commitments, socio-technical evaluations, and enhanced public involvement.
The summit produced a range of agreements and pledges, including the Seoul Declaration, signed by the EU and 10 countries, which promotes multi-stakeholder collaboration on AI safety and the development of shared risk thresholds for frontier AI models. Companies also pledged to develop AI technology responsibly, but many observers questioned the effectiveness of voluntary commitments and stressed the need for binding rules and accountability mechanisms to guide AI regulation. International cooperation on AI safety research was a key focus of the summit, with efforts to foster collaboration between national safety institutes and to broaden the range of voices and perspectives involved in AI research and policymaking.