Friday, June 13, 2025

Sweden Receives Assistance in Strengthening Its Sovereign AI Capabilities

MPs to Explore Possibility of Government Digital Identity Program

Cisco Live 2025: Essential Networks for the Future of AI

UK Finance Regulator Partners with Nvidia to Enable AI Experimentation for Firms

June Patch Tuesday Eases the Burden for Defenders

Labour Pledges £17.2 Million for Spärck AI Scholarship Program

Emerging Real-World AI Applications for SDVs, Yet Readiness Gaps Remain

Are We Normalizing Surveillance in Schools?

US Lawmakers Claim UK Has Overstepped by Challenging Apple’s Encryption Measures

Review of the AI Seoul Summit by Computer Weekly

Dozens of governments and companies reiterated their commitments to the safe and inclusive development of artificial intelligence (AI) at the second global AI summit in South Korea. However, concerns were raised that narrow corporate interests dominate the AI safety field, and that broader participation is needed from the public, workers and others affected by the technology. While some concrete outcomes were achieved at the summit, further progress is needed on mandatory AI safety commitments, socio-technical evaluations and greater public involvement.

The summit resulted in a range of agreements and pledges, including the Seoul Declaration, signed by the EU and 10 countries to promote multi-stakeholder collaboration on AI safety and the development of shared risk thresholds for frontier AI models. Companies also committed to developing AI responsibly, but many questioned the effectiveness of voluntary commitments and stressed the need for binding rules and accountability mechanisms to guide AI regulation. International cooperation on AI safety research was another key focus, with efforts to foster collaboration between safety institutes and to broaden the range of voices and perspectives involved in AI research and policymaking.