Friday, January 2, 2026

Keep Your Ubuntu-based VPN Server Up to Date

Enterprise-Grade Security for Small Businesses with Linux and Open Source

Ethics for Ephemeral Signals – A Manifesto

When Regex Falls Short – Auditing Discord Bots with AI Reasoning Models

Cisco Live 2025: Bridging the Gap in the Digital Workplace to Achieve ‘Distance Zero’

Agentforce London: Salesforce Reports 78% of UK Companies Embrace Agentic AI

WhatsApp Aims to Collaborate with Apple on Legal Challenge Against Home Office Encryption Directives

AI and the Creative Industries: A Misguided Decision by the UK Government

CityFibre Expands Business Ethernet Access Threefold

Review of the AI Seoul Summit by Computer Weekly

Dozens of governments and companies reiterated their commitments to the safe and inclusive development of artificial intelligence (AI) at the second global AI summit, held in South Korea. However, concerns were raised about the dominance of narrow corporate interests in the AI safety field, along with calls for broader participation from the public, workers, and others affected by the technology. While the summit produced some concrete outcomes, further progress is needed in areas such as mandatory AI safety commitments, socio-technical evaluations, and greater public involvement.

The summit resulted in various agreements and pledges, including the EU and 10 countries signing the Seoul Declaration to promote multi-stakeholder collaboration on AI safety and the development of shared risk thresholds for frontier AI models. Companies also pledged to develop AI responsibly, but many observers questioned the effectiveness of voluntary commitments and stressed the need for harder rules and accountability mechanisms to guide AI regulation. International cooperation on AI safety research was another key focus, with efforts to foster collaboration between safety institutes and to broaden the range of voices and perspectives involved in AI research and policymaking.