Monday, January 19, 2026


AI companies cannot be relied on to willingly disclose risk information

Current and former employees of artificial intelligence (AI) companies are calling for stronger whistleblower protections, arguing that companies cannot be trusted to voluntarily share information about their systems' capabilities and risks.

During the second global AI Summit in Seoul, 16 companies signed the Frontier AI Safety Commitments, a set of voluntary measures aimed at developing AI technology safely. However, employees of signatory companies such as OpenAI, Anthropic, and DeepMind say these voluntary arrangements are insufficient for effective oversight.

The employees raised concerns that confidentiality agreements prevent them from voicing their misgivings, and that there are no protections for disclosing risks outside regulated areas. They are calling for greater transparency and accountability from AI companies, including the ability to raise concerns anonymously and without fear of retaliation.

Prominent AI experts, including Stuart Russell, Geoffrey Hinton, and Yoshua Bengio, have backed these calls for increased transparency and accountability. Bengio, who was selected to lead the first-ever frontier AI State of the Science report, emphasised the need for formal regulatory measures to supplement the voluntary commitments made by AI companies.

While OpenAI reiterated its commitment to building safe AI systems and engaging with stakeholders, the employees stressed that it is possible to protect genuinely confidential information while still allowing risk-related concerns to be reported. DeepMind and Anthropic did not respond to requests for comment.

Overall, there is a growing call for stronger whistleblower protections and regulatory measures to ensure that AI companies prioritize safety and accountability in their development processes.