Wednesday, April 2, 2025

Our Data, Our Choices, Our AI Future: The Case for an AI Regulation Bill

Last July’s General Election had significant repercussions. One major impact was the halt of my AI Regulation Bill, which was all set to move from the House of Lords to the Commons. Fast forward almost a year, a new government, and a fresh Parliament: last week I reintroduced my AI Bill.

Back in November 2023, the urgency of regulating artificial intelligence was already clear. That urgency has only grown since, yet we seem further from meaningful regulation than ever. The landscape has changed dramatically: a government that was eager to regulate AI while in opposition has, eight months into office, produced no sign of a Bill, and appears to be waiting to align its plans with the US.

Take the Paris AI Action Summit earlier this year, where many countries signed a declaration on inclusive and sustainable AI. The UK and US, however, opted not to sign. In another development, the AI Safety Institute was rebranded as the AI Security Institute, indicating a shift in focus from the societal impacts of AI to cyber security risks.

All of this underscores the growing need for AI regulation in the UK. We have to push back against the persistent myth that innovation and regulation cannot coexist; it is a false binary. The real challenge is crafting sensible regulation that reinforces innovation rather than stifling it, a challenge that is crucial in this digital age.

Without specific regulations for AI, we as consumers, creatives, and citizens bear the risks of these technologies. History teaches us that thoughtful regulation benefits everyone involved—citizens, consumers, innovators, and investors. Sure, there’s bad regulation out there, but that doesn’t mean regulation itself is inherently bad. Look at the UK’s open banking initiative, which has influenced over 60 nations. It’s a smart regulatory choice that benefits consumers, innovators, and investors alike.

When it comes to AI, a set of technologies brimming with potential for positive change, we need the right regulations in place to maximize those benefits.

In my Bill, I propose a flexible, principles-based regulatory framework. First, we establish an AI Authority. Think of it as a nimble regulator, not a cumbersome bureaucracy. This body would coordinate across existing regulators to address the opportunities and challenges AI presents, and would identify regulatory gaps, such as those around the use of AI in recruitment.

The AI Authority would also champion the principles outlined in the previous government’s white paper, putting them into law. Another key element of the Bill is the requirement for businesses that develop or use AI to appoint AI responsible officers. These officers must ensure their AI systems operate safely, ethically, and without bias.

Again, this need not create a bureaucratic nightmare: the Companies Act already provides a solid reporting framework that these requirements could build on. With no dedicated AI regulation in place, consumers and citizens remain exposed to the risks of these technologies. Clear labeling would go a long way: anyone supplying an AI product or service should provide clear, upfront information, including health warnings and consent options. The technology to make this happen already exists.

Creative professionals also need protection. No AI business should exploit others’ intellectual property without permission and fair compensation.

Perhaps the most critical aspect of my Bill is its focus on public engagement. It requires the government to implement a long-term program of public engagement. Meaningful dialogue is essential if we are to move forward together, recognizing both the risks and the benefits.

We can learn from the Warnock inquiry, established in the 1980s as IVF technology emerged; that process had the luxury of time. With AI developing at a dizzying pace, we don’t. Fortunately, today’s technologies allow for real-time public engagement at a scale that wasn’t possible even a few years ago. Without that engagement, many people could miss out on the advantages AI offers while still facing substantial downsides.

In summary, we need comprehensive, cross-sector AI regulation to protect citizens, consumers, creatives, innovators, and investors. Let’s make this a reality, prioritizing our data, our choices, and our AI futures.