Friday, October 18, 2024

Global AI Regulation Is Intensifying: Can the UK Afford to Stay Silent?

This summer marked a significant period for global AI regulation, with California lawmakers unexpectedly passing SB 1047, a bill to regulate the largest AI models. Meanwhile, on August 1, after nearly six years of discussions, the EU AI Act officially took effect.

The UK’s position on AI regulation remains ambiguous. The UK may be pursuing a deliberate strategy of regulatory forbearance, but that flexibility will shrink as major tech companies and enterprises fall into line with regulations made elsewhere, narrowing the space for an independent UK approach.

Initially, the U.S. also approached AI regulation cautiously. In October 2022, the White House released a Blueprint for an AI Bill of Rights, outlining sensible but voluntary principles for trustworthy AI. President Biden built on this in October 2023 by signing an executive order on trustworthy AI directed primarily at U.S. federal agencies, which are far more likely than private businesses to heed the president’s recommendations.

Given the upcoming U.S. elections, it was realistic to anticipate limited federal progress; Trump, for instance, has pledged to repeal Biden’s executive order if he regains office. California regulators, however, jumped ahead of the federal government with more stringent proposals that go beyond voluntary, soft-law measures.

The proposed California law targets frontier AI models that cost $100 million or more to train or exceed certain computational thresholds, imposing additional testing, documentation, governance, audit, and takedown requirements. In practice, this will predominantly affect models from major players such as OpenAI and Google. Because those services are used worldwide, the law is set to have immediate repercussions in the UK market, despite the UK having no say in its specifics.

EU AI Act Overview

The EU AI Act encompasses a broader range of applications. It covers large-scale foundation models alongside a wide variety of AI systems, from traditional to cutting-edge technologies. The act categorizes uses by risk: a small number of applications are banned outright, many others are deemed high-risk and must meet additional requirements, certain uses carry limited-risk transparency obligations, and the remainder, classified as minimal or no risk, are covered only by voluntary codes of conduct. Large general-purpose models are treated separately, with transparency obligations for all providers and additional duties for models deemed to pose systemic risk.

For UK enterprises, any intention to serve European consumers will necessitate compliance with the EU AI Act. Should the UK develop its own regulations in the future, expect significant lobbying from sectors like banking, telecommunications, and insurance to align closely with the EU framework. The UK would do well to monitor how effective the EU rules prove as they are phased in, and let that evidence guide its own regulatory strategy.

Proactive Approach Needed in the UK

So, what has the UK achieved in terms of AI regulation? Apart from vague commitments in the King’s Speech to establish requirements on developers of the most powerful AI models, and an investment of £1.3 billion in AI and supercomputing, not much has transpired. Minimal action of this kind will not shield the UK from global regulatory pressure.

Fundamentally, AI ethics and regulation should not devolve into a competitive race. It is in everyone’s best interest to promote beneficial uses of AI while preventing misuse. Acknowledging national differences in attitudes toward opportunity, risk, and regulation is vital.

A constructive move forward would be for UK lawmakers to adopt a more proactive stance on AI regulation—assertively signaling that new legislation is forthcoming, aligning with U.S. and EU law where appropriate, and differentiating where necessary. Such an approach would better prepare organizations for the regulatory landscape ahead, in not only the European and American contexts but the global environment as a whole.