Take a look at the tech world today: the landscape has changed dramatically in just a few years. While data protection once dominated the conversation, we now face a broader challenge. We are no longer just protecting personal data; we have to navigate a tangled web of laws and regulations governing data in all its forms. Some of these laws focus specifically on data, especially in the digital space, but there is a growing emphasis on artificial intelligence (AI) as well.
The UK government is rolling out its Data Use and Access Bill, which could signal a more unified approach in the tech sector. In reality, though, global regulation of data and AI looks more like a chaotic intersection than a clear path forward. Many people assume the GDPR marked the beginning of data regulation, but it did not. In the UK, the Data Protection Act 1998, which implemented the EU's 1995 Data Protection Directive, laid the groundwork long before the GDPR was introduced. The GDPR did not drastically change the substance of data protection, but it did bring two major shifts: it ramped up the penalties for companies that fail to comply, and it placed data protection firmly on the global stage, making it a topic of discussion worldwide.
The EU has been a leader in data protection, but will it also lead in AI regulation? Since 2018, we’ve seen an influx of new data protection laws influenced by European standards. These laws may vary but often share similar principles. Now, with the surge of generative AI, countries around the world are starting to consider how to regulate this technology.
Europe is once again trying to take the lead with its new AI Act, which moved from proposal to adoption faster than the GDPR did, fueled by the current excitement over AI. However, that buzz often leads to exaggerated promises. Europe hopes the AI Act will have a global impact comparable to the GDPR's, but it is hard to predict whether it will truly succeed. The global community seems far from reaching a consensus on whether AI-specific regulation is even necessary, let alone what it should look like.
As Professor Anu Bradford from Columbia University points out, we can see three main regulatory approaches around the world. In the US, we have a company-driven approach. China has a state-led framework, while Europe focuses on rights. Each approach has its benefits and drawbacks, and these differences highlight a larger unresolved issue. Will regulatory success depend on assessing risks or focusing on outcomes? A lot hinges on whether lawmakers prioritize managing AI risks or fostering innovation and flexibility.
Professor Lilian Edwards suggests another perspective, arguing for 'the good old-fashioned law approach.' We already have laws addressing data protection, intellectual property, consumer rights, and anti-discrimination. If those laws already apply, why do we need new ones just for AI?
But it doesn’t stop there. Each of these approaches has its own complexities, making the global AI regulatory scene look like a sprawling puzzle whose pieces don’t quite fit together. Navigating diverse data protection laws has been tricky enough, but AI regulation is shaping up to be an even greater challenge: global organizations now find themselves unsure of their obligations. As we try to decipher this landscape, it becomes clear that we are not necessarily on a road to nowhere, but we definitely need time to figure things out.