Misusing artificial intelligence (AI) can lead to serious and costly problems. Lionsgate, a movie studio, learned the hard way that quotes produced by AI systems need verification like any other source. Microsoft faces a lawsuit from a German journalist after Bing Copilot falsely accused him of crimes he had merely reported on. And a US telecom company is paying $1 million for transmitting robocalls that used an AI-generated voice mimicking President Biden.
Despite the risks, businesses are eager to adopt generative AI (GenAI) and are scrambling to put governance and compliance measures in place. Data privacy and security are often the main drivers of those controls, though copyright rules also play a role. Chief information officers (CIOs) struggle to pinpoint which regulations apply, from how personal data can lawfully be used for AI training to how to ensure transparency and fairness in AI outputs.
Organizations are paying close attention to upcoming AI legislation, but the landscape remains confusing and inconsistent. UN Secretary-General António Guterres aptly described it as a “patchwork” of rules that don’t always align. With governments changing in both the UK and the US, predicting future regulation becomes harder still, especially for UK businesses caught between American and European Union (EU) standards.
In Brazil, for example, the data protection authority temporarily halted Meta’s use of publicly available user information for AI training, citing the country’s data protection law. Meta had to inform users and give them the option to opt out. Companies must navigate this complex regulatory environment carefully, especially as enforcement grows more stringent.
In the UK, an AI Bill is likely coming, but its specifics remain uncertain. The new government is expected to take a different approach to regulation than its predecessor, which was more focused on innovation. The UK AI Safety Institute may emerge as an additional regulatory body, working alongside existing organizations, and the government is looking to create an ecosystem of “AI assurance” tools to help businesses mitigate the risks tied to AI deployment.
In recent statements, Baroness Jones from the UK government highlighted intentions to introduce targeted regulations for companies developing advanced AI systems. Ofcom also reminded online service providers that the Online Safety Act will apply to GenAI models, indicating that companies using chatbots will need to test them for safety and compliance.
The Information Commissioner’s Office (ICO) has begun requesting detailed disclosures from major platforms, including LinkedIn, Google, and Microsoft, about the data used to train their AI systems. Existing laws like the Data Protection Act 2018, derived from GDPR, still govern AI activities. Experts warn that companies must understand their obligations concerning personal data, as the definition is broad and encompasses a wide range of identifiable information.
Lilian Edwards, a tech law professor, stresses that organizations shouldn’t overlook legislation that doesn’t explicitly mention AI, as existing laws regarding discrimination and consumer rights still apply. Companies need to ask themselves if their AI systems might violate these laws, especially with the potential for misinformation or incorrect outputs from AI.
In the US, the absence of a comprehensive federal AI law complicates the landscape. Each state has different regulations, some influenced by executive orders aimed at establishing frameworks for safe AI use. California, meanwhile, has enacted its own AI laws, including regulations covering deepfakes and digital replicas.
The EU stands apart as the first jurisdiction to adopt comprehensive AI-specific legislation, mainly through the EU AI Act, and AI providers are already modifying their products to comply with the new framework. If the EU’s approach becomes a global standard, as GDPR did, it could streamline compliance for many businesses, but the act also introduces tough penalties for non-compliance.
The EU AI Act categorizes AI systems by risk level, with requirements around user privacy, data security, and compliance scaling accordingly. Higher-risk applications, particularly in the public sector, will face stricter scrutiny, and companies developing AI systems for hiring or other sensitive functions will need to follow rigorous assessment procedures.
As organizations adapt to these evolving rules, they will need staff trained in responsible AI use and compliance; under the EU AI Act, those obligations begin applying in February 2025. This goes beyond legal teams: understanding AI’s opportunities and risks is crucial for everyone working with these technologies. Being proactive rather than reactive can help businesses protect their interests and reputation in an increasingly regulated landscape.