Saturday, October 19, 2024

Lord Proposes Legislation to Govern AI and Automation in the Public Sector

Liberal Democrat peer Lord Clement-Jones has proposed a private member's bill to regulate the use of artificial intelligence (AI), algorithms, and automated decision-making technologies by public authorities. He argues the legislation is needed to prevent a repeat of the Post Office Horizon scandal.

The bill, introduced on September 9, stipulates that citizens who receive a negative outcome from an automated system, such as a refused benefit claim or immigration decision, should have the right to access the information behind that decision so they can contest it. If a citizen chooses to appeal an automated decision, the bill requires the government to provide an independent dispute resolution service.

Additionally, Lord Clement-Jones’s Public Authority Algorithmic and Automated Decision-Making Systems Bill requires public authorities to produce impact assessments for any AI algorithms that inform decision-making. This requirement includes conducting mandatory bias assessments to ensure compliance with the Equality Act and Human Rights Act. Public authorities would also need to maintain a transparency register to inform the public about the usage of each system.

“The Post Office/Horizon scandal underscores the severe human consequences that arise from the lack of adequate checks on these automated systems. Currently, public authorities have no legal requirement to be transparent regarding the algorithms they employ. I strongly encourage the government to endorse these reforms,” stated Lord Clement-Jones. “In the UK, we often legislate only after damage has occurred. It’s crucial to adopt a proactive stance on safeguarding citizens as they interact with new technologies. We must take a leading role in AI regulation and avoid allowing another Horizon scandal to happen.”

To enhance the clarity of automated decisions, the bill includes provisions mandating that public authorities implement systems with automatic logging capabilities, ensuring continuous monitoring and review. It also prohibits the procurement of systems that cannot be adequately scrutinized, particularly when monitoring is restricted by contractual limitations or the intellectual property rights of suppliers.

Several other parliamentarians have advanced similar private members' bills addressing AI. Lord Christopher Holmes, for instance, introduced the Artificial Intelligence (Regulation) Bill in November 2023, criticizing the previous government's "wait and see" strategy as potentially damaging. Meanwhile, backbench Labour MP Mick Whitley presented a worker-centric AI bill in May 2023 to tackle the negative implications of AI in the workplace.

In April 2024, the Trades Union Congress (TUC) released a comprehensive proposal for workplace AI regulation, outlining numerous new legal rights and protections to mitigate the adverse effects of automated decision-making on employees.

Since the publication of the previous Conservative government's AI white paper in March 2023, there has been extensive discussion about the suitability of its proposed "agile, pro-innovation" framework for AI regulation. The framework envisaged the government relying on existing regulatory bodies to develop tailored rules for the use of AI in their respective sectors.

Following the release of the white paper, the government placed heavy emphasis on AI safety, arguing that businesses would be reluctant to adopt AI unless they were confident that risks such as bias, discrimination, and effects on employment and justice were being effectively managed. The government reinforced this position in its January 2024 consultation response, asserting that it would refrain from legislating on AI until the time was right, but it indicated in February 2024 that binding regulations might eventually be introduced for the riskiest AI systems.

The recent King’s Speech highlighted that the new Labour government “will seek to establish appropriate legislation to impose requirements on those developing the most advanced artificial intelligence models,” although no specific AI legislation is currently planned. The only reference to AI in the accompanying briefing was in relation to a Product Safety and Metrology Bill, which aims to address “new product risks and opportunities to ensure the UK remains aligned with technological advancements, such as AI.”

While private members' bills seldom become law, they serve as an important platform for sparking debate on key issues and gauging opinion within Parliament.