Friday, October 18, 2024

Local Councils Require Enhanced Support for Responsible AI Procurement

Local authorities require enhanced support for the responsible procurement of artificial intelligence (AI) systems, as current government guidance lacks clarity and comprehensiveness on how to procure AI in the public’s best interest, warns the Ada Lovelace Institute (ALI).

A recently published research report from the civil society organization highlights the substantial difficulties councils encounter when trying to navigate existing guidance and legislation, which can be ambiguous and open to different interpretations. This lack of clarity extends to fundamental aspects of AI, such as implementing ethical considerations.

ALI’s findings emerge at a time of increasing anticipation and optimism regarding AI’s capabilities in the public sector. However, the institute cautions that the advantages of this technology can only be realized if the public sector ensures that its adoption is safe, effective, and aligned with public welfare.

The report analyzes 16 relevant guidance, legislative, and policy documents—published between 2010 and 2024—and identifies a “lack of clarity” in applying concepts like fairness, defining public benefit, and ensuring that AI usage remains transparent and comprehensible to affected individuals.

Additionally, the report notes that since many AI technologies are provided by private companies, the procurement process should play a crucial role in evaluating the effectiveness of potential solutions, anticipating and managing risks, and ensuring that AI deployment is proportionate, legitimate, and consistent with broader public sector responsibilities.

However, ALI raised concerns about the level of technical expertise within local governments, a significant gap in current guidance, and about the need for procurement teams to be adequately equipped and empowered to question suppliers, during the procurement process, on the societal impacts of their technologies and the avenues for redress available.

“Our research underscores the urgent need for clearer guidelines and responsibilities, as well as enforceable measures for redress. Procurement teams require improved support and explicit guidance to effectively acquire AI that is ethical, effective, and serves the interests of society,” stated Anna Studman, the report’s lead author and a senior researcher at ALI.

“AI and data-driven systems can undermine public trust and diminish benefits if their predictions or outcomes are biased, harmful, or ineffective. The procurement phase offers a vital opportunity for local authorities to scrutinize suppliers regarding the potential societal ramifications of their technologies.”

To enhance AI procurement in local councils, ALI recommends consolidating central government guidance to clarify legal obligations and best practices throughout the procurement lifecycle; developing a standard for algorithmic impact assessments that councils can utilize; and fostering consensus on defining critical terms like “fairness” and “transparency” within the public sector.

Concerning transparency, ALI emphasizes that local government bodies should adopt a holistic approach, considering internal processes, fair competition, and how to keep communities informed and empowered to challenge automated decisions affecting them.

“It is crucial for public sector procurers to feel assured about the products they are acquiring so that neither they nor the public are endangered,” remarked Imogen Parker, associate director at ALI.

“Integrating a robust and ethical procurement process amidst budget constraints presents significant challenges. However, it is vital to assess the potential costs of inaction, both financially and ethically, as starkly illustrated by the Post Office Horizon scandal.”

The report further suggests investing in training for local government procurement teams on how to utilize and audit AI systems. Additionally, it urges the government to ensure the rollout of the Algorithmic Transparency Recording Standard (ATRS) extends across the entire public sector, rather than being limited to central government departments.

The ATRS was developed by the Central Digital and Data Office (previously part of the Cabinet Office) in collaboration with the government’s Centre for Data Ethics and Innovation in November 2021, but its adoption has been limited, and the standard was not promoted by the Conservative government in its March 2023 whitepaper on AI governance.

ALI has previously raised concerns over the deployment of “foundation” or large language models (LLMs) in the public sector, highlighting risks related to bias and discrimination, privacy violations, misinformation, security, dependency on industry, workforce implications, and unequal access.

The institute further noted the risk that public sector adoption of these models will be driven by novelty rather than by an assessment of the best available solutions.

“Public-sector users should thoroughly evaluate the alternatives before opting to implement foundation models. This evaluation involves comparing proposed use cases with more established and evaluated options that may offer greater effectiveness, better value for money, or carry fewer risks—such as using a narrow AI system or employing a human for customer service instead of developing a foundation model-powered chatbot.”