On January 30, 2025, Sarah Munby, the top civil servant at the Department for Science, Innovation and Technology (DSIT), faced tough questions from the Public Accounts Committee (PAC) about how the government could boost public trust in its growing use of artificial intelligence (AI). Munby acknowledged the need for greater transparency about AI systems deployed in the public sector, pointing out that clearer communication about AI usage, especially in government correspondence with citizens, would help foster trust.
Munby stressed that if the government is not seen as trustworthy, it could hinder progress on AI adoption. A significant step in addressing this issue is the Algorithmic Transparency Recording Standard (ATRS), developed jointly by DSIT and the Central Digital and Data Office (CDDO) and launched in September 2022 to enhance public sector transparency around algorithmic tools.
In February 2024, DSIT announced plans to make the ATRS mandatory for all government departments within the year, and to eventually extend it to the broader public sector. However, the ATRS has faced criticism for its limited uptake, given the large number of AI contracts across government. The National Audit Office found that only eight of the 32 organizations it surveyed said they were consistently compliant with the ATRS, and that at the time of the survey there were just seven entries in the database.
The ATRS currently holds 33 records, 10 of them contributed voluntarily by local authorities on January 28. Munby admitted more entries are needed, saying around 20 more are expected to be published in February, with many others to follow throughout the year. She expressed a clear intention to ensure all AI usage information gets published.
She also emphasized the importance of legal frameworks in building trust. The Data Use and Access Bill includes provisions to ensure proper recourse in cases of automated decision-making, allowing individuals to challenge decisions made by AI.
While the Labour government adopted most recommendations from the recent AI Opportunities Action Plan aimed at boosting trust and adoption, the recommendations did not specifically address transparency.
In written evidence to the PAC, academics from the University of Sheffield, including Jo Bates and Helen Kennedy, stressed the importance of “socially meaningful transparency.” They argued this approach helps the public better understand AI systems, enabling informed use and greater engagement in a data-driven society, and that given known risks such as algorithmic bias, transparency that prioritizes public interests is critical.
Professor Michael Wooldridge from the University of Oxford echoed this view, noting mixed public feeling about AI: while some are enthusiastic, many express concerns about their jobs and privacy, and some even fear existential threats. Wooldridge stressed that these fears, though sometimes unfounded, are significant and must be addressed through transparency to build trust in government AI initiatives.