Last month, the UK government made a bold announcement about integrating AI into the public sector, claiming it would transform how public services are delivered. But even as officials publicly champion this vision, many government departments have been quietly testing algorithmic tools for years, often without transparency.
The thought of AI making critical decisions about our health, education, and justice systems without proper oversight feels surreal, almost like something out of a Kafka novel. We’re only just starting to understand how deeply embedded these technologies are in our lives.
In February 2024, the Department for Science, Innovation and Technology mandated that central government departments disclose their use of algorithmic tools through the Algorithmic Transparency Recording Standard (ATRS) Hub. So far, only 47 records have been published, over half of them created just this year. This lack of transparency raises concerns, especially since some AI pilots for welfare reform are being quietly abandoned amid frustrations and setbacks.
Recent reports show that the government is already using algorithms to determine crucial outcomes: who qualifies for Employment and Support Allowance (ESA), which schoolchildren might be at risk of becoming NEET (not in education, employment, or training), and how offenders are sentenced and released.
With so few records available, it’s fair to question how many departments are using algorithms to steer decisions about our lives without our knowledge. At the same time, the government is pushing through a bill that would weaken current safeguards against automated decision-making (ADM). The UK General Data Protection Regulation (GDPR) currently gives people the right not to be subject to significant decisions made solely by automated systems, protecting us from arbitrary outcomes based on faulty logic. However, the new Data (Use and Access) Bill (DUAB) aims to remove these protections from many decision-making processes, exposing us to potential discrimination and errors without any way to contest them.
The proposed bill would allow entirely automated decisions, as long as they don’t involve ‘special category data’. This sensitive category includes data about health, ethnicity, political beliefs, and more. While strong protections for such data are crucial, decisions that rely only on non-special category data can still lead to harmful outcomes.
The Dutch childcare benefits scandal is a sobering example: an algorithm wrongly flagged low-income and ethnic minority families as fraud risks, pushing many into poverty. Similarly, during the COVID-19 pandemic, the A-level grading crisis produced unfair disparities between private and state school students, even though the grading didn’t directly involve sensitive data.
Non-special category data can still reflect protected characteristics. Durham Constabulary, for instance, used a risk tool that considered residential postcodes, unintentionally perpetuating biases against people from socioeconomically disadvantaged areas. Reducing existing safeguards could pave the way for more disasters like these.
Moreover, a decision isn’t considered automated if there is meaningful human involvement. In practice, this could be a human reviewing an AI-generated hiring recommendation. But simply having a person in the loop isn’t always enough.
Take the Department for Work and Pensions (DWP): it says that after its ESA Online Medical Matching Tool recommends a matching profile, an agent reviews the case before making a decision. Yet the department acknowledges that such tools can diminish the influence a human actually has if they simply follow the algorithm’s lead. This tendency, known as ‘automation bias’, threatens genuine human input in decision-making processes.
What counts as “meaningful human involvement” can vary widely. A Dutch court, for example, determined that Uber’s automated firing of drivers lacked genuine human oversight because drivers couldn’t appeal and the staff making decisions often lacked the necessary context. The DUAB allows the Secretary of State for Science, Innovation and Technology to redefine what “meaningful” involvement looks like, which could lead to decisions being rubber-stamped by humans who aren’t properly informed.
The UK’s enthusiastic push for AI reflects a broader international trend, but the unchecked growth of automated decision-making and the erosion of the protections around it pose serious risks for everyone.