The House of Lords is currently examining the Data (Use and Access) Bill closely. This week, we focused on automated decision-making (ADM). Over the past two years, we’ve seen a surge in exclusively automated decisions affecting our lives: applications for loans and job offers increasingly hinge on algorithms rather than human judgment.
ADM is making its way into various sectors, often without proper oversight or a way to appeal decisions. From job recruitment to critical healthcare decisions, the implications are significant. It’s crucial that the existing safeguards in data protection law guarantee that people understand when a significant decision about them is automated, why it’s made, and how they can challenge it or request a human review.
The rules here are strict: Article 22 of the UK GDPR, which sits alongside the Data Protection Act 2018, prohibits solely automated decisions with legal or similarly significant effects unless the decision is based on the person’s explicit consent or falls within certain legal exceptions, and even then individuals retain the right to human intervention. Past incidents, like the Italian Data Protection Authority’s ruling against Deliveroo’s use of automated systems for managing gig workers, reinforce the need for these protections.
As we look at amendments to the ADM provisions, it’s clear we have a long way to go to ensure the bill provides adequate safeguards and rights. The current draft, specifically Clause 80, proposes significant changes. It seems to relax the restrictions on fully automated decisions, permitting them as long as individuals can express their views, request human involvement, and challenge those decisions. These rights would fall under a new Article 22C.
The bill also empowers the secretary of state to define what constitutes “meaningful involvement” in ADM decisions, allowing for potential waivers of restrictions based on technological advancements or changing societal expectations.
This shift could dilute rights significantly. To counter it, I’ve proposed two amendments. First, individuals should have the right to a personalized explanation of any automated decision affecting them. That explanation must be clear, simple, accessible, and easy to obtain, detailing how their data influenced the decision so that they can mount a meaningful challenge if necessary.
Second, data controllers must ensure that the humans who review these automated decisions have the necessary skills, training, and authority to challenge and rectify them. A reviewer without the power to make meaningful changes is pointless.
Protecting individuals in automated decision-making is vital for maintaining public trust in AI. How much control people feel they have over significant decisions impacts their attitudes toward these technologies. That’s why we’re pushing for these amendments, and why I emphasized public engagement and trust in my proposed AI Regulation Bill earlier this year.
During the debate on our amendments, the minister acknowledged our discussions, asserting that the Information Commissioner’s Office would update guidance on human review after the bill’s passage. However, she also expressed confidence that the new provisions would ensure meaningful human involvement, preventing automated machines from determining people’s futures.
Many of us remain skeptical. This marks the third time we’ve encountered a data bill in Parliament, and despite previous attempts, there’s still much to refine in this version. We’ll persist in advocating for improvements for everyone, believing in the potential of a data-driven society that benefits all.