Friday, October 18, 2024

Minister, Brace Yourself for the Next Horizon Scandal in Your Department

Dear incoming secretaries of state,

Your department is very likely running algorithmic prediction and classification systems, some of which may be badged as “AI.” These systems can harm citizens and may not comply with the law. Left unchecked, they could produce a situation similar to the Horizon scandal, in which your position would be difficult to defend.

I recommend that you require a full Algorithmic Transparency Reporting Standard (ATRS) document for every algorithmic system your department uses to inform decisions or policies. Each document should analyze the system’s legal basis, its compliance with relevant legislation, its accuracy, and its suitability for purpose. Any use of “AI” or “generative AI” products should be covered by this assessment. Systems that pose a risk of harm, are potentially unlawful, or lack proven accuracy should be discontinued.

Consider stopping spending on “AI” until its value is proven, refusing to accept documents produced with “generative AI” products, and banning predictive systems altogether.

The point is to avoid a repeat of the Post Office Horizon scandal. The National Audit Office has already highlighted problems with the use of AI in government, raising concerns about legality, transparency, hype, inaccuracy, and misuse. Stakeholders are well aware of the risks that algorithmic and AI methods pose in government, and transparency is currently lacking.

Reliance on AI systems that may not be effective, accurate, or transparent has to be confronted. Completing ATRS documents for all relevant systems, and subjecting them to proper scrutiny, is urgent: it would mitigate these risks and increase public trust in your department’s work.

Sincerely,
Paul Waller, Thorney Isle Research