UK police forces are amplifying racial bias through their automated predictive policing systems, according to Amnesty International. These systems rely on profiling individuals or communities before any crime has even occurred.
Predictive policing uses AI and algorithms to forecast criminal behavior, whether targeting specific people or certain locations. Amnesty’s report, titled “Automated Racism,” outlines how these tools disproportionately target poorer and racialized communities that have historically faced over-policing. This creates a vicious cycle: these communities are overrepresented in police data, which leads to more aggressive policing and even more biased data collection.
When stop-and-search data is biased, the outputs of predictive systems reflect that same bias. The result? More stop-and-search incidents and further criminalization of these communities, perpetuating the cycle of discrimination.
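To make the feedback loop concrete, here is a minimal toy simulation (my own illustrative sketch, with invented rates and area names, not a model drawn from the Amnesty report or any police system): two areas have identical underlying offense rates, but one starts out over-represented in police records, and a simple hotspot rule keeps sending patrols wherever the existing data points.

```python
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05     # identical underlying offense rate in both areas
POPULATION = 10_000          # people per area
DETECTION_RATE = 0.30        # share of offenses recorded where patrols are deployed

# Area B starts over-represented in police records after years of heavier policing.
recorded = {"A": 50, "B": 100}

for year in range(1, 6):
    # The "predictive" step: flag the area with the most recorded crime as the hotspot.
    hotspot = max(recorded, key=recorded.get)
    for area in recorded:
        offenses = sum(random.random() < TRUE_OFFENSE_RATE for _ in range(POPULATION))
        # Offenses only enter the data where officers are actually deployed,
        # so the area the model flags is the area whose records keep growing.
        detected = (sum(random.random() < DETECTION_RATE for _ in range(offenses))
                    if area == hotspot else 0)
        recorded[area] += detected
    share_b = recorded["B"] / sum(recorded.values())
    print(f"year {year}: hotspot={hotspot}, "
          f"records A={recorded['A']}, B={recorded['B']}, B's share={share_b:.0%}")
```

Even though the true offense rate never differs between the two areas, area B's share of recorded crime climbs year after year, because only the flagged area generates new records. That self-reinforcing pattern is the cycle Amnesty describes.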
Across the UK, at least 33 police forces have adopted predictive policing tools. Of those, 32 use geographic crime-prediction systems and 11 use tools that profile individuals, with some forces deploying both. Amnesty argues that this approach flagrantly violates both national and international human rights obligations, unjustly targeting people based on race before they have committed any crime and subjecting whole communities to widespread surveillance.
The report highlights tools such as the Metropolitan Police’s “gangs violence matrix,” which assigned risk scores to individuals and faced backlash over its racially disproportionate impact. Greater Manchester Police’s XCalibre database likewise profiles people on the basis of perceived gang affiliations rather than credible evidence of wrongdoing. Essex Police uses data on “associates” to label individuals as criminals based on their social circles, while West Midlands Police admits its hotspot policing tools are often inaccurate.
Sacha Deshmukh, chief executive of Amnesty International UK, emphasizes that these systems label individuals as criminals based on their skin color or economic background. He believes such predictive tools not only harm communities but also make society more racist and unfair. The UK government should ban these technologies, he argues, and give affected communities more transparency and avenues to challenge police decisions.
Amnesty is pushing for stricter regulation of police data systems, including a public register of the tools in use and clear channels for challenging police profiling. Daragh Murray, a senior lecturer at Queen Mary University of London, warns that because these systems operate on correlations rather than actual causes, they risk fostering harmful stereotypes.
The National Police Chiefs’ Council (NPCC) responded that it uses data to enhance crime prevention strategies, saying that hotspot policing and visible patrols help keep communities safe and that it is committed to balancing crime-fighting with building public trust. It also pointed to its Police Race Action Plan, which aims to address racial bias in policing practices.
Concerns about predictive policing have been raised consistently, however. In 2024, a coalition of civil society groups urged the Labour government to impose a total ban on these systems because of their disproportionate effects on marginalized communities. Meanwhile, the EU’s AI Act restricts certain predictive policing technologies, such as individual risk-prediction based on profiling, though not all such systems.
Reports from numerous organizations underline that these systems often deepen societal inequalities and can ruin lives. The House of Lords Justice and Home Affairs Committee has described the current state of AI use in policing as a “new Wild West,” calling for an overhaul of oversight to prevent ongoing discriminatory practices.
In 2022, the UK government largely dismissed those findings, maintaining that the existing framework of checks and balances is adequate and that it is up to individual police forces to decide how to use new technologies effectively.