Saturday, November 23, 2024

UK structural racism problem exacerbated by automated police technology

According to a joint submission to the United Nations by the Runnymede Trust and Amnesty International, the use of artificial intelligence (AI) and facial recognition technologies in policing is eroding the civil and political rights of people of color in the UK. The submission highlighted disparities faced by people of color across criminal justice, health, education, employment, and immigration, and raised particular concerns about the impact of AI and automation in policing.

The report noted instances of live facial recognition technology misidentifying people of color and raised questions about transparency and data usage in facial recognition searches conducted by the Home Office. It also highlighted how automated systems such as predictive policing and Automatic Number Plate Recognition (ANPR) can lead to human rights violations and even fatal outcomes, as in the case of 24-year-old Chris Kaba, who was fatally shot by police after the car he was driving was flagged by an ANPR camera.

The submission recommended prohibiting predictive and profiling systems in law enforcement, establishing public oversight of high-risk AI usage, and imposing legal limits on AI technologies to protect human rights. It also called for an inquiry into police gang databases to examine their effectiveness, their compliance with human rights law, and potential racial biases.

Civil society groups have urged the UK government, under its new administration, to ban AI-powered predictive policing and biometric surveillance systems, citing concerns about discrimination and inequality in policing. These groups argue that without strong regulation, AI systems will continue to infringe on rights and entrench existing power imbalances.

The report also highlighted ongoing concerns about police use of facial recognition and biometric technologies, echoing previous warnings about the lack of oversight and the risk of the UK becoming a surveillance state if these issues are not addressed. The House of Lords has likewise raised concerns about the discriminatory outcomes and human rights risks associated with advanced algorithmic technologies used in policing.

Overall, the report emphasizes the need for stronger regulation and oversight of AI and facial recognition technologies in policing to protect the rights and dignity of all individuals, particularly those from marginalized communities. The Home Office is currently considering the submission's findings and recommendations.