Amnesty International is calling for an immediate halt to the algorithm-driven system that Sweden's welfare agency uses to flag people for benefit fraud investigations. A joint investigation by Lighthouse Reports and Svenska Dagbladet revealed that the system, operated by Försäkringskassan, unfairly targets marginalized groups, including women, individuals with foreign backgrounds, low-income earners, and those without university degrees.
The investigation, published on November 27, 2024, found that the machine learning model used to detect fraud often misses the actual offenders, mainly men and wealthy individuals. Since its introduction in 2013, the system has assigned risk scores to applicants: a high score triggers an automatic investigation, while lower scores warrant less scrutiny. Those flagged for investigation undergo intrusive checks, in which investigators can sift through social media, request data from banks and schools, and even interrogate neighbors. People who have been wrongly flagged report significant delays and legal difficulties in accessing their benefits.
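The reporting describes the system as a score-and-threshold pipeline: the model scores each application, and scores above a cut-off are routed to fraud investigators. The sketch below illustrates that general pattern only; the features, weights, and threshold are hypothetical stand-ins, since Försäkringskassan has not disclosed how its model actually works.

```python
# Illustrative score-and-threshold triage, NOT Försäkringskassan's model.
# The features, weights, and threshold here are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Application:
    applicant_id: str
    features: dict  # e.g. {"reported_income": 0.2, "hours_worked": 0.5}


def risk_score(app: Application, weights: dict) -> float:
    """Weighted sum over application features (placeholder for the real model)."""
    return sum(weights.get(name, 0.0) * value for name, value in app.features.items())


def triage(apps, weights, threshold=0.8):
    """Split applications into those flagged for manual investigation and the rest."""
    flagged, routine = [], []
    for app in apps:
        (flagged if risk_score(app, weights) >= threshold else routine).append(app)
    return flagged, routine
```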
David Nolan from Amnesty Tech described the system as a “witch hunt” against anyone labeled as a potential fraudster. He pointed out that these algorithms reinforce existing biases and inequalities. Once flagged, individuals face ongoing suspicion, creating a dehumanizing experience that violates their right to social security and privacy.
The investigation analyzed data previously collected by the Swedish Inspectorate for Social Security (ISF). By testing the algorithm against fairness metrics such as demographic parity, the analysis confirmed that marginalized groups are disproportionately flagged. Försäkringskassan has not been forthcoming about how the system operates and has denied several freedom of information requests about it. When Anders Viseth, head of analytics at Försäkringskassan, was presented with the findings, he dismissed the concerns, saying he saw no issue with the selections the system makes.
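Demographic parity, one of the fairness metrics mentioned, simply asks whether different groups are flagged at similar rates. The snippet below is a rough illustration of that check, not the investigators' actual code, and the sample data is fabricated.

```python
# Rough illustration of a demographic-parity check: compare flag rates across
# groups. The group labels and sample records are fabricated for the example.
from collections import defaultdict


def flag_rates(records):
    """records: iterable of (group, was_flagged) pairs -> flag rate per group."""
    totals, flags = defaultdict(int), defaultdict(int)
    for group, was_flagged in records:
        totals[group] += 1
        if was_flagged:
            flags[group] += 1
    return {group: flags[group] / totals[group] for group in totals}


def demographic_parity_gap(records):
    """Largest difference in flag rates between any two groups (0 means parity)."""
    rates = flag_rates(records)
    return max(rates.values()) - min(rates.values())


# Fabricated example: one group is flagged three times as often as another.
sample = ([("women", True)] * 30 + [("women", False)] * 70
          + [("men", True)] * 10 + [("men", False)] * 90)
print(flag_rates(sample))              # {'women': 0.3, 'men': 0.1}
print(demographic_parity_gap(sample))  # roughly 0.2
```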
In response to Amnesty's call to discontinue the system, a spokesperson for Försäkringskassan defended it as a necessary measure to protect taxpayer money. They emphasized that the system targets specific applications rather than individuals, and that flagged applicants still receive benefits if they are entitled to them. They also argued that full transparency about the algorithms could allow fraudsters to exploit the system.
Nolan warned that if the current approach continues, Sweden risks repeating a scandal like the one in the Netherlands, where algorithms wrongly accused thousands of low-income parents of fraud, resulting in severe repercussions for many, particularly those from ethnic minority backgrounds.
Under the EU's AI Act, which entered into force on August 1, 2024, public authorities must adhere to strict rules when using AI for essential services, including assessing risks to fundamental rights before deploying such systems. The ISF previously criticized Försäkringskassan's algorithm for failing to ensure equal treatment, a point the agency contested. In addition, a former data protection officer has raised concerns that the system violates European data protection law because it lacks a legitimate basis for profiling individuals.
On November 13, 2024, Amnesty International highlighted similar issues in Denmark, where AI tools in the welfare sector risk furthering discrimination against vulnerable groups.