Amnesty International has raised serious concerns about the Danish welfare authority's use of artificial intelligence, arguing that it violates individual privacy and risks discriminating against vulnerable groups. The analysis centers on Udbetaling Danmark (UDK), established in 2012 to streamline the payment of various welfare benefits across five municipalities. UDK uses AI algorithms to flag individuals suspected of social benefit fraud; the tools were developed with the help of ATP, Denmark's largest pensions processing company, along with several private corporations.
The report argues that these fraud detection systems infringe on the rights of benefit recipients, particularly their rights to privacy and equality. It specifically points out that marginalized groups, such as people with disabilities, low-income earners, and migrants, face additional hurdles in accessing social benefits because of these automated systems. Hellen Mukiri-Smith of Amnesty said this kind of mass surveillance has created a safety net that targets, rather than protects, the very people it is supposed to help.
According to Amnesty, UDK's system likely falls under the "social scoring" category outlined in the EU's new AI Act, which covers systems that evaluate individuals based on their behavior or personal traits. Mukiri-Smith stressed that a system with these implications should be banned. UDK provided redacted documents about its algorithms but declined to allow a thorough audit by Amnesty, arguing that its practices fall outside the EU regulations.
Amnesty has urged the European Commission to clarify which practices qualify as social scoring. It also wants the Danish authorities to halt use of the system until it is verified to comply with the AI Act. Mukiri-Smith added that data tied to "foreign affiliation" should be banned from fraud-risk assessments, and insisted on transparency and oversight in the development of these algorithms.
In this system, UDK, together with ATP, relies on up to 60 algorithms to identify potential fraud. The approach involves extensive collection of personal data on millions of Danish residents, including residency status, citizenship, and other details that can act as proxies for race, ethnicity, or sexual orientation. Amnesty highlighted that this data mining builds a distorted profile of individuals, tracking where they live and work, their health history, and even their international ties.
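To make the proxy concern concrete, here is a deliberately simplified, hypothetical scoring sketch in Python. Nothing in it reflects UDK's actual models or weights, which Amnesty was not permitted to audit; the attribute names and weights are invented solely to illustrate how a score assembled from administrative fields such as citizenship can single people out along lines that correlate with ethnicity or origin.

```python
# Purely illustrative sketch -- NOT UDK's actual system. It shows how a
# fraud-risk score built on seemingly neutral administrative fields can
# act on proxies for protected characteristics.

from dataclasses import dataclass

@dataclass
class Resident:
    citizenship_non_eea: bool       # can correlate with ethnicity/origin
    foreign_affiliation: bool       # ties to countries outside the EEA
    atypical_household: bool        # "non-traditional" living arrangement
    frequent_address_changes: bool  # ordinary administrative signal

# Hypothetical weights, chosen for illustration only.
WEIGHTS = {
    "citizenship_non_eea": 0.30,
    "foreign_affiliation": 0.25,
    "atypical_household": 0.25,
    "frequent_address_changes": 0.20,
}

def risk_score(r: Resident) -> float:
    """Sum the weights of every flagged attribute (range 0.0 to 1.0)."""
    return sum(w for name, w in WEIGHTS.items() if getattr(r, name))

def flag_for_investigation(r: Resident, threshold: float = 0.5) -> bool:
    """Select a case for manual fraud investigation above the threshold."""
    return risk_score(r) >= threshold

# Two residents with identical benefit claims: only the proxy attributes
# differ, yet only one of them is selected for investigation.
a = Resident(False, False, False, True)
b = Resident(True, True, False, True)
print(risk_score(a), flag_for_investigation(a))  # 0.2  False
print(risk_score(b), flag_for_investigation(b))  # 0.75 True
```

Because inputs like citizenship and foreign ties correlate with protected characteristics, two claimants with identical benefit histories can receive very different scores, which is the disparate-targeting pattern Amnesty describes.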
People who have been subjected to UDK's scrutiny report immense psychological pressure. Stig Langvad of Dansk Handicap Foundation described the experience as akin to "sitting at the end of a gun." UDK defended its practices, stating that its data collection methods are legally justified.
The report also points to biases within the welfare system itself, arguing that the system exacerbates discrimination against migrants and refugees: residency requirements for benefit claims already disadvantage people from certain backgrounds. Among UDK's algorithms is "Really Single," which assesses family dynamics to predict fraud risk but relies on undefined criteria, leaving room for arbitrary judgments. Such vagueness raises concerns about who gets investigated, particularly people in non-traditional living situations.
Gitte Nielsen of Dansk Handicap Foundation described the toll of this constant surveillance, noting that many of her organization's members suffer from anxiety and depression under the pressure to prove their eligibility for benefits. The algorithms also factor in "foreign affiliation," subjecting beneficiaries with strong ties to countries outside the EEA to additional scrutiny. UDK has claimed that using "citizenship" as a parameter does not amount to processing sensitive personal data.
Overall, Amnesty International's criticisms paint a troubling picture of how integrating AI into welfare systems can lead to privacy violations and discriminatory practices.