Dutch court rules AI benefits fraud detection system violates European human rights law

SyRI was used to predict who may be at high risk of committing housing or social security fraud.
Written by Charlie Osborne, Contributing Writer

A Dutch court has ordered that an algorithm-based system used by the government to identify and track down potential housing and benefits cheats be dropped with immediate effect.

As reported by DutchNews, on Wednesday, the District Court of The Hague ruled that the system conflicts with European human rights and privacy protections.

Dubbed System Risk Indication (SyRI), the automated machine learning (ML) tool was used by local Dutch authorities to draw up profiles and lists of individuals considered to be at high risk of committing benefits fraud.

According to the publication, SyRI creates risk profiles from individuals who committed social security fraud in the past and then scans for "similar" citizen profiles, generating leads for potential investigations into others who may also be committing fraud, or who may be at high risk of doing so in the future.
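
The ruling does not disclose SyRI's internals, but the general pattern described above -- scoring people by how closely their data resembles profiles of past confirmed fraud cases -- can be illustrated with a simple similarity-based flagging routine. The Python sketch below is purely hypothetical: the feature vectors, the cosine-similarity scoring, and the cut-off threshold are illustrative assumptions, not details of SyRI.

# Purely illustrative sketch of similarity-based risk flagging.
# Nothing here reflects SyRI's actual features, data, or scoring rules;
# the feature vectors and the 0.99 threshold are hypothetical.
import numpy as np

# Hypothetical numeric profiles of past confirmed fraud cases
# (columns could be, e.g., normalised benefit amount, household size, ...).
known_fraud_profiles = np.array([
    [0.8, 0.2, 0.5],
    [0.7, 0.3, 0.6],
])

# Hypothetical profiles of citizens to screen.
citizen_profiles = np.array([
    [0.79, 0.22, 0.52],   # very close to a known case
    [0.10, 0.90, 0.10],   # dissimilar
])

def cosine_similarity(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def risk_score(profile, fraud_profiles):
    """Score a profile by its highest similarity to any known fraud profile."""
    return max(cosine_similarity(profile, f) for f in fraud_profiles)

THRESHOLD = 0.99  # hypothetical cut-off above which a lead is generated

for i, profile in enumerate(citizen_profiles):
    score = risk_score(profile, known_fraud_profiles)
    print(f"citizen {i}: score={score:.3f} flagged={score >= THRESHOLD}")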

SyRI's pooling of citizen data, otherwise kept in separate silos, gave authorities wide-ranging powers and "has been exclusively targeted at neighborhoods with mostly low-income and minority residents," according to UN human rights and poverty rapporteur Philip Alston. 

"Through SyRI, entire poor neighborhoods and their inhabitants were targeted and spied on digitally, without any concrete suspicion of individual wrongdoing," the human rights advocate said

Critics of the system argued that the use of algorithms in this manner created suspects out of innocent people, resulting in a legal challenge brought forward by rights groups and the FNV trade union in 2018. 

The court accepted the argument that the system risked discriminatory practices, and it also raised concerns over the tool's insufficient privacy safeguards and lack of transparency.

Alston applauded the decision, commenting: "By applying universal human rights standards, this Dutch court is setting a standard that can be applied by courts elsewhere. The litigation and its outcome are likely to inspire activists in other countries to file similar legal challenges to address the risks of emerging digital welfare systems."

The Dutch state is able to appeal the ruling. 

Human Rights Watch deemed the decision a victory and said the order "has set an important precedent for protecting the rights of the poor in the age of automation."

"Governments that have relied on data analytics to police access to social security -- such as those in the US, the UK, and Australia -- should heed the court's warning about the human rights risks involved in treating social security beneficiaries as perpetual suspects," the civil rights group added. 
