A group of 51 digital rights organizations has called on the European Commission to impose a complete ban on the use of facial recognition technologies for mass surveillance – with no exceptions allowed.
Comprising activist groups from across the continent, such as Big Brother Watch UK, AlgorithmWatch and the European Digital Society, the coalition's call was coordinated by advocacy network European Digital Rights (EDRi) in the form of an open letter to the European Commissioner for Justice, Didier Reynders.
It comes just weeks before the Commission releases much-awaited new rules on the ethical use of artificial intelligence on the continent on 21 April.
The letter urges the Commissioner to support enhanced protection for fundamental human rights in the upcoming laws, in particular in relation to facial recognition and other biometric technologies, when these tools are used in public spaces to carry out mass surveillance.
According to the coalition, there are no examples where the use of facial recognition for the purpose of mass surveillance can justify the harm that it might cause to individuals' rights, such as the right to privacy, to data protection, to non-discrimination or to free expression.
The technology is often defended as a reasonable tool in some circumstances, such as monitoring the public for law enforcement purposes, but the signatories to the letter argue that a blanket ban should instead be imposed on all potential use cases.
"Wherever a biometric technology entails mass surveillance, we call for a ban on all uses and applications without exception," Ella Jakubowska, policy and campaigns officer at EDRi, tells ZDNet. "We think that any use that is indiscriminately or arbitrarily targeting people in a public space is always, and without question, going to infringe on fundamental rights. It's never going to meet the threshold of necessity and proportionality."
Based on evidence from within and beyond the EU, EDRi has concluded that the unfettered development of biometric technologies to snoop on citizens has severe consequences for human rights.
Examples range from facial recognition used for queue management in Rome and Brussels airports to German authorities deploying the technology to surveil G20 protesters in Hamburg. The European Commission also provided a €4.5 million ($5.3 million) grant to trial a technology dubbed iBorderCtrl at some European border crossings, which analyzed travelers' gestures to flag those who might be lying when trying to enter an EU country illegally.
The EU's vice-president for digital, Margrethe Vestager, has also said that using facial recognition tools to identify citizens automatically is at odds with the bloc's data protection regime, given that it doesn't meet one of the GDPR's key requirements: obtaining an individual's consent before processing their biometric data.
This won't be enough to stop the technology from interfering with human rights, according to EDRi. The GDPR leaves space for exemptions when "strictly necessary", which, coupled with poor enforcement of the rule of consent, has led to examples of facial recognition being used to the detriment of EU citizens, such as those uncovered by EDRi.
"We have evidence of the existing legal framework being misapplied and having enforcement problems. So, although commissioners seem to agree that in principle, these technologies should be banned by the GDPR, that ban doesn't exist in reality," says Jakubowska. "This is why we want the Commission to publish a more specific and clear prohibition, which builds on the existing prohibitions in general data protection law."
EDRi and the 51 organizations that have signed the open letter join a chorus of activist voices that have demanded similar action in the last few years.
Pressure is therefore mounting on the European Commission ahead of its publication of new rules on AI, which are expected to shape the EU's place in what is often described as a race against China and the US.
For Jakubowska, however, this is an opportunity to seize. "These technologies are not inevitable," she says. "We are at an important tipping point where we could actually prevent a lot of future harms and authoritarian technology practices before they go any further. We don't have to wait for huge and disruptive impacts on people's lives before we stop it. This is an incredible opportunity for civil society to interject, at a point where we can still change things."
As part of the open letter, EDRi has also urged the Commission to carefully review the other potentially dangerous applications of AI, and draw some red lines where necessary.
Among the potentially problematic use cases, the signatories flagged technologies that could impede access to healthcare, social security or justice; systems that make predictions about citizens' behaviors and thoughts; and algorithms capable of manipulating individuals, presenting a threat to human dignity, agency, and collective democracy.