
Facial recognition: Don't use it to snoop on how staff are feeling, says watchdog

The Council of Europe's new guidelines call for a ban on some applications of facial recognition, and stringent privacy safeguards when the technology is deployed.
Written by Daphne Leprince-Ringuet, Contributor

The Council of Europe has published new guidelines for the deployment of facial recognition technologies. 

Image: metamorworks, Getty Images/iStockphoto

Some applications of facial recognition that can lead to discrimination should be banned altogether, according to Europe's human rights watchdog, following months of deliberation on how best to regulate the technology. 

The Council of Europe has published new guidelines to be followed by governments and private companies that are considering the deployment of facial recognition technologies. For example, workplaces that use digital tools to gauge worker engagement based on facial expressions, or insurance companies that use the technology to determine customers' health or social status, would both be affected by the new guidelines. 

The watchdog effectively advises that where the technology is used exclusively to determine an individual's skin color, religious belief, sex, ethnic origin, age, health or social status, the use of facial recognition should be prohibited, unless it can be shown that its deployment is necessary and proportionate.  


Under the same conditions, the ban should also apply to some of the digital tools that can recognize emotions, detect personality traits or mental health conditions, and which can be used unfairly in hiring processes or to determine access to insurance and education. 

"At its best, facial recognition can be convenient, helping us to navigate obstacles in our everyday lives. At its worst, it threatens our essential human rights, including privacy, equal treatment and non-discrimination, empowering state authorities and others to monitor and control important aspects of our lives – often without our knowledge or consent," said Council of Europe Secretary General Marija Pejčinović Burić.  

"But this can be stopped. These guidelines ensure the protection of people's personal dignity, human rights and fundamental freedoms, including the security of their personal data." 

In addition to a ban on specific applications, the organization also set out rules to protect citizens' privacy in cases where facial recognition is deemed a suitable tool to use. 

For example, law enforcement agencies should adhere to strict parameters and criteria when they find it justifiable to use facial recognition tools; and where the technology is used covertly, it should only be allowed in order to prevent "an imminent and substantial risk to public security." The Council of Europe also called for a public debate on regulating the deployment of the technology in public places and in schools, where it argued that less intrusive mechanisms exist. 

Private companies should not be allowed to use facial recognition in environments like shopping centers, whether for marketing or private security purposes. Where they do deploy the technology, they must get explicit consent from those who will be affected and offer them an alternative solution. 

The Council of Europe's new guidelines build on Convention 108, which was opened for signature in 1981 and was at the time the first legally binding international instrument in the field of data protection. In 2018, the convention was modernized to adapt it to the digital age, becoming Convention 108+, and it now has 55 participating states. 

Despite the rewriting of the convention, experts have worried that European regulation is not suited to the age of AI and could lead to detrimental outcomes for citizens, especially in the case of potentially problematic technologies like facial recognition. 

Martin Ebers, the co-founder of the Robotics and AI Law Society (RAILS), told ZDNet: "We have regulatory frameworks that are not specifically tailored to AI systems, but are nevertheless applied to AI systems. For example, there are no specific rules at an EU level to deal with facial recognition systems." 


The last few years have seen repeated attempts from various European institutions and activists to impose stricter regulation on AI systems, and particularly on facial recognition tools. In a white paper on artificial intelligence published last year, the EU said it would consider banning the technology altogether; shortly afterwards, European Data Protection Supervisor Wojciech Wiewiórowski argued in favor of a moratorium on the use of facial recognition in public spaces. 

Although the guidelines are a set of reference measures rather than legally binding rules, the document provides the most extensive set of proposals so far for regulating facial recognition technology in Europe. 

Fanny Hidvégi, Europe Policy Manager at Brussels-based digital rights organization Access Now, told ZDNet: "We urge the Council of Europe to take the next step and support a ban for applications that are in inherent conflict with fundamental rights. No democratic debate, temporary pause or safeguards can mitigate individual and societal harms caused by such use of these technologies."
