Facial recognition critics are 'ill-informed' says police chief

In the age of Facebook and Twitter, worries about police use of live facial recognition are outweighed by the opportunity the technology provides to fight crime, says the Met Police chief.
Written by Daphne Leprince-Ringuet, Contributor

In the age of Twitter, Instagram or Facebook, concerns about police use of live facial recognition systems are overblown, especially if those systems will save us from being attacked in the street. 

That's the sentiment of Metropolitan Police Commissioner Cressida Dick, who in a speech this week made the case for the use of technology as a force for good to better fight crime. Focusing particularly on the benefits of facial recognition for tracking criminals, Dick dismissed critics of the technology as sometimes highly inaccurate or highly ill-informed.

"In an age of Twitter and Instagram and Facebook, concern about my image and that of my fellow law-abiding citizens passing through [live facial recognition] and not being stored, feels much, much smaller than my and the public's vital expectation to be kept safe from a knife through the chest," she said.

"I would say it is for critics to justify to the victims of those crimes why police should not be allowed to use tech lawfully and proportionately to catch criminals. The only people who benefit from us not using it lawfully and proportionately are the criminals, the rapists, the terrorists, and all those who want to harm you, your family and friends."


Police forces in the UK have been looking at deploying facial recognition systems for a number of years now. The technology was recently trialed by South Wales Police in Cardiff on the day of the Cardiff City versus Swansea City football derby – not without criticism from privacy campaign group Big Brother Watch, which staged a protest outside the stadium.

The Metropolitan Police also started deploying facial recognition cameras across London last month to help officers tackle serious crime by arresting wanted suspects. Civil liberties groups were quick to express their disapproval, denouncing a "dangerous, oppressive and completely unjustified" move.

Big Brother Watch director Silkie Carlo said: "This decision represents an enormous expansion of the surveillance state and a serious threat to civil liberties in the UK."

But in her speech this week, Dick reiterated the arguments put forward by the police force when the roll-out of facial recognition systems was announced in the UK capital. She maintained that the people registered on such systems' watch lists are only those wanted for serious crime; that the cameras do not store any biometric data; that the technology has no ethnic bias; and that the use of facial recognition is "very clearly signposted". 

According to the Met police chief, the deployment of live facial recognition has already resulted in the arrest of eight individuals who were wanted for having "caused harm" and whom police would have been "very unlikely to identify" without the technology.

Dick also said that the deployment of new technology in the police force would always be complementary to human intelligence. In the case of facial recognition, for example, she stressed that human officers will always make the final decision on whether or not to intervene. "The Met aims to be the most trusted police service in the world," she said.

The Metropolitan Police chief's speech came after a report published this month by the UK's Committee on Standards in Public Life, which found a lack of proper testing and scrutiny within police forces when deploying new technology.

Led by former head of MI5 Lord Evans, the committee's report noted that there is no clear process for evaluating, procuring or deploying new technologies, such as facial recognition, for law enforcement. It found that it is often up to the individual police force to make up their own ethical frameworks, often with "mixed results".


The issue does not only concern facial recognition, but other technologies too. In 2017, for example, the UK police in Durham started using an algorithm dubbed the Harm Assessment Risk Tool (HART) to help officers make custody decisions. The system was designed to work out whether suspects were at low, moderate or high risk of re-offending.

Among the data used by HART were suspects' age, gender and postcode. Since geographical information has the potential to reflect particular communities, the initiative immediately drew criticism from privacy campaigners like Big Brother Watch, who said the technology was "crude" and "offensive".

Lord Evans called for stricter regulations and guidelines on the use of technology in public services like policing. The Met's Cressida Dick said the recommendations had been "very helpful" and agreed with the need for the government to draw up better frameworks. "We are a law enforcement organization, it is our duty to uphold the law. Give us the law and we'll work within it," she said.

Ultimately, however, Dick unequivocally defended the need for the police to make the most of new technologies, regardless of critics. She argued that half of crime is now committed online, and that the police force needs the tools to match the changing nature of threats. "Criminals make powerful use of the digital world," said Dick. "Obviously the police should use cutting edge tech too as a force for good."
