Facial recognition: Convenient or creepy?

Facial recognition technology may still be in its infancy but it's being rolled out in full force by some jurisdictions around the world -- whether we're ready or not.

When Queensland Police (QPS) rolled out facial recognition for the Gold Coast Commonwealth Games, it was intended to keep an eye out for specific high-priority targets; instead, the system proved so unsophisticated that it ended up being used for general policing.

Queensland Privacy Commissioner Philip Green explained that because federal legislation was not passed in time for the Gold Coast Commonwealth Games, QPS was unable to ingest data from other databases, which limited the surveillance system to one-to-one matching.

"The main impediment to Queensland using that system was the federal government. But police did use the limited image database that they had and perhaps that gave an impression that they were more capable than they actually were. I don't think it was malintent. I think it suited the public perception that more was happening than it was," he told ZDNet. 

Brian Lovell, professor at the University of Queensland School of Information Technology and Electrical Engineering, described the exercise as a "complete failure". 

"At face value it looked like the [QPS] were doing something, but they put no effort into the deployment. To do this properly, you really have to decide where you put your cameras, change locations of your cameras, and have a good back end infrastructure, so it's quite a serious project and if you don't put that in you get a system that's totally unusable," he told ZDNet.  

"It can be done right, but most people want to do it on the cheap."

In QPS's defense, rolling out facial recognition software during the Commonwealth Games was "representative of the Queensland Police Service's early adoption of new and emerging technology", a QPS spokesperson said.

The QPS added that it continues to explore the potential use of facial recognition. 

"The QPS maintains the skills and qualifications to undertake this work, with ongoing access to one of the two databases that were used during the Commonwealth Games," the spokesperson said.

"Facial recognition is an emerging technology and the value of this technology by the QPS into the future is the subject of ongoing review and consideration as the technology develops."

But cases like this one are not uncommon. It is one of several examples around the world that raise questions about the Orwellian-style scope creep and invasion of privacy that facial recognition technology makes possible.

Other questions that facial recognition technology raises include whether existing versions of the technology are even sophisticated enough to be trialled, let alone rolled out. Do we need to refresh our policy frameworks around privacy to reflect such technologies? Should we be more wary about who has access to the technology, the data it uses and collects, and how it is being used?

Questions like these are not overreaching or unreasonable, says Electronic Frontiers Australia CEO Lindsey Jackson, who has warned that while the technology is experimental and attractive, its use has not been fully thought through, particularly when surveillance is directed at minorities or vulnerable people.

"Facial recognition may be hot tech right now, but it can turn bad really quickly. I just don't think we have had a mature debate around this at the moment," she told ZDNet.

She points to how authorities in Xinjiang, a border region in northwest China, have been using facial recognition technology, as an example. It's understood that since early 2017, facial recognition technology developed by state-owned China Electronics Technology Group Corporation (CETC) has been used to track and detain Muslim Uyghur citizens in the region.

This claim about the Chinese government was raised by Human Rights Watch when it wrote [PDF] to CETC's chairman in February 2019, asking whether the company was aware of how its technology was being used by Xinjiang authorities, including to identify certain "problematic" individuals.

"Critically what we see in China, we also have these sorts of issues over here," Jackson warned.

"In Australia, privacy isn't protected as a fundamental right for individuals, and we've started to see a range of surveillance technologies and techniques that have started to affect Australians.

"It's concerning how these technologies are being used here in other countries, but by no means is Australia clean and pure in its adoption of facial recognition."

Read more: The future of edge computing and facial recognition (TechRepublic)

Now caught up in this fiasco are two Australian universities: the University of Technology Sydney (UTS) and Curtin University.

A report by ABC's Four Corners revealed a AU$10 million, five-year partnership between UTS and CETC.

As a result, UTS is now investigating its partnership with CETC but has confirmed that none of the research output from the partnership has been used by the Chinese company to capture the movements of Uyghur citizens. 

"Nonetheless, UTS will consider possible future applications as part of its review. UTS at this stage, has no plans for new work with CETC and will assess the current contractual agreements in light of the review," the university said.

A similar response was provided by Curtin University, with vice-chancellor Deborah Terry saying in a statement that "Curtin University has been shown no evidence that the published research has been used by the Chinese government".

Terry added the University "unequivocally condemns the use of artificial intelligence, including facial recognition technology, for any form of ethnic profiling to negatively impact or persecute any person or group" and "strongly support initiatives to regulate the use of artificial intelligence, including its application to facial recognition, to protect the right to privacy and maintain ethical standards".

Digital Rights Watch board member Lily Ryan has also criticised the technology, which has proven to generate false positives, complications, and other errors.

"There is a severe lack of strong oversight mechanisms and general enforcement for human rights and civil liberties in this country, which results in the public being understandably wary about giving the government more information in the first place," she said.

"When individuals enter into an agreement with a government agency that includes their personal information, they should have the right to understand, be informed and have a say in where that information is held and what is being used for."

"Above all else, they should be given a genuine, informed choice about whether they give away their privacy rights -- not coerced into handing it over in return for government services."

Backing away, slowly

The potential risk around this technology has been recognised by a number of jurisdictions in the US: San Francisco and Oakland in California, and Somerville, Massachusetts, have banned city departments, including the police, from using facial recognition technology.

More recently, Orlando City Council made the decision to scrap any further trials of Amazon's real-time facial recognition technology, called Rekognition, soon after it completed the second phase of its two-phase pilot program.

During the trial, the technology was plagued by technical lags, bandwidth problems, and uncertainty over whether the face-scanning technology actually worked, the Orlando Weekly reported.

Concerns over Amazon's technology had also been raised by more than three dozen civil rights organisations, led by the American Civil Liberties Union (ACLU), which demanded in May last year that Amazon stop providing government agencies with facial recognition technology.

The company should "take Rekognition off the table for governments," said the letter to Amazon CEO Jeff Bezos, signed by groups including the Electronic Frontier Foundation, Human Rights Watch, and Data for Black Lives. 

"People should be free to walk down the street without being watched by the government. Facial recognition in American communities threatens this freedom."

A similar call was made by Amazon employees, who wrote in a separate letter to Bezos asking him to cancel sales of Amazon's Rekognition facial-recognition software to police, and more broadly to take an ethical stand on how its technology is used.

NEC Australia, a company that has long been in the biometric technology game, says it only cooperates with governments or private entities that adhere to privacy legislation and act ethically.

"The trade off in information and data gathering needs to be balanced by social data creation. The misuse of personal data that could infringe on an individual's right to privacy without a legitimate reason (and where there is no criminal investigation or security concern) is not something NEC would condone or support," a company spokesperson told ZDNet.

Australia's move towards facial recognition

Despite growing calls from human rights advocates, privacy champions, and industry condemning the way facial recognition technology is used and its interference with individual privacy, the Australian government continues to push ahead with its proposed Identity-matching Services Bill 2018.

If passed, the legislation would give legal authority for the Department of Home Affairs to collect, use, and disclose identifiable information to operate a central hub of a facial recognition system that would link up identity-matching systems between government agencies.

The Australia-wide initiative would give state and territory law enforcement agencies access to the country's new face-matching services, allowing them to look up passport, visa, citizenship, and driver's licence images from other jurisdictions.

The initiative would comprise two parts: The Face Verification Service (FVS) is a one-to-one, image-based verification service that can match a person's photo against an image on one of their government records, while the Face Identification Service (FIS) is a one-to-many, image-based identification service that can match a photo of an unknown person against multiple government records to help establish their identity.
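The practical difference between the two services is one-to-one verification against a single claimed record versus a one-to-many search across a whole gallery. A minimal sketch of that distinction, assuming embedding-based face matching (the threshold, record names, and embeddings here are illustrative, not drawn from the actual government scheme):

```python
import math

def cosine_similarity(a, b):
    # Score two face embeddings; vectors pointing the same way score near 1.0.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.8):
    """One-to-one (FVS-style): does the probe match this single claimed record?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.8):
    """One-to-many (FIS-style): search every record for the best match above threshold."""
    best_id, best_score = None, threshold
    for record_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id  # None means no record matched confidently
```

In a real deployment the embeddings would come from a trained face-recognition model; the point of the sketch is that verification answers "is this person who they claim to be?", while identification asks the far more invasive question "who is this person?".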

Setting the boundaries

Introducing facial recognition technology may not be as bad as it seems, however, Lovell said.

Lovell pointed to how people are unfazed by the use of facial recognition technology at Australia's international airports, for instance.

"It's not a big problem at border control, and there's not a lot of concern around that. For most of us, it's a huge advantage to use facial recognition at borders because it's much faster," he said.

"People are worrying about things that don't exist at the moment. They're all that potentially could happen and not so much that we're familiar with."

He explained how people naturally would accept public surveillance if they personally benefitted from it in some way, whether that means gaining from the convenience of it or earning money.

"The trouble with public surveillance today is people think they're getting nothing for it, except an invasion of my privacy," he said.

Read more: Second-gen facial recognition tech aims to improve biometric security (TechRepublic)

At the same time, facial recognition could work if everyone complied with the implicit contract that exists with surveillance, Lovell explained.

"If you use surveillance and say it's for the purpose of public safety, but then you start using what has been captured by the cameras for another purpose that's where you have to be careful. It's like saying videos can be used for many things, but the question is are you using it for something where there's no consent."

NEC Australia emphasised that de-identifying, encrypting, and securely storing data could help protect individuals' right to privacy when facial recognition technology is in use. The company also stressed that unnecessary biometric templates should not be collected or stored.
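De-identification in practice often means storing templates against keyed pseudonyms rather than direct identifiers, so that a leaked store cannot be linked back to individuals without a separately held key. A minimal sketch using Python's standard library (the record names and storage layout are hypothetical, not NEC's actual design):

```python
import hashlib
import hmac
import secrets

# Secret pseudonymisation key, held separately from the template store.
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonymise(identity: str) -> str:
    """Replace a direct identifier (e.g. a licence number) with a keyed hash.

    Without PSEUDONYM_KEY, stored records can't be linked back to a person,
    but the same identity always maps to the same pseudonym for lookups."""
    return hmac.new(PSEUDONYM_KEY, identity.encode(), hashlib.sha256).hexdigest()

# Templates are stored (and would additionally be encrypted at rest)
# under pseudonyms rather than real identities.
template_store = {pseudonymise("licence:12345678"): b"<encrypted biometric template>"}
```

The keyed hash covers only the pseudonymisation step NEC describes; encrypting the templates themselves would use a vetted cryptography library rather than anything hand-rolled.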

However, EFA's Lindsey Jackson said that as long as a formal policy framework for facial recognition is not in place, there would always be an opportunity for government and private industry to exploit the technology to their own advantage -- or worse, use it in more nefarious ways.

"In the absence of any laws, regulations or rights-based protection, there's nothing for Australian people to push back on. There are no boundaries when it comes to this; it's people's faces," she said.

"There's no informed consent; there's no opt-in, there's no opt-out. We could get a lot further down the track before the public and government start having strong and critical conversations about where this technology is going." 

Related Coverage

NEC Australia takes ACIC to court over biometric identification project

The ACIC cancelled the delivery of the biometric identification services project in June 2018, after the system had been substantially built and testing was already underway.

Visa: Payment industry will soon drop passwords for biometrics

Advancements in authentication and anti-fraud technologies are already making "static" cardholder verification (CVM) methods optional.

NSW looks to biometric payments and Netflix-like public transport subscriptions

Paying for transport like shopping at Amazon Go and the 'death' of the timetable as 'mobility-as-a-service' becomes the new way to travel in Sydney.

Biometrics, CDR, broadband tax: All the Bills Canberra wants to reheat in 2019

The federal government hopes to have its identity-matching framework, the Consumer Data Right, and a handful of other legislation passed by the end of the year.