Fawkes protects your identity from facial recognition systems, pixel by pixel

Changes to photos that are undetectable to the naked eye can still prevent deep learning systems from making a match.
Written by Charlie Osborne, Contributing Writer

A new tool has been proposed for cloaking our identities in photos posted online, with the aim of preventing profiling through facial recognition systems.

Deep learning tools and facial recognition software have now permeated our daily lives. From surveillance cameras equipped with facial trackers to photo-tagging suggestions on social media, the use of these technologies is now common -- and often controversial. 

A number of US states and the EU are considering banning facial recognition cameras in public spaces. IBM has already exited the business, on the grounds that the technology could end up enforcing racial bias. Amazon and Microsoft, too, have said they will stop providing facial recognition tools to law enforcement. 

UK and Australian regulators are also probing facial recognition firm Clearview AI over its scraping of images from social media platforms to build substantial profiles of people without their consent. 

Scraping images and training a neural network to find matches could lead to "highly accurate facial recognition models of individuals without their knowledge," say University of Chicago academics, who have now published a paper on a tool proposed as a means to foil these systems. 

In a paper (.PDF) due to be presented at the USENIX Security 2020 symposium, researchers Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, and Ben Zhao introduce "Fawkes," software designed to "help individuals inoculate their images against unauthorized facial recognition models."

In what could be considered the introduction of garbage data into the images we share online, Fawkes works at the pixel level to add imperceptible "cloaks" to photos before they are uploaded to the Internet. 

Invisible to the naked eye, these tiny changes do not noticeably alter how an image looks to human viewers, yet they are enough to cause the deep learning systems and image scrapers that ingest them to build inaccurate facial models. 
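To make the idea concrete, here is a minimal sketch -- not the researchers' algorithm -- of what a bounded, pixel-level change looks like in code. The `add_bounded_cloak` function, the NumPy approach, and the `epsilon` budget are illustrative assumptions only; Fawkes optimizes its perturbations against facial recognition models rather than sampling them randomly.

```python
import numpy as np

def add_bounded_cloak(image: np.ndarray, epsilon: float = 3.0,
                      seed: int = 0) -> np.ndarray:
    """Add a tiny, bounded perturbation to an 8-bit RGB image.

    A toy stand-in for a cloak: the imperceptibility constraint is
    the key idea -- every pixel changes by at most `epsilon` out of
    255, far below what the eye notices in a natural photo.
    """
    rng = np.random.default_rng(seed)
    perturbation = rng.uniform(-epsilon, epsilon, size=image.shape)
    cloaked = np.clip(image.astype(np.float64) + perturbation, 0, 255)
    return cloaked.astype(np.uint8)

# Example: the maximum per-pixel change stays within the budget.
photo = np.random.default_rng(1).integers(0, 256, (224, 224, 3), dtype=np.uint8)
cloaked = add_bounded_cloak(photo)
print(int(np.abs(cloaked.astype(int) - photo.astype(int)).max()))  # <= 3
```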

"As Clearview AI demonstrated, anyone can canvas the Internet for data and train highly accurate facial recognition models of individuals without their knowledge," the researchers say. "We need tools to protect ourselves from potential misuses of unauthorized facial recognition systems."

The Fawkes system is a form of data poisoning. The aim is to post photos which, once scraped by a machine learning service, teach the model the wrong features, misdirecting it about what makes a subject unique. 
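The researchers frame this in terms of "feature space": the cloak nudges how a model represents a face toward the features of a different identity, so anything trained on the cloaked photos learns the wrong face. The sketch below is a heavily simplified, hypothetical illustration of that optimization; the tiny untrained network, the `epsilon` pixel budget, and the hyperparameters are stand-ins, whereas the real tool uses trained face recognition feature extractors and a perceptual-similarity constraint.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a trained face recognition backbone.
class TinyExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

def compute_cloak(image, target_feat, extractor, epsilon=0.03,
                  steps=50, lr=0.01):
    """Optimize a bounded perturbation so the image's features move
    toward `target_feat` (another identity) -- the data-poisoning idea,
    greatly simplified. Pixel changes never exceed `epsilon`."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        feat = extractor(torch.clamp(image + delta, 0, 1))
        loss = ((feat - target_feat) ** 2).mean()  # pull toward target
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():          # keep the cloak imperceptible
            delta.clamp_(-epsilon, epsilon)
    return torch.clamp(image + delta.detach(), 0, 1)

extractor = TinyExtractor().eval()
for p in extractor.parameters():
    p.requires_grad_(False)

photo = torch.rand(1, 3, 64, 64)              # the user's image
other = torch.rand(1, 3, 64, 64)              # an image of someone else
cloaked = compute_cloak(photo, extractor(other), extractor)
print(float((cloaked - photo).abs().max()))   # <= 0.03
```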

During experiments, Fawkes provided high levels of protection against facial recognition models, the team said, regardless of how the models were trained. In addition, even in scenarios where 'clean' images have already been made available to image scrapers, processing an image with Fawkes results in a misidentification rate of at least 80%. 

In real-world tests against the Microsoft Azure Face API, Amazon Rekognition, and Face++, the system appears to be successful in preventing users from being identified. 

Fawkes worked in 100% of cases against the Azure Face training endpoint and 34% of the time against Amazon Rekognition's similarity score system -- rising to 100% when more robust cloaking was put in place. When set against Face++, the original success rate was 0%, but this, too, rose to 100% once strengthened cloaking was introduced. 

In practice, many of us already have countless images of ourselves online, and so Fawkes would likely only serve as an accessory to privacy. It is also worth noting that for every pushback against facial recognition, the technology can become smarter and overcome it over time -- and so tools like Fawkes would need to stay ahead of the curve to remain useful. 

"Fawkes is most effective when used in conjunction with other privacy-enhancing steps that minimize the online availability of a user's uncloaked images," the researchers say. "The online curation of personal images is a challenging problem, and we leave the study of minimizing online image footprints to future work." 
