
Civil rights groups launch effort to stop IRS use of 'flawed' ID.me facial recognition

Multiple groups are urging the IRS to stop its use of facial recognition software.
Written by Jonathan Greig, Contributor

Outrage continues to swirl around a proposed plan from the Treasury Department to require some taxpayers to submit to facial recognition and biometric surveillance in order to access their accounts online. The proposal faced further scrutiny after it was revealed the IRS planned to involve controversial facial recognition company ID.me in the effort. 

Fight for the Future, Algorithmic Justice League, EPIC, and other civil rights organizations launched a website -- called Dump ID.me -- allowing people to sign a petition against the IRS plan. 

This campaign site comes after days of criticism from privacy, justice, and civil rights groups concerned about the potential for a company like ID.me to have access to people's most sensitive data. 

ID.me CEO Blake Hall faced widespread backlash for a LinkedIn post in which he admitted that the company had been lying about the way its tool works. The company initially claimed it only runs 1:1 matches, but Hall revealed that it also runs some 1:many matches that compare people's images to a massive database, a development first reported by CyberScoop's Tonya Riley.  

Caitlin Seeley George, campaign director at Fight for the Future, said the plan to use facial recognition on taxpayers was bad from the start, and it only got worse as more information was revealed.

"Part of why we launched this effort is because we think it's critical that the IRS hears public concerns about this issue. There's already been a swift outcry from civil rights organizations and experts, but people broadly understand that they should not have to hand over their biometrics in order to access their IRS information (or at all, really)," George told ZDNet

"ID.me is a particularly troubling tool, especially with the revelation that they have been publicly lying about how it works and the types of verification it does," George added. "But all facial recognition tools will cause a lot of the same issues: they will amass a database of peoples' most sensitive information that can be shared with other agencies and law enforcement, and also will be a target for hackers. No government agency should be using facial recognition or other biometrics to verify identity."

Late last week, Bloomberg reported that the Treasury Department is now considering other vendors for the facial recognition project, but the outrage over the situation has sparked further concern about the widespread use of facial recognition across the federal government as well as state governments. 

Fast Company reported that ID.me is now used by the Department of Veterans Affairs, the Social Security Administration, and several other federal agencies. 

Jay Stanley, senior policy analyst at the ACLU, told ZDNet that dozens of agencies across the country require people to use facial recognition in order to access government benefits. The IRS began forcing some people to use ID.me in order to access the expanded child tax credits that were part of President Joe Biden's American Rescue Plan. 

Facial recognition raises a range of issues, most notably that it has repeatedly been shown to be less accurate on the faces of Black and brown people, as well as women. Artificial intelligence researchers Inioluwa Deborah Raji and Dr. Joy Buolamwini released a study in 2019 showing that Amazon's facial recognition software made more mistakes when identifying the faces of Black people, particularly Black women.  

Also: Backlash to retail use of facial recognition grows after Michigan teen unfairly kicked out of skating rink

Stanley added that the use of facial recognition by government agencies creates a number of accessibility issues for people, noting that some state agencies use it to vet unemployment insurance recipients. It requires strong internet connections -- something many people don't have -- and puts an undue burden on people attempting to access benefits Congress has deemed them eligible for, according to Stanley. 

"Its just not right to use a technology with those kinds of biases for such a public purpose. This kind of core government functions shouldn't be done by a private company," he said, adding that ID.me would not be subject privacy laws and certain checks and balances, despite carrying out an essential government function. 

Many states are using facial recognition for government services with funding from the federal government, and Stanley said strings need to be attached to that money to ensure the algorithms aren't biased. 

Aubrey Turner, executive advisor at Ping Identity, was critical of the outrage directed toward the IRS effort. Turner acknowledged the privacy and demographic bias concerns raised by watchdogs but said everyone's images are already captured by traffic cameras, airport security cameras, and social media accounts. 

"Not going so far as to call it fake outrage, but let's be pragmatic for a moment. Overall, I think it's a good idea for the IRS to include modern identity proofing as part of the account registration/access process. Known as document-centric identity proofing within the IAM industry, the process of uploading the document (e.g., drivers license) and taking a selfie (capturing biometric data) is to attain a desired level of confidence the taxpayer user is who they claim to be while mitigating counterfeiting/forgeries. Notwithstanding the security aspect, there is also a user convenience component. This same proofing process can also be a means to reducing and removing passwords for the account enrollment process, which also has positive user experience upside," Turner said. 

"Facial recognition can certainly be creepy if used inappropriately in marketing with social media apps. But it can also be tremendously convenient clearing airport security or unlocking your smartphone. The realities of today's cyber threats means we have to find innovative and dynamic ways to prevent things like account takeover," Turner added.

"Technological innovation is accelerating faster than at any point in modern history, and there will always be misalignment between tech and regulations. There are emerging use cases and things we have yet to conceive that will certainly challenge our notions of the balance of privacy, security, and convenience. But the bottom line is that we can't let perfect get in the way of progress," Turner said.

He noted that a debate should be had about whether the government should have built the system itself, but said "private enterprises often innovate to close gaps."

Also: Facebook is shutting down its facial recognition system

Regardless of the outrage, more businesses will be leveraging identity proofing processes that utilize biometric and behavioral data, Turner added. 

"I think not using these more secure methods [is] worse than the alternatives. This is the first in an oncoming wave, so the government should be fostering this innovation and not putting up roadblocks," Turner said. 

"I think there are legitimate privacy concerns with facial recognition and biometric data that shouldn't be ignored. The time for digital identity proofing has come (will only continue to grow in government and private sectors), so we should embrace it versus being outraged without practical alternatives to the realities of today's cyber security challenges."

Buolamwini -- who has become an advocate against facial recognition since publishing her study -- released a letter to the Biden Administration last week in which she said the real-world impact on marginalized communities will "likely get worse because of the unchecked proliferation of facial recognition technologies generally." 

"These technologies are being deployed at an unprecedented rate across state and federal agencies. They are imposed on the public without sufficient public scrutiny, debate, or oversight, causing harm to the populous generally," Buolamwini said. 

"No biometric technologies should be adopted by the government to police access to services or benefits, certainly not without cautious consideration of the dangers they pose, due diligence in outside testing, and the consent of those exposed to potential abuse, data exploitation, and other harms that affect us all."
