Biometric ID tech 'inherently fallible,' report says

Biometric identification systems -- fingerprint, handprint, voice or face recognition -- are "inherently fallible," according to a new DARPA-funded report.
Written by Andrew Nusca, Contributor

Biometric identification systems -- which are designed to recognize individuals based on traits such as fingerprints, palm prints, voices, or faces -- are "inherently fallible," according to a new report.

The report by the National Research Council finds that no single trait is "stable and distinctive" across all demographic groups, suggesting that biometrics are not as secure as common perception may dictate.

"For nearly 50 years, the promise of biometrics has outpaced the application of the technology," said Joseph Pato, chair of the committee that wrote the report as well as a technologist for Hewlett-Packard's HP Labs, in a statement. "While some biometric systems can be effective for specific tasks, they are not nearly as infallible as their depiction in popular culture might suggest."

Biometric systems are used to regulate access to facilities, information and other rights or benefits. The technology is used in everything from military facilities to the fingerprint reader on your company-issued laptop computer.

But how secure are they, really? The systems provide only "probabilistic results" -- that is, confidence in any match must account for the inherent uncertainty in the system.

Take a system in which a true breach of security is rare -- say, the average white collar office. Even with accurate sensors and matching capabilities, the system can still produce a high rate of false alarms, simply because genuine breaches are far rarer than benign mismatches. That means the operators of the system begin to put less stock in its alarms, weakening security and putting it at risk when a real threat comes along.
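This is the classic base-rate problem, and a quick Bayes' rule calculation makes it concrete. The numbers below are illustrative assumptions, not figures from the report: even a matcher that is 99% accurate in both directions produces mostly false alarms when genuine intrusions are rare.

```python
def p_breach_given_alarm(prior, tpr, fpr):
    """Bayes' rule: probability that an alarm reflects a genuine breach.

    prior -- base rate of genuine intrusion attempts (assumed)
    tpr   -- true positive rate of the matcher (assumed)
    fpr   -- false positive rate of the matcher (assumed)
    """
    # Overall alarm rate: true alarms plus false alarms.
    alarm_rate = prior * tpr + (1 - prior) * fpr
    return prior * tpr / alarm_rate

# Hypothetical numbers: 1 in 10,000 access attempts is an intruder,
# and the sensor is 99% accurate both ways.
posterior = p_breach_given_alarm(prior=0.0001, tpr=0.99, fpr=0.01)
print(f"{posterior:.1%}")  # roughly 1% -- about 99 of 100 alarms are false
```

With these assumed rates, only about one alarm in a hundred corresponds to a real intruder, which is exactly why operators learn to discount the system.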

And those false alarms are dynamic, too: biometric factors such as voice recognition can change over time, for reasons such as age, stress or illness.

In other words: there are too many variables to accurately calibrate a biometric system, so it's not wise to put faith in them to securely lock down valuable facilities or information. Moreover, a person's biometric traits are public -- hardly secure enough to be a primary security system.

So what's the answer? The committee suggests that biometric science needs reinforcement, in the form of additional research at all levels of design and operation.

That means that biometric systems should be used more carefully and in the right context -- even if it's just one component of an overall security system, according to the report.

The big takeaways:

  • Biometric identification systems are "inherently fallible." "The chance of error can be made small but not eliminated," according to the authors.
  • The science needs strengthening, especially with regard to how biometric markers are distributed among different population groups and how people interact with the tech in the first place.
  • Biometric security requires broad, systems-level considerations.
  • Biometric systems must be evaluated in context. Context is just as important as the technology itself.
  • More peer-reviewed studies must be done on the performance of recognition systems.

The committee that authored the report included researchers from MIT, Carnegie Mellon, Georgetown, Michigan State and San Jose State University, as well as Disney, IBM, Gartner and the Cleveland Clinic.

The report was funded by the Pentagon's Defense Advanced Research Projects Agency (DARPA), the National Science Foundation, the Central Intelligence Agency (CIA) and the Department of Homeland Security.

This post was originally published on Smartplanet.com
