Data breach exposes Clearview AI client list

Most of the company's clients are law enforcement.
Written by Asha Barbaschow, Contributor

A facial recognition startup, revealed earlier this year to be in use by hundreds of law enforcement agencies in the United States, has suffered a data breach.

A statement from Clearview AI's attorney Tor Ekeland said security was the company's top priority.

As first reported by The Daily Beast, the data accessed includes Clearview's customer list, the number of accounts each customer has, and the number of searches those customers have made.

The information was disclosed in a notification the company sent to its customers.


"Security is Clearview's top priority," Ekeland said. "Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw and continue to work to strengthen our security."

As detailed in The New York Times report, the company has a database of 3 billion photos that it collected from the internet, including websites like YouTube, Facebook, Venmo and LinkedIn. 

As reported by sister site CNET, tech giants like Google, Facebook, and Microsoft have sent Clearview AI cease-and-desist letters for scraping images hosted on their platforms.   

Earlier this year, concerns were raised that the Clearview application could allow a stranger to take a photograph of an individual and match it in the company's database to a name, address, and other personal information. In response, the company said its search engine could only be accessed by law enforcement agencies and select security professionals as an investigative tool.

"Nonetheless, we recognise that powerful tools always have the potential to be abused, regardless of who is using them, and we take the threat very seriously," the company wrote. "Accordingly, the Clearview app has built-in safeguards to ensure these trained professionals only use it for its intended purpose: To help identify the perpetrators and victims of crimes."

This followed Clearview saying its app was not available to the public, and that "it is gratifying to know that Clearview has been able to help law enforcement officials make communities safer and, most importantly, protect children".

Clearview AI, founded by Australian entrepreneur Hoan Ton-That, also faced criticism from Australian Privacy Commissioner Angelene Falk who had made inquiries to determine if the data of Australians had been collected.

