
OAIC determines AFP interfered with privacy of Australians after using Clearview AI

An investigation into the Australian Federal Police has found it has taken no steps to improve its privacy practices since using Clearview AI's facial recognition tool without the backing of an appropriate legislative framework.
Written by Campbell Kwan, Contributor

An investigation by the Office of the Australian Information Commissioner (OAIC) has found that the Australian Federal Police's (AFP) use of the Clearview AI platform interfered with the privacy of Australian citizens.

Clearview AI's facial recognition tool is known for breaching privacy laws on numerous fronts by indiscriminately scraping biometric information from the web and collecting data on at least 3 billion people, many of them Australians.

From November 2019 to January 2020, 10 members of the AFP's Australian Centre to Counter Child Exploitation (ACCCE) used the Clearview AI platform to conduct searches of certain individuals residing in Australia. 

ACCCE members used the platform to search for scraped images of possible persons of interest, an alleged offender, victims, members of the public, and members of the AFP, the OAIC said.

While the AFP only used the Clearview AI platform on a trial basis, Information and Privacy Commissioner Angelene Falk determined [PDF] the federal police failed to undertake a privacy impact assessment of the Clearview AI platform, despite it being a high privacy risk project.

By failing to do so, the OAIC said the AFP breached the Australian Government Agencies Privacy Code.

It added that the AFP also did not take reasonable steps to implement practices, procedures, and systems to ensure its use of the Clearview AI platform complied with the Australian Privacy Principles.


The AFP submitted that it did not undertake a privacy impact assessment as its use of the Clearview AI platform was only a "limited trial".

When investigating this decision, however, the OAIC said the AFP failed to provide any evidence that a project manager or trial participant conducted a threshold assessment to determine whether a privacy impact assessment was required. A threshold assessment is a preliminary assessment used to determine a project's potential privacy impacts and whether a privacy impact assessment should be undertaken.

Worryingly, the OAIC's investigation also found that the AFP has given no indication that it has taken, or would take, steps to prevent similar breaches from occurring in the future. This is despite the AFP having already admitted in April last year that it trialled the Clearview AI platform without an appropriate legislative framework in place.

"Without a more coordinated approach to identifying high privacy risk projects and improvements to staff privacy training, there is a risk of similar contraventions of the Privacy Act occurring in the future," the OAIC wrote in its determination.

"This is particularly the case given the increasing accessibility and capabilities of facial recognition service providers and other new and emerging high privacy impact technologies that could support investigations."

In light of these privacy breaches, the OAIC has ordered the AFP to engage an independent third-party assessor to review its practices, procedures, and systems and write a report about any changes that the AFP must make to ensure its compliance with the Australian Government Agencies Privacy Code.

The report on the gaps in the AFP's privacy practices must be completed within the next six months, and the AFP must also provide the OAIC with a timeline for implementing any actions set out in the report.

The OAIC has also ordered that all AFP personnel who handle personal information complete an updated privacy training program within the next 12 months.
