China has closed a record number of cases involving personal data breaches and is seeking public feedback on draft rules to regulate the use of facial recognition data.
In the last three years, the Chinese police closed 36,000 cases related to personal data infringements, detaining 64,000 suspects along the way, according to the Ministry of Public Security. The arrests were part of the government's efforts since 2020 to regulate the internet, which also saw more than 30 million SIM cards and 300 million "illegal" internet accounts seized, reported state-owned media Global Times, citing the ministry in a media briefing Thursday.
Police have been investigating a growing number of criminal cases involving personal data violations over the past couple of years, spanning several industries including healthcare, education, logistics, and e-commerce.
Reported criminal cases involving artificial intelligence (AI) have also been increasing, the ministry said, citing an April 2023 incident in which a company in Fujian province lost 4.3 million yuan ($596,510) to hackers who used AI to alter their faces.
To date, law enforcement agencies have solved 79 cases involving "AI face changing."
With facial recognition now widely used alongside advances in AI technology, government officials noted the emergence of cases exploiting such data. In these instances, cybercriminals use photos, in particular those found on identity cards, together with names and ID numbers to fraudulently pass facial recognition verification.
China's public security departments are working with state facilities to conduct safety assessments of facial recognition and other relevant technology, as well as to identify potential risks in facial recognition verification systems, according to the ministry.
Because cybercriminal ecosystems are largely interlinked, spanning data theft, the resale of stolen data, and money laundering, Chinese government officials said these criminals have established a significant "underground big data" market that poses serious risks to personal data and "social order".
Proposed nationwide laws to regulate facial recognition
The Cyberspace Administration of China (CAC) earlier this week published draft rules dealing specifically with facial recognition technology, marking the first time nationwide regulations have been proposed for the technology, according to Global Times.
The proposed rules will require organizations to obtain "explicit or written" user consent before collecting and using personal facial data. Businesses also must state the purpose and scope of the data they collect, and use the data only for that stated purpose.
Without user consent, no person or organization is allowed to use facial recognition technology to analyze sensitive personal data, such as ethnicity, religious beliefs, race, and health status. There are exceptions for use without consent, primarily for maintaining national security and public safety as well as safeguarding the health and property of individuals in emergencies.
Organizations that use the technology must have data protection measures in place to prevent unauthorized access or data leaks, stated the CAC document.
The draft laws further indicate that any person or organization that retains more than 10,000 facial recognition datasets must notify the relevant cyber government authorities within 30 working days.
In January, China put into effect regulations that aimed to prevent the abuse of "deep synthesis" technology, including deepfakes and virtual reality. Anyone using these services must label the images accordingly and refrain from tapping the technology for activities that breach local regulations.
Interim laws also will kick in next week to manage generative AI services in the country. These regulations outline various measures that look to facilitate the sound development of the technology while protecting national and public interests and the legal rights of citizens and businesses, the Chinese government said.
Generative AI developers, for instance, will have to ensure their pre-training and model optimization processes comply with the law. This includes using data from legitimate sources that respect intellectual property rights. If personal data is used, the individual's consent must be obtained or the use must otherwise comply with existing regulations. Measures also must be taken to improve the quality of training data, including its accuracy, objectivity, and diversity.
Under the interim laws, generative AI service providers assume legal responsibility for the information their services generate and for its security. They will need to sign service agreements with users, clarifying each party's rights and obligations.