For AI laws, China joins the U.S. in asking the public to chime in

China has drafted new regulations that it says are necessary to ensure the safe development of generative AI applications like ChatGPT.
Written by Eileen Yu, Senior Contributing Editor

China has released a new draft regulation that it says is necessary to ensure the safe development of generative artificial intelligence (AI) technologies, such as ChatGPT.

While the Cyberspace Administration of China (CAC) said it supports the innovative use of AI algorithms to improve user experience and access to information, it cautioned that the growth of such applications can lead to abuse. Emphasis should be placed on ensuring these tools and data resources are used safely and reliably, the agency said.

Regulations are needed to drive the healthy and sustainable development of generative AI algorithms, said the government agency, as it published the draft rules on its website. 

Also: AI projects now exceed 350,000, according to Stanford

Under the proposed rules, operators will be required to submit their applications to regulators for "safety reviews" before offering the services to the public, according to a report by state-owned media Global Times. They also are prohibited from using AI algorithms and data to engage in unfair competition. 

The CAC's draft rules further establish guidelines to which generative AI services must adhere, including the types of content these applications can generate. The guidelines are meant to ensure the accuracy of information and prevent the spread of false information, the regulator said. 

The draft laws are open for public feedback until May 10. 

Also: The best AI chatbots

The move comes a day after the U.S. government said it was seeking public input on AI accountability. The U.S. Department of Commerce's National Telecommunications and Information Administration (NTIA) said the feedback would guide policies that support AI audits, risk and safety assessments, certifications, and tools to establish trust in AI systems. 

"Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them," said NTIA's administrator Alan Davidson. 

Submissions for feedback to the NTIA will close on June 10. 

Also: AI may compromise our personal information if companies aren't held responsible

Efforts from China and the U.S. to gain some control come as interest in generative AI applications, such as ChatGPT, continues to grow. Key players from both markets, including Tencent, Alibaba, Google, and Microsoft, are offering or integrating the AI model into their products.

Alibaba Cloud just yesterday unveiled its large language AI model, called Tongyi Qianwen, which is currently available to customers in China for beta testing and as an API for developers. The AI model also will be integrated with Alibaba's enterprise applications and tapped to power its products and services, including its online collaboration workplace platform DingTalk as well as its smart voice assistant, Tmall Genie.
