The White House has issued guidance to US federal agencies on how to regulate artificial intelligence (AI) applications produced in the US.
"This memorandum sets out policy considerations that should guide, to the extent permitted by law, regulatory and non-regulatory approaches to AI applications developed and deployed outside of the federal government," stated Russell Vought, director of the Office of Management and Budget (OMB) in the memo [PDF] for all the heads of executive departments and agencies, including independent regulatory agencies.
The OMB guidance comes 21 months after President Donald Trump signed an executive order to fast-track the development and regulation of AI in the US.
President Trump at the time said the executive order would launch the American AI Initiative, which would direct US resources towards ensuring that AI technology is developed locally.
According to the guidance, the idea is to ensure that agencies do not introduce regulations and rules that "hamper AI innovation and growth".
"Where permitted by law, when deciding whether and how to regulate in an area that may affect AI applications, agencies should assess the effect of the potential regulation on AI innovation and growth," it said.
"While narrowly tailored and evidence-based regulations that address specific and identifiable risks could provide an enabling environment for US companies to maintain global competitiveness, agencies must avoid a precautionary approach that holds AI systems to an impossibly high standard such that society cannot enjoy their benefits and that could undermine America's position as the global leader in AI innovation."
The guidance advises agencies to address inconsistent, burdensome, or duplicative state laws that could be hurting the national AI market.
"Where a uniform national standard for a specific aspect of AI is not essential, however, agencies should consider forgoing regulatory action," it said.
OMB listed 10 stewardship principles that federal agencies could use for AI applications. These were initially introduced as part of the draft memorandum released at the start of the year.
These principles include: building public trust in AI by reducing accidents and protecting the privacy of individual AI users; encouraging public participation in how AI is applied; upholding scientific integrity and information quality; applying consistent risk assessment and management across agencies and technologies; weighing the benefits and costs of employing AI; pursuing flexible approaches to AI so as not to harm innovation; ensuring the technology is fair and non-discriminatory; ensuring transparency in disclosures; promoting the development of AI systems that are safe, secure, and operate as intended; and ensuring agencies share their experiences and approaches to AI.
At the same time, OMB also provided examples of how agencies could take non-regulatory approaches to address potential AI risks, such as setting sector-specific policy guidance or frameworks, delivering pilot programs and experiments, or introducing voluntary consensus standards or frameworks.
To ensure agency plans are consistent with the guidance, agencies are required to submit their compliance plans to the Office of Information and Regulatory Affairs by 17 May 2021, identifying their regulatory authorities as well as AI-related information collected from the entities they regulate.
"The agency plan must also report on the outcomes of stakeholder engagements that identify existing regulatory barriers to AI applications and high-priority AI applications that are within an agency's regulatory authorities. OMB also requests agencies to list and describe any planned or considered regulatory actions on AI," the memo said.
Earlier this week, the HR 1668 Internet of Things Cybersecurity Improvement Act of 2020 passed the Senate unanimously, just over a year after it was introduced in the House.
Under the Bill, the National Institute of Standards and Technology will be required to develop and publish guidelines so the federal government only buys and uses Internet of Things (IoT) devices that meet the new security rules, including minimum information security requirements for managing the cyber risks associated with IoT devices.
The Bill noted some of the minimum considerations that should be covered in the guidelines include the secure development of IoT devices, identity management, patching, and configuration management.
Once the guidelines are developed, the OMB will be responsible for issuing them to each agency, as well as providing details on how each agency should report and publish information about security vulnerabilities, resolve them, and coordinate with other agencies.
With the Bill now passed by US Congress, it will be handed over to the president to be signed. If it is, it will mark the first national approach to IoT security in the US; California approved its own version of an IoT security Bill back in 2018.