Leaked AI regulation: What it means for the U.S.

Plans for the EU's most comprehensive AI regulation recently leaked. What does that mean for the U.S.?
Written by Greg Nichols, Contributing Writer

Given the EU's leaked plans for AI regulation, which call for a ban on specific types of AI systems, such as those that directly track individuals or create social credit scores, the topic of regulation in the U.S. has been on people's minds. AI is coming, so what can regulators do, and what should they absolutely not do, to protect citizens and consumers while also encouraging technological development?

For insights I reached out to Haniyeh Mahmoudian, Ph.D., Global AI Ethicist at DataRobot, a Boston-based company that enables customers to create and validate machine learning models from their data. As an AI bias and ethics expert, Haniyeh is well-placed to speak about the new regulations and what they mean for the U.S., as well as the risks inherent in unchecked AI and what actions regulators should take going forward.

GN: What are the biggest takeaways from the EU's leaked plans for AI regulation? What surprised you?

Haniyeh Mahmoudian: Speaking on the side of practitioners, one thing we really appreciate about the leaked draft is the assistance in clearing up some ambiguity. In the absence of legislation, there's been a lot of ambiguity in defining use cases that will impact individuals' lives, or what we call high-risk use cases. Having those high-risk use cases defined is really useful, and it's important to call out that the draft recognizes this is an evolving list as technology continues to advance.

The auditing requirement of the regulation was unexpected but not surprising. Here at DataRobot we have implemented frameworks and processes around risk and impact assessment that are applied to the projects we are working on.

GN: Why is the EU getting so serious about regulation? What are the big concerns and the big goals, based on your understanding of the plan?

Haniyeh Mahmoudian: The EU has always been serious about tech regulations and protecting its citizens from data victimization, and already has a strong regulatory foundation in place with GDPR to expand upon into AI. The driving factor is one of protection for citizens, even at the expense of economic prosperity that could be enabled by some of these technologies.

GN: Where is the U.S. with regards to considering AI regulation? Can you briefly explain the range of opinions on the matter from various constituencies/stakeholders?

Haniyeh Mahmoudian: The U.S. is balancing multiple priorities and stakeholders while working towards legislation that ensures the technology is built with our democratic ideals in mind. The U.S. Congress has proposed legislation like the Algorithmic Accountability Act, which would move our current state closer to the EU's regulations, which may be the most robust in the world. There are also institutions like the Joint Artificial Intelligence Center and the National Security Commission on Artificial Intelligence that have stated that AI is critical to national security. Aside from government entities, we also know that organizations see AI as an economic catalyst, which means regulators need to strike the right balance between ensuring ethical, fair, and unbiased AI and not stifling innovation.

GN: What should the U.S. take away from the EU plan? Anything U.S. regulators should do differently, in your opinion?

Haniyeh Mahmoudian: It's important that regulations focus on making sure the technology is built with certain societal ideals and ethics in mind, enabling organizations to leverage AI to the world's benefit in an equitable, ethical, and explainable manner. The EU's new regulations show great promise by establishing a collaborative committee to evaluate the technology and define 'high-risk AI' uses as the technology evolves. As previously stated, the regulations also clear up some ambiguity around what is defined as high-risk AI, something that would be very valuable in the U.S. Some may consider the EU regulations too prescriptive, and some of the requirements may hinder small businesses and start-ups. It's certainly a balance we have to strike.

GN: What's at stake if we don't regulate AI soon? What's a realistic timeline for regulation within the U.S.?

Haniyeh Mahmoudian: I do think we need to open our minds to some regulation in this space because of the humanistic impact of these systems. In a company, a biased hiring manager, while unethical, has a limited impact since it is a single person at a single company. However, an AI-enabled system used in the same hiring situation has the potential to do real harm to both the company and the applicants at scale. The same is true for use cases in government or public safety: if we don't regulate, there could be real harm done to people. At the same time, we also have to remember that AI can be used to help with problems facing our very existence, such as food insecurity, climate change, and healthcare. It's certainly a balance to figure out how much regulation is too much or too little.

As mentioned before, the U.S. has already moved towards regulation, and legislation has been proposed in Congress. We are moving in the right direction, but it's important to ensure the technology is built based on our ideals.
