
Even computer experts think ending human oversight of AI is a very bad idea

The UK government is considering scrapping the right to request a human review of decisions made entirely by AI systems, but experts are warning that doing so would be a mistake.
Written by Daphne Leprince-Ringuet, Contributor

The right to a human review will become impractical and disproportionate in many cases as AI applications grow in the next few years, according to a UK government consultation.

Image: iStock / Getty Images Plus

While the world's largest economies are drafting new laws to keep AI under control and prevent the technology from creating unintended harms, the UK appears to be taking a rather different approach. The government has recently proposed removing some of the existing rules that put brakes on the use of algorithms – and experts are now warning that this is a dangerous path to take. 

In a consultation that was launched earlier this year, the Department for Digital, Culture, Media and Sport (DCMS) invited experts to submit their thoughts on some new proposals designed to reform the UK's data protection regime. 

Among the proposals was a bid to remove a legal provision that currently enables citizens to challenge a decision made about them by automated decision-making technology, and to request a human review of that decision.  


The consultation argued that this rule will become impractical and disproportionate in many cases as AI applications grow over the next few years, and that the need to always maintain the capability to provide human review will become unworkable. 

But experts from the BCS, the UK's chartered institute for IT, have warned against the proposed move to scrap the law.  

"This rule is basically about attempting to create some kind of transparency and protection for the individuals in the decision making by fully automated processes that could have significant harms on someone," Sam De Silva, partner at law firm, CMS and the chair of BCS's law specialist group, tells ZDNet. "There needs to be some protection rather than rely on a complete black box."

Behind the UK's attempt to change the country's data protection regulation lies a desire to break free from its previous obligation to comply with the EU's General Data Protection Regulation (GDPR). 

The "right to a human review", in effect, constitutes the 22nd article of the EU's GDPR, and as such has been duly incorporated into the UK's own domestic GDPR, which until recently had to comply with the laws in place in the bloc. 

Since the country left the EU, however, the government has been keen to highlight its newfound independence – and in particular, the UK's ability to make its own rules when it comes to data protection.  

"Outside of the EU, the UK can reshape its approach to regulation and seize opportunities with its new regulatory freedoms, helping to drive growth, innovation and competition across the country," starts DCMS's consultation on data protection

Article 22 of the GDPR was deemed ill-suited to such future-proof regulation. The consultation recognizes that the safeguards provided under the law might be necessary in a select number of high-risk use cases – but it concludes that, as automated decision-making is expected to grow across industries in the coming years, it is time to assess whether the safeguard is still needed. 

A few months before the consultation was launched, a separate government taskforce came up with a similar recommendation, arguing that the requirements of Article 22 are burdensome and costly, because they force organizations to maintain an alternative manual process even when they are automating routine operations.  

The taskforce recommended that Article 22 be removed entirely from UK law, and DCMS confirmed in the consultation that the government is now considering this proposal. 

According to De Silva, the motivation behind the move is economic. "The government's argument is that they think Article 22 could be stifling innovation," says De Silva. "That appears to be their rationale for suggesting its removal."

The consultation effectively puts forward the case for data legislation that benefits businesses. DCMS pitched a "pro-growth" and "innovation-friendly" set of laws that would unlock more research and innovation while easing the cost of compliance for businesses, and said that it expects the new regulations to generate significant monetary benefits. 

For De Silva, however, the risk of deregulating the technology is too great. From recruitment to finance, automated decisions have the potential to profoundly affect citizens' lives, and getting rid of protective laws too soon could have dangerous consequences. 


That is not to say that the provisions laid out in the GDPR are sufficient as they stand. Some of the grievances against Article 22 described in DCMS's consultation are legitimate, says De Silva: for example, the law lacks certainty. It states that citizens have a right to request human review when a decision is based solely on automated processing, without specifying what degree of human involvement takes a decision outside that scope.  

"I agree that it's not entirely clear, and it's not a really well drafted provision as it is," says De Silva. "My view is that we do need to look at it further, but I don't think scrapping it is the solution. Removing it is probably the least preferable option."

If anything, says De Silva, the existing rules should be changed to go even further. Article 22 is only one clause within a wide-ranging regulation that focuses on personal data – when automated decision-making could probably do with its own piece of legislation.  

This limited scope may also explain why the provision lacks clarity, and it highlights the need for more substantial legislation. 

"Article 22 is in the GDPR, so it is only about dealing with personal data," says De Silva. "If we want to make it wider than that, then we need to be looking at whether we regulate AI in general. That's a bigger question."

It is a question likely to be on UK regulators' minds, too. The next few months will reveal what answers, if any, they have found. 
