
We are still playing catch-up with AI, and it's a dangerous game to play

The public sector is using artificial intelligence for high-impact decisions, but is it doing so safely? Not safely enough, according to a new report.
Written by Daphne Leprince-Ringuet, Contributor

From informing your local council's welfare decisions to scanning the faces of supporters before a football match to boost security: algorithms have already found many applications, big and small, in the public sector.

But the UK's Committee on Standards in Public Life now reports that the use of artificial intelligence is rapidly outpacing regulation when it comes to delivering certain services – and that in many cases, the failure to properly manage the technology could interfere with the exercise of citizens' rights.

"Deficiencies are notable," states the committee in a report published today. "This review found that the government is failing on openness." The very first issue highlighted by the organisation, in effect, is that no one knows exactly where the government currently uses AI. Academics, civil society groups and public officials alike said that they were unable to find out which government departments were using the technology and how.

The report warned that regulation and governance of AI in the public sector remains a work in progress and said that there is an "urgent need" for practical guidance and enforceable regulation.


Lord Evans, the former head of MI5, who now chairs the committee and led the research, told ZDNet: "When I set out with this project, I asked my researchers to find out where algorithms were used in the public sector, and they simply couldn't. Journalists try to find out, and they rarely can. The government doesn't publish any audit about the extent of AI use either."

"The problem is, you can't exercise your rights if you don't even know an AI is being used. So the first thing to address is openness."

That lack of transparency could pass as acceptable if the deployment of artificial intelligence did not carry any risk. When the technology is merely used to improve administrative efficiency, for example, little more explanation is needed than a statement outlining the way the system works.

But in the case of high-impact decisions, such as allocating welfare benefits or profiling suspected criminals, things get more complicated. That's because algorithms come with a flaw that is now common knowledge: data bias. As the adage goes, "garbage in, garbage out": an AI system trained on data that encodes racial, gender or ideological biases will make unfair, discriminatory decisions.

The government is increasingly looking at deploying AI in high-impact decision-making processes in sectors like policing, education, social care and health. "If we use these systems where it might impact the rights of citizens, then we should be absolutely clear about what the risks are," said Evans. "And we need to manage the risk; but I am not confident that we are at the moment."

The past few years abound with examples of algorithms tried and tested in key public sector services, and often met with outcry because of the biases they perpetuate. The Office for Standards in Education (Ofsted), for instance, started using machine learning to rate schools and prioritise inspections in 2017. Teachers were quick to protest against the tool, arguing that the algorithm was unfair, lacked transparency and would exacerbate pre-existing biases within the education system.

For the purposes of the research, Evans spoke to a number of public sector workers including doctors, who, according to him, were "most reassuring". Medical professionals are used to integrating new technologies in their work, he said, and have a number of protocols to ensure that artificial intelligence is deployed safely; but that is not the case in some other fields.

"Within the medical body, there is proper testing and scrutiny to know exactly what the risk is and ensure it is managed properly," said Evans. "But looking at the police, for example, there isn't the same intellectual discipline when it comes to introducing new technology. This is the field where concerns have been expressed."

The report notes that there is no clear process for evaluating, procuring or deploying technologies like facial recognition within the police force. In fact, it is often up to individual police forces to draw up their own ethical frameworks, which so far have had "mixed results".

Back in 2017, for example, police in Durham started using an algorithm to help officers make custody decisions. Called the Harm Assessment Risk Tool (HART), the system was trained on information about 104,000 people arrested in the city over a five-year period, and was designed to work out whether suspects were at low, moderate or high risk of re-offending.

Among the data used by HART were suspects' age, gender and postcode; but since geographical information can act as a proxy for the racial makeup of a community, the initiative immediately drew criticism from privacy campaigners. Big Brother Watch even condemned the technology's "crude and offensive profiles".
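The report does not detail HART's internals, but a minimal, hypothetical sketch can illustrate the kind of system being described: a classifier trained on historical arrest records that maps features such as age, gender and postcode to a low, moderate or high risk label. Everything below – the column names, the toy data and the choice of model – is an illustrative assumption, not a description of the actual tool.

```python
# Hypothetical sketch of a custody risk classifier, loosely modelled on the
# description above. NOT the actual HART system: columns, data and model
# choice are illustrative assumptions only.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for historical arrest records (HART was reportedly trained on
# roughly 104,000 real records; these six rows are purely synthetic).
records = pd.DataFrame({
    "age":            [19, 34, 27, 45, 22, 31],
    "gender":         [0, 1, 0, 1, 0, 1],       # encoded categorical feature
    "postcode_area":  [3, 1, 3, 2, 3, 1],       # geography can proxy for protected traits
    "prior_offences": [2, 0, 5, 1, 3, 0],
    "risk_label":     ["high", "low", "high", "moderate", "high", "low"],
})

X = records.drop(columns="risk_label")
y = records["risk_label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

# Fit a simple classifier and predict risk bands for unseen suspects.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(model.predict(X_test))

# The campaigners' concern in a nutshell: if postcode_area correlates with
# race, the model can reproduce that bias even though race is never an input.
```

The point of the sketch is not the model but the feature list: once postcode enters the training data, auditing for indirect discrimination becomes part of the governance problem the committee describes.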

The committee's report stressed that the way algorithms are handled by the police is "far more" representative of the wider public sector than AI in healthcare. And the government's shortcomings are not without consequences. The review found that over half of the public believes that more transparency would make them much more comfortable with the use of AI in the public sector.

Evans said: "There needs to be proactive visibility, a clear legal basis and a way of knowing how to redress and appeal if something goes wrong."


The right to an effective appeal is likely to become a major point of focus if the government is to reassure citizens that they are not powerless against decisions informed by algorithms. The General Data Protection Regulation (GDPR), for one, already outlines the rights of individuals against automated decisions. The European law states that organisations should introduce simple ways for citizens to request human intervention or challenge an automated decision.

It remains to be seen, however, how the rules apply in real-life cases. Earlier this year, for instance, it was reported that the Home Office used an algorithmic tool to stream visa applications. Because of the risk that the technology could discriminate against applicants from certain countries, campaign groups demanded more clarity on the inner workings of the tool – but the Home Office refused to provide details on the way that different countries were labelled in the algorithm's dataset.

Algorithmic black boxes are, therefore, far from rare. And if a demand for transparency is met with resistance from the government, it's even less clear how a request for appeal would be dealt with. Add to that the question of who is responsible for a decision informed by an AI system, and it's clear why the use of AI remains controversial – and why the government is having trouble keeping up.
