The algorithms are watching us, but who is watching the algorithms?

A two-year investigation into the private and public use of AI systems shows that more oversight is needed, particularly in government services like policing.
Written by Daphne Leprince-Ringuet, Contributor

Empowering algorithms to make potentially life-changing decisions about citizens still carries a significant risk of unfair discrimination, according to a new report published by the UK's Centre for Data Ethics and Innovation (CDEI). In some sectors, the need to provide adequate resources to make sure that AI systems are unbiased is becoming particularly pressing: above all in the public sector, and specifically in policing.

The CDEI spent two years investigating the use of algorithms in both the private and public sectors, and found widely varying levels of maturity in dealing with the risks that algorithms pose. In the financial sector, for example, the use of data for decision-making is much more closely regulated, while local government is still in the early stages of managing the issue.

Although awareness of the threats that AI might pose is growing across all industries, the report found no consistent examples of good practice in building responsible algorithms. This is especially problematic in the delivery of public services such as policing, the CDEI found, because citizens cannot choose to opt out of them.


Research conducted as part of the report concluded that there is widespread concern across the UK law enforcement community about the lack of official guidance on the use of algorithms in policing. "This gap should be addressed as a matter of urgency," the researchers said.

Police forces are rapidly increasing their adoption of digital technologies: at the start of the year, the government announced £63.7 million ($85 million) in funding to push the development of police technology programs. New tools range from data-visualization technologies to algorithms that can spot patterns of potential crime, and even predict someone's likelihood of re-offending.

If they are deployed without appropriate safeguards, however, data analytics tools can have unintended consequences. Reports have repeatedly shown that police data can be biased and is often unrepresentative of how crime is distributed. According to data released by the Home Office last year, for example, people who identify as Black or Black British are almost ten times as likely to be stopped and searched by an officer as white people.

An AI system that relies on this type of historical data risks perpetuating discriminatory practices. The Met Police used a tool called the Gangs Matrix to identify people at risk of engaging in gang violence in London; because it drew on out-of-date data, the technology disproportionately flagged young Black men. After activists voiced concerns, the matrix's database was eventually overhauled to reduce the representation of individuals from Black African Caribbean backgrounds.

Examples like the Gangs Matrix have led to mounting concern among police forces, an issue that has yet to be met with guidance from the government, the CDEI argued. Although work is under way to develop a national approach to data analytics in policing, for now police forces have to rely on piecemeal efforts to set up their own ethics committees and guidelines, and not always with convincing results.

Similar conclusions were reached in a report published earlier this year by the UK's Committee on Standards in Public Life, led by former MI5 head Lord Evans, who expressed particular concern about the use of AI systems by police forces. Evans noted that there was no coordinated process for evaluating and deploying algorithmic tools in law enforcement, and that it was often left to individual police forces to draw up their own ethical frameworks.

The issues that police forces face in their use of data are also prevalent across other public services. Data science is applied across government departments to inform decisions about citizens' welfare, housing, education and transportation; and relying on historical data that is riddled with bias can equally result in unfair outcomes.

Only a few months ago, for example, the UK government's exam regulator Ofqual designed an algorithm to assign final-year grades to students, to avoid organizing physical exams in the middle of the COVID-19 pandemic. It emerged that the algorithm produced unfair predictions, based on biased data about different schools' past performance. Ofqual promptly retracted the tool and reverted to teachers' grade predictions.

Improving the process of data-based decisions in the public sector should be seen as a priority, according to the CDEI. "Democratically elected governments bear special duties of accountability to citizens," reads the report. "We expect the public sector to be able to justify and evidence its decisions." 


The stakes are high: earning the public's trust will be key to the successful deployment of AI. Yet the CDEI's report showed that up to 60% of citizens currently oppose the use of AI-infused decision-making in the criminal justice system. The vast majority of respondents (83%) are not even certain how such systems are used by police forces in the first place, highlighting a transparency gap that needs to be plugged.

There is a lot that can be gained from AI systems if they are deployed appropriately. In fact, argued the CDEI's researchers, algorithms could be key to identifying historical human biases – and making sure they are removed from future decision-making tools.  

"Despite concerns about 'black box' algorithms, in some ways algorithms can be more transparent than human decisions," said the researchers. "Unlike a human, it is possible to reliably test how an algorithm responds to changes in parts of the input."  

The next few years will require strong incentives to make sure that organizations develop AI systems that meet requirements for fair and balanced decisions. A perfectly fair algorithm may not be on the horizon just yet, but AI technology could soon be useful in bringing humans face to face with their own biases.
