Big bad data: We don't trust AI to make good decisions

The lack of trust in AI systems comes after a number of bad algorithm-driven decisions.
Written by Daphne Leprince-Ringuet, Contributor

The UK government's recent technological mishaps have seemingly left a bitter taste in the mouths of many British citizens. A new report from the British Computer Society (BCS), the Chartered Institute for IT, has now revealed that more than half of UK adults (53%) don't trust organisations that use algorithms to make decisions about them.

The survey, conducted with more than 2,000 respondents, comes in the wake of a tumultuous summer, shaken by student uproar after it emerged that the exam regulator Ofqual used an unfair algorithm to predict A-level and GCSE results, after the COVID-19 pandemic prevented exams from taking place.

Ofqual's algorithm effectively based predictions on schools' previous performances, leading to significant downgrades in results that particularly affected state schools, while favouring private schools.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

The government promptly backtracked and allowed students to adopt teacher-predicted grades rather than algorithm-based results. It might have been too little, too late: only 7% of respondents surveyed by the BCS said that they trusted the algorithms used specifically in the education sector. 

The percentage is joint lowest, level with the trust placed in algorithms used by social services and the armed forces, and is even lower than the proportion of respondents who said they trusted social media companies' algorithms to serve content and shape the user experience (8%).

Bill Mitchell, director of policy at BCS, told ZDNet that recent events have "seriously" knocked back people's trust in the way algorithms are used to make decisions about them, and that this will have long-term consequences.

"But at the same time, it has actually raised in people's mind the fact that algorithms are ubiquitous," added Mitchell. "Algorithms are always there, people are realising that is the case, and they are asking: 'Why should I trust your algorithm?'"

"That's spot on, it's just what people should be asking, and the rest of us involved in designing and deploying those algorithms should be ready to explain why a given algorithm will work to people's advantage and not be used to do harm."

The prevalence of hidden AI systems in delivering critical public services was highlighted by the UK's Committee on Standards in Public Life last February, in a report that stressed the government's lack of openness and transparency in its use of the technology.

One of the main issues identified by the report at the time was that no one knows exactly where the government currently uses AI. At the same time, public services are increasingly looking at deploying AI in high-impact decision-making processes in sectors like policing, education, social care, and health.

With the lack of clarity surrounding the use of algorithms in areas that can have huge impacts on citizens' lives, the public's mistrust of some technologies used in government services shouldn't come as a surprise – nor should attempts to reverse the damaging effects of a biased algorithm be ignored.

"What we've seen happening in schools shows that when the public wants to, they can very clearly take ownership," said Mitchell, "but I'm not sure we want to be in a situation where if there is any problem with an algorithm, we end up with riots in the streets."

Instead, argued Mitchell, there should be a systematic way of engaging with the public before algorithms are launched, to clarify exactly who the technology will be affecting, what data will be used, who will be accountable for results and how the system can be fixed if anything goes wrong.

In other words, it's not only about making sure that citizens know when decisions are made by an AI system, but also about implementing rigorous standards in the actual making of the algorithm itself. 

"If you ask me to prove that you can trust my algorithm," said Mitchell, "as a professional I need to be able to show you – the person this algorithm is affecting – that yes, you can trust me as a professional."

SEE: Programming languages: Julia users most likely to defect to Python for data science

Embedding those standards in the design and development phases of AI systems is a difficult task, because there are many layers of choices made by different people at different times throughout the life cycle of an algorithm. But to regain the public's trust, argued Mitchell, it is necessary to make data science a trusted profession – as trusted as the profession of doctor or lawyer.

The BCS's latest report, in fact, showed that the NHS was the organisation that citizens trusted the most when it comes to decisions generated by algorithms. Some 17% of respondents said they had faith in automated decision-making in the NHS, and the figure jumped to 30% among 18- to 24-year-olds.

"People trust the NHS because they trust doctors and nurses. They are professionals that must abide by the right standards, and if they don't, they get thrown out," said Mitchell. "In the IT profession, we don't have the same thing, and yet we are now seeing algorithms being used in incredibly high-stake situations."

Will the public ever trust data scientists like they trust their doctor? The idea might seem incongruous. But with AI permeating more aspects of citizens' lives every day, getting the public on board is set to become a priority for the data science profession as a whole.
