Battleground over accountability for AI

AI deployments are saturating businesses, but few are thinking about the ethics of how algorithms work and the impact they have on people.
Written by Anthony Caruana, Contributor

There's little doubt that artificial intelligence (AI) is having a massive impact on IT budgets, operations, and user experiences. But an area of AI that is receiving increasing attention is ethics. As people and companies become more dependent on the use of algorithms to make and support decisions, the inherent biases of software developers and the data pools they depend on to build their models have come under close scrutiny.

This has been seen in systems such as COMPAS -- a tool used in the US to assist with sentencing people convicted of a crime. The aim of the system was to remove racial bias from sentencing, but it failed. Because the algorithms driving it are proprietary, it's hard to know precisely what went wrong, but the system was trained on historical sentencing data, so it's little surprise that the racial biases of the past influenced its outcomes. It mispredicted recidivism risk, rating African-American offenders as higher risk than comparable white offenders and thereby exposing them to harsher sentences.
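
Since COMPAS is proprietary, the mechanism can only be illustrated, not reproduced. The following is a minimal sketch using synthetic data and an off-the-shelf classifier -- none of the names or numbers reflect COMPAS's actual model or inputs -- showing how a model trained on historically biased labels reproduces that bias:

```python
# A minimal sketch, not COMPAS itself: a classifier trained on historically
# biased labels reproduces that bias. All data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # stand-in for a protected attribute
risk = rng.normal(size=n)       # the factor that *should* drive the outcome

# Historical labels encode past bias: group 1 was flagged more often
# than its true risk warranted.
logit = risk + 1.0 * group
labels = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([group, risk]), labels)

# Two individuals with identical true risk but different group membership
# get very different predicted risk scores (roughly 0.50 vs 0.73 here).
print(model.predict_proba([[0, 0.0], [1, 0.0]])[:, 1])
```

Note that simply dropping the protected attribute from the inputs doesn't necessarily fix this, since other features can act as proxies for it.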

Similar systems are being used to approve loans as well. These use a variety of factors, including race, gender, and age, to determine whether someone gets a loan -- even though these factors may have no bearing on the applicant's ability to repay it.
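
One way a lender or regulator could check for this, sketched below with a made-up decision log (the data and field layout are purely illustrative), is to compare approval rates across groups of applicants with a similar ability to repay:

```python
# A hypothetical audit of a lender's decision log: compare approval rates
# across a protected attribute. The data here is illustrative only.
import numpy as np

# Columns: (group, approved) for applicants with similar repayment ability.
log = np.array([
    [0, 1], [0, 1], [0, 0], [0, 1],
    [1, 0], [1, 1], [1, 0], [1, 0],
])

for g in (0, 1):
    rate = log[log[:, 0] == g, 1].mean()
    print(f"group {g}: approval rate {rate:.2f}")
# A persistent gap between otherwise-similar groups suggests the model is
# leaning on factors with no bearing on the ability to repay.
```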

And the Australian government's robodebt fiasco highlights the human impacts of flawed AI and machine learning systems.

At a panel discussion moderated by Joe Bradley, chief data officer at LivePerson, the company presented research finding that 88% of businesses in the APAC region have implemented some form of AI. In Australia, the figure was a little lower at 82%. Yet just two in five of those Australian companies had put ethical standards in place to ensure their AI would be used responsibly. It's important to note that the research's definition of AI was quite loose: companies self-reported their use of AI rather than applying an agreed-upon definition.

When it comes to answering the question "What is AI?", perhaps the best answer came from Lachlan McCalman, a machine learning researcher at Gradient Institute, who said AI encompasses everything from simple linear extrapolations in spreadsheet models through to complex machine learning.

"The three components I think of as being really important are data driven, automated decision systems. They reason about the world through data. They, to some degree, are automated without human oversight and make consequential decisions. Those three things have a unique set of challenges," said McCalman.

Bradley said there is precedent for governments and citizens of the world coming together to set standards for some things. For example, he said, ozone layer depletion caused by the use of chlorofluorocarbons has been arrested. Fellow panellist Miriam Vogel, executive director of EqualAI and former lead of the Implicit Bias Training program for the Obama Administration, pointed to how airbag standards were changed when it was found that early airbag designs did a great job of protecting men who were about 180cm tall but didn't adequately protect women, because the data used to design them was biased. Their position was that governments can put laws in place to prevent negative societal outcomes; they also pointed to the GDPR as a further example.

Vogel said many people view AI systems as neutral without understanding how many human touch points are involved in their development. With successful AI reliant on diversity in both data sets and development teams, she said, the under-representation of different gender and cultural groups in the IT industry has exacerbated problems with AI neutrality.

Lyndon Summers, the operations manager at Open Universities Australia, agreed that we need expertise from diverse backgrounds. He noted that some of the most successful service developments and improvements he has seen came from listening to call centre staff, as well as developers and software engineers.

"One of the biggest values is the human touch points," said Summers. "We need to find the right balance between people and automation and, if we are going to increase the level of automation we use, we have to find roles for the people we displace and perhaps get them into roles to help us build even more automation".

Underlying much of the discussion between the panellists and audience questions was the understanding that poorly implemented AI can magnify problems -- something the Australian government became acutely aware of with the robodebt system.

LivePerson's research found that 70% of businesses believed the company that creates a piece of AI ought to be held responsible for any harm caused by the program. But companies that deploy AI and provide data for training models were also nominated as bearing a high level of responsibility. Over a fifth suggested the lead developer of an AI program should be held accountable -- which is perhaps why Steve King, a Republican representative from Iowa, asked Google CEO Sundar Pichai during a hearing of the US House Judiciary Committee to hand over the names of more than 1,000 Google employees who have worked on the search engine's algorithm, to examine them for "a built-in bias".

As the world grapples with how to manage human-machine interaction, it's clear that AI is as flawed as its human creators. Only through conscious awareness of, and planning to overcome, preconceived biases will the AI of the future be able to deliver outcomes that are inclusive and representative. Without minimum standards or a regulatory framework, we are perhaps destined to recreate our flawed existence in the virtual world.
