Human Rights Commission publishes guide to recognising and preventing AI bias

The technical paper highlights how human rights should be considered when AI systems are being developed.
Written by Aimee Chanthadavong, Contributor

A new technical paper has been released demonstrating how businesses can identify whether their artificial intelligence (AI) technology is biased. It also offers recommendations for those building AI systems to ensure they are fair, accurate, and compliant with human rights.

The paper, Addressing the problem of algorithmic bias, was developed by the Australian Human Rights Commission, together with the Gradient Institute, Consumer Policy Research Centre, Choice, and the Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61.

Human Rights Commissioner Edward Santow, in his foreword, described algorithmic bias as a "kind of error associated with the use of AI in decision making, and often results in unfairness".

He continued, saying that when this occurs it can result in harm, and therefore human rights should be considered when AI systems are being developed and used to make important decisions.

"Artificial intelligence promises better, smarter decision making, but it can also cause real harm. Unless we fully address the risk of algorithmic bias, the great promise of AI will be hollow," he said.

In developing the paper, five scenarios were used to highlight potential sources of algorithmic bias. For instance, one scenario demonstrated how bias can arise when out-of-date historical data is no longer representative of current conditions.

In another scenario, the paper found that label bias could arise when there are disparities in label quality across groups distinguished by protected attributes, such as age, disability, race, sex, or gender.
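The paper itself does not include code, but the label-bias check it describes can be illustrated with a short, hypothetical sketch: where a higher-quality "audited" label is available for a sample of records, comparing how often the training label disagrees with it within each protected-attribute group gives a rough signal of disparities in label quality. The column names and data below are illustrative assumptions, not material from the paper.

```python
import pandas as pd

# Hypothetical records: 'age_group' is a protected attribute, 'label' is the
# outcome the model would be trained on, and 'audited_label' is a
# higher-quality label obtained from a manual review of the same records.
df = pd.DataFrame({
    "age_group":     ["under_30", "under_30", "under_30", "30_plus", "30_plus", "30_plus"],
    "label":         [0, 1, 0, 1, 1, 0],
    "audited_label": [1, 1, 0, 1, 1, 0],
})

# Label error rate per group: how often the training label disagrees with the
# audited label. A large gap between groups is a rough signal of label bias.
error_rate = (df["label"] != df["audited_label"]).groupby(df["age_group"]).mean()
print(error_rate)
```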

Read also: AI and ethics: One-third of executives are not aware of potential AI bias (TechRepublic)

The paper revealed there are five general approaches that could be taken to mitigate algorithmic bias. These include acquiring more "appropriate" data, such as data on under-represented cohorts, so the AI system's data set better reflects the population it is used on; increasing model complexity, as an over-simplified AI model can be less accurate; and modifying the AI system so that it takes into account societal inequalities and other issues that could cause algorithmic bias, among other measures.
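As a rough illustration of the first of those approaches (and not code from the paper), a data set's cohort shares can be compared against population benchmarks to see which cohorts are under-represented and might warrant acquiring more data. The cohort names and figures below are purely illustrative.

```python
import pandas as pd

# Illustrative training data and population benchmarks (not figures from the paper).
training_data = pd.DataFrame({"cohort": ["urban"] * 900 + ["regional"] * 100})
population_share = pd.Series({"urban": 0.70, "regional": 0.30})

# Compare each cohort's share of the training data with its share of the
# population the system will serve. A large shortfall flags an
# under-represented cohort where acquiring more data may help.
data_share = training_data["cohort"].value_counts(normalize=True)
shortfall = (population_share - data_share).clip(lower=0)
print(shortfall.sort_values(ascending=False))
```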

The paper also recommended finding a fairer measure to use as the target variable, which could help mitigate algorithmic bias. It pointed out, for instance, that using someone's credit history to predict creditworthiness could work well for older individuals with an established track record, but may disadvantage young people applying for their first loan.

At the same time, the paper highlighted potential downsides and considerations if these mitigating approaches were taken. Acquiring more data for an AI system could be a resource-intensive exercise, the paper said, and because the benefits of additional data can be difficult to predict until it is used, the exercise could come at an additional cost.

Further, pre-processing data to hide protected attributes could reduce the accuracy of AI systems, the paper stated.
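That trade-off can be sketched on synthetic data (again, not from the paper) by training the same model with and without a protected attribute that happens to correlate with the outcome, and comparing accuracy. Everything below, including the use of scikit-learn's LogisticRegression, is an illustrative assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic, illustrative data: two ordinary features plus a protected
# attribute that correlates with the outcome.
rng = np.random.default_rng(0)
n = 1000
protected = rng.integers(0, 2, n)
features = rng.normal(size=(n, 2))
outcome = (features[:, 0] + 0.8 * protected + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

X_full = np.column_stack([features, protected])  # protected attribute included
X_masked = features                              # protected attribute removed

for name, X in [("with protected attribute", X_full), ("without protected attribute", X_masked)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, outcome, random_state=0)
    accuracy = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: test accuracy = {accuracy:.3f}")
```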

"The good news is that algorithmic biases in AI systems can be identified and steps taken to address problems," Gradient Institute CEO Bill Simpson-Young said.

"Responsible use of AI must start while a system is under development and certainly before it is used in a live scenario. We hope that this paper will provide the technical insight developers and businesses need to help their algorithms operate more ethically."

The release of the paper continues the ongoing discussion around AI and ethics in Australia. Earlier this year, a 44-page report, commissioned by the Department of Industry, Science, Energy, and Resources, was published on what Australia's AI standards should look like.

At the time, Santow suggested that Australia could join the global conversation by finding a niche. 

"We should think in terms of what our national strengths are and how we can leverage off those," he said.

"We literally don't have the number of people working in AI as they do in a country like the United States and China, so we need to think about what our niche is and go really, really hard in advancing in that. Our niche could be to develop AI in a responsible way consistent with our national standards." 

Prior to that, CSIRO's Data61 had published an AI discussion paper while the federal government announced its AI ethics principles. Both aimed to develop guidelines that were not only practical for businesses but would also help citizens build trust in AI systems.

Related Coverage

Genevieve Bell and what the future of AI might look like

Safe, responsible, and diverse are her three wishes.

New Zealand establishes algorithm charter for government agencies

A standards guide on how to use algorithms across government.

AI and ethics: The debate that needs to be had

Like anything, frameworks and boundaries need to be set -- and artificial intelligence should be no different. 

Human Rights Commission wants privacy laws adjusted for an AI future

It is one of 29 proposals the commission has put forward as it seeks to address the impact that new technologies, such as artificial intelligence, will have on human rights.

Battleground over accountability for AI

AI deployments are saturating businesses, but few are thinking about the ethics of how algorithms work and the impact they have on people.
