
CSIRO promotes ethical use of AI in Australia's future guidelines

For Australia to realise the benefits of artificial intelligence, CSIRO said it's important for citizens to have trust in how AI is being designed, developed, and used by business and government.
Written by Asha Barbaschow, Contributor

The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for the development of artificial intelligence (AI) in Australia to be accompanied by a sufficient framework to ensure nothing is imposed on citizens without appropriate ethical consideration.

Data61, CSIRO's digital innovation arm, has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia.

Highlighted by CSIRO are eight core principles that will guide the framework: that it generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, boasts fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and that there is an accountability trail.

"Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.

CSIRO said that while transparency in AI is a complex issue, the ultimate goal of transparency measures is to achieve accountability, noting that the inner workings of some AI technologies are not easy to explain.

"Even in these cases, it is still possible to keep the developers and users of algorithms accountable," it added. "On the other hand, AI 'black boxes' in which the inner workings of an AI are shrouded in secrecy are not acceptable when public interest is at stake."

Conceding that there is no one-size-fits-all solution to the range of legal and ethical issues related to AI, CSIRO has identified nine tools it says can be used to assess risk and ensure compliance and oversight.

These include impact assessments, reviews, risk assessments, best practice guidelines, industry standards, collaboration, monitoring and improvement mechanisms, recourse mechanisms, and consultation.

Taking into consideration the importance of data in building AI, CSIRO said data governance is crucial to ethical AI, noting that organisations developing AI technologies need to ensure they have strong data governance foundations or their AI applications risk breaching privacy and/or discrimination laws or being fed with inappropriate data.

"AI offers new capabilities, but these new capabilities also have the potential to breach privacy regulations in new ways," it wrote. "If an AI can identify anonymised data, for example, this has repercussions for what data organisations can safely use."

As a result, CSIRO said organisations should constantly build on their existing data governance regimes by considering new AI-enabled capabilities and ensuring their data governance system remains relevant.

Identifying some of the key approaches to issues related to AI and ethics, CSIRO said automated decision-making must be given careful thought, noting that human-in-the-loop principles should be applied during the design phase of automated decision systems and that a clear chain of accountability should also be considered.

"There must be a clear chain of accountability for the decisions made by an automated system. Ask: Who is responsible for the decisions made by the system?," it said.

When it comes to predicting human behaviour, the framework highlights that while AI is not driven by human bias, it is programmed by humans, which can pose risks that result in ethical conundrums.

"Developers need to pay special care to vulnerable, disadvantaged or protected groups when programming AI," CSIRO said. "Full transparency is sometimes impossible, or undesirable (consider privacy breaches). But there are always ways to achieve a degree of transparency."

A statement from Minister for Human Services and Digital Transformation Michael Keenan said the government will use the paper's findings and the feedback received during the consultation period to develop a national AI ethics framework.

It is expected the framework will include a set of principles and practical measures that organisations and individuals can use as a guide to ensure their design, development, and use of AI "meets community expectations".

See also: Labor promises 'human eye' to oversee automation if elected

The federal government has allocated AU$29.9 million over four years to AI and machine learning, which it said at the time would support business innovation across digital health, digital agriculture, cybersecurity, energy, and mining.

Submissions to CSIRO's report close May 31, 2019.

RELATED COVERAGE

Why Australia is quickly developing a technology-based human rights problem (TechRepublic)

Human rights advocates have called on the Australian government to protect the rights of all in an era of change, saying tech should serve humanity, not exclude the most vulnerable members of society.

Data61 leads new 'ethical' artificial intelligence institute

The non-profit will investigate how to fix the ingrained bias problem that AI systems display.

Robots in the battlefield: Georgia Tech professor thinks AI can play a vital role

To Professor Ronald C Arkin, technology can and must play a role in minimising the collateral damage of civilians in war zones, but not without regulation.
