Data61 leads new 'ethical' artificial intelligence institute

The non-profit will investigate how to fix the ingrained bias problem that AI systems display.


The Commonwealth Scientific and Industrial Research Organisation's (CSIRO) Data61, alongside IAG and the University of Sydney, has created a new artificial intelligence (AI)-focused institute, aimed at exploring the ethics of the emerging technology.

The Gradient Institute, Data61 explained, is an independent non-profit charged with researching the ethics of AI and developing ethical AI-based systems, with the ultimate goal of creating a "world where all systems behave ethically".

"By embedding ethics into AI, we believe we will be able to choose ways to avoid the mistakes of the past by creating better outcomes through ethically aware machine learning," Institute CEO Bill Simpson-Young said.

"For example, in recruitment, when automated systems use historical data to guide decision-making, they can bias against subgroups who have historically been underrepresented in certain occupations.

"By embedding ethics in the creation of AI we can mitigate these biases which are evident today in industries like retail, telecommunications, and financial services."
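Simpson-Young's recruitment example can be illustrated with a deliberately minimal, fabricated sketch: a naive scoring rule "learned" purely from historical hires will reproduce whatever imbalance that history contains. The `history` data and the `score` function below are invented for illustration and do not describe any real system.

```python
# Fabricated hiring history: 90 past hires from group X, 10 from group Y.
history = ["X"] * 90 + ["Y"] * 10

def score(candidate_group):
    # Naive "learned" score: the fraction of past hires from the same group.
    # No judgement of merit is involved, yet the score encodes the imbalance.
    return history.count(candidate_group) / len(history)

print(score("X"))  # 0.9 -- candidates resembling past hires score high
print(score("Y"))  # 0.1 -- the historically underrepresented group scores low
```

Any model fitted to such data, however sophisticated, starts from the same skewed signal; this is the pattern ethically aware machine learning aims to detect and correct.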

In addition to research, it is expected the new institute will also explore the ethics of AI through practice, policy advocacy, public awareness, and training, specifically where the ethical development and use of AI is concerned.

The institute will use research findings to create open source ethical AI tools that can be adopted and adapted by business and government, Data61 said in a statement Thursday.

"As AI becomes more widely adopted, it's critical to ensure technologies are developed with ethical considerations in mind," Data61 CEO Adrian Turner added. "We need to get this right as a country, to reap the benefits of AI from productivity gains to new-to-the-world value."


Speaking with ZDNet during Data61's annual conference in Brisbane earlier this year, Data61's acting director of Engineering and Design, Hilary Cinis, said ethics is all about the reduction of harm.

One way around ingrained ethical bias, she said, was to ensure that teams building the algorithms are diverse. She said a "cultural rethink" around development needs to happen.

Similarly, Salesforce user research architect Kathy Baxter said at the Human Rights & Technology Conference in Sydney earlier this year that one of the main problems is that bias can be difficult to see in data. Equally complex, she said, is the question of what it means to be fair.

"If you follow the headlines, you'll see that AI is sexist, racist, and full of systematic biases," she said.

"AI is based on probability and statistics," she continued. "If an AI is using any of these factors -- race, religion, gender, age, sexual orientation -- it is going to disenfranchise a segment of the population unfairly. And even if you are not explicitly using these factors in the algorithm, there are proxies for them that you may not even be aware of.

"In the US, zip code plus income equals race. If you have those two factors in your algorithm, your algorithm may be making recommendations based on race."
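The proxy effect Baxter describes can be shown with a small simulation (all population numbers here are fabricated for illustration): a decision rule that never sees group membership still produces starkly different approval rates between groups, because zip code and income stand in for it.

```python
import random
from collections import defaultdict

random.seed(0)

def approve(zip_code, income):
    # A "group-blind" rule: group membership is never an input.
    return zip_code == "10001" and income > 50_000

approved = defaultdict(int)
total = defaultdict(int)

for _ in range(10_000):
    # Fabricated demographics: group correlates strongly with zip code,
    # and zip code drives the income distribution.
    group = random.choice(["A", "B"])
    in_10001 = random.random() < (0.9 if group == "A" else 0.1)
    zip_code = "10001" if in_10001 else "20002"
    income = random.gauss(70_000 if in_10001 else 45_000, 10_000)

    total[group] += 1
    approved[group] += approve(zip_code, income)

for g in ("A", "B"):
    print(f"Group {g}: approval rate {approved[g] / total[g]:.0%}")
```

Under these assumed correlations, group A's approval rate ends up several times higher than group B's even though `approve` only reads zip code and income, which is exactly the proxy problem Baxter warns about.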

She continued by suggesting that research needs to be conducted in advance, to determine who is going to be impacted and give additional perspectives outside of the "Silicon Valley bubble, or whatever location bubble" the developers are in.

"Artificial intelligence learns from data and data reflects the past -- at the Gradient Institute we want the future to be better than the past," Simpson-Young added on Thursday.

Simpson-Young will be joined by Dr Tiberio Caetano, co-founder and chief scientist of Ambiata, a wholly owned subsidiary of IAG, who will serve as the institute's chief scientist and direct its research into ethical AI.
