
Setting the AI standard: What it could look like in Australia

What role can Australia play in the global conversation about artificial intelligence?
Written by Aimee Chanthadavong, Contributor

Deliberation, consensus, and communication are necessary in conversations about what Australia's artificial intelligence (AI) standards should look like, according to the country's national standards body.

And that's exactly the approach Standards Australia took when it launched a discussion paper [PDF] on developing AI standards for Australia in June last year. It ended up engaging with regulators, industry, and academia, including the Australian Federal Police, Microsoft, Deakin University, Data61, the Australian Human Rights Commission, and KPMG.

"It's been that long," Jed Horner, Standards Australia strategic advocacy manager, told ZDNet. "The process had to take the time it did ... [and] it's been about a year all up." 

The results of those discussions have since been published in a 44-page report [PDF], commissioned by the Department of Industry, Science, Energy, and Resources. It stated how important it is for Australia to be part of the global conversation on developing AI rules.

Delving into this point further, Human Rights Commissioner Edward Santow suggested one of the ways Australia could join the global AI conversation is to find a niche. 

"At the end of last year, Germany announced its national AI strategy, and they did it really smartly," he said, during a panel discussion at the recent launch of the report in Sydney. 

"What they said was when you're thinking about cars, you're not going to buy a German car because it's the cheapest; you're going to buy a German car because you know it's been really well-engineered. It's going to be safer, more reliable, and have better performance. When you think about German AI, we want you to think the same thing."  

"What I'm saying is we should think in terms of what our national strengths are and how we can leverage off those."  

"We literally don't have the number of people working in AI as they do in a country like the United States and China, so we need to think about what our niche is and go really, really hard in advancing in that. Our niche could be to develop AI in a responsible way consistent with our national standards." 

Horner agreed, saying there are a few considerations that need to be taken into account to decide where the country's strengths lie. 

"We want to continue to be a destination of choice for large companies who want to grow their presence here, and that's big tech -- we need that in our economy. It's about the economic footprint. It matters, and it drives an ecosystem."

"The second point is to play to our strengths. It's not a new idea, but if you look at countries around the world, New Zealand plays to agritech, which makes complete sense."  

"If you think about the consumer focus work they do in the US, again, in Australia what does that look like? It might be resource management. It's something that sounds bland but our water, our food supplies, all of those things are critical -- every country is worried about it, so how can we turn it into an advantage?"

"We need to focus on narrow strengths where we're niche, rather than try to be all people, which is the mistake some people make including some policy leaders. The idea that we can lead the world is a nice one, we just don't have the investment they have in Silicon Valley or China." 

But why exactly should Australia care about being involved in setting AI standards? 

According to the report, up to 80% of global trade -- or $4 trillion annually -- is affected by standards or associated technical regulations. However, organisations are not obliged to comply with any standards unless they are legislated.

See also: Artificial intelligence: Cheat sheet (TechRepublic)

It was a point that Standards Committee on Artificial Intelligence chair Aurelie Jacquet argued during the panel discussion, saying that without governance and regulation, standards could only have so much influence.

"If you don't have good governance, all the things you see that are currently happening, which is basic automation, you're not going to get scaled. Have good governance, focus on those frameworks, and not just your data, and then we can promote good quality AI at an international level," she said. 

Horner, however, cautioned about the potential risk of legislating standards, pointing out that it could hinder innovation. He used the US National Institute of Standards and Technology (NIST) Cybersecurity Framework as an example.

"What happens is the NIST for cybersecurity are pushed down supply chains to the extent that if you don't follow them you can't do business with the major players," he said. 

"We're thinking of the global sense of what we can do to help businesses in Australia both be responsible and actually access new markets, rather than slowly take a certification approach here, which is more about responding to a social concern, but it doesn't think through the mechanics of business." 

Horner explained this is also partly why so many different parties were encouraged to be involved in the discussions for this particular Standards Australia report.

"We want industry to co-shape what these products look like and it becomes a user-friendly document, which makes life easy for everyone," he said. 

"If you regulate standards that haven't been developed by industry, the danger you impose is massive costs overnight. It's both a technological challenge and a financial one. That's why we engage with people from such a cross-section to get agreement. It actually does happen." 

Futureye managing director Katherine Teh emphasised how introducing standards is "creating new ground" for conversations about how AI could benefit society, while also addressing the fear factor surrounding the technology.

"Fear is what drives regulation when you don't resolve the difference. What we're thinking -- and we're seeing with the pandemic now -- is the fear reaction might actually overshoot because it's intuitive and we already have that with AI," she said during the panel discussion."

"All settings are in place for an overreaction from society about the implications and this version of the world where we're no longer in charge of our own destiny if computers take over ... There's also this perception that 83% of people think they're going to lose their jobs and 25% are going to be out of work, and AI is the cornerstone for all of that."

"It comes to represent something of the fourth industrial revolution and we have to address those sorts of challenges and fears by getting people to get familiar about how they're being used, what the opportunities of engagements might be, how algorithms are being overlaid." 

Other recommendations that were made in the report included streamlining requirements in areas such as privacy risk management to increase Australian businesses' international competitiveness; developing standards that take into account diversity, inclusion, fairness, and social trust; and growing Australia's capacity to develop and share best practice in the design, deployment, and evaluation of AI systems. 

The report builds on CSIRO's Data61 AI discussion paper, released in April last year, and the federal government's AI ethics principles, announced in November. Both aimed to develop guidelines that are not only practical for businesses but also help citizens build trust in AI systems.

Releasing this report also follows in the footsteps of countries such as the US, UK, China, and Germany, which have each established their own AI standards.

Similar principles have also been set out by the Organisation for Economic Co-operation and Development (OECD), with debates about developing a workable international AI framework for the global tech community still ongoing. So far, 42 countries, including Australia, have committed to developing consensus-driven AI standards based on the OECD AI principles.

These principles include that AI should benefit people and the planet by driving inclusive growth, sustainable development, and wellbeing; be designed in a way that respects the rule of law, human rights, democratic values, and diversity; provide transparency and responsible disclosure around AI systems; and that organisations developing, deploying, or operating AI be held accountable.

Related Coverage

AI and ethics: The debate that needs to be had
Like anything, frameworks and boundaries need to be set -- and artificial intelligence should be no different.

NAB, CBA, Telstra, and Microsoft to test Australian government AI ethics principles
The businesses have voluntarily put their hands up to test the principles in real world scenarios.

Battleground over accountability for AI
AI deployments are saturating businesses, but few are thinking about the ethics of how algorithms work and the impact they have on people.

Human Rights Commission wants privacy laws adjusted for an AI future
It is one of 29 proposals the commission has put forward as it seeks to address the impact that new technologies, such as artificial intelligence, will have on human rights.

CSIRO promotes ethical use of AI in Australia's future guidelines
For Australia to realise the benefits of artificial intelligence, CSIRO said it's important for citizens to have trust in how AI is being designed, developed, and used by business and government. 
