Australians need to have a diplomatic discussion about the potential impact of advanced artificial intelligence (AI) and the boundaries that need to be established to ensure AI is developed and used for good, according to federal parliamentarians Bridget McKenzie and Ed Husic.
Speaking at the Australian Computer Society's (ACS) Reimagination Thought Leaders Summit, Senator McKenzie, chair of the Foreign Affairs, Defence and Trade Legislation Committee, said that if bright minds like Stephen Hawking and Elon Musk are warning of "evil AI" destroying humankind if not properly monitored and regulated, then it is something the nation needs to discuss publicly.
"I think 'man against machine' has been a powerful narrative with our species for a very, very long time," McKenzie said during a panel discussion. "We always end up winning because somehow we always write the script so that we're smarter in the end than the machine."
"But I think when the creators of this technology ... have concerns, I think we mere mortals really should pay attention because they're the guys that have actually developed this technology, they understand its potential.
"Sometimes I think we can get very excited about the potential development of the next step in your scientific endeavour, and forget that it is part of a wider ... society and a civilisation."
McKenzie referenced the view of Swedish philosopher Nick Bostrom that humanity could end up designing its own demise.
"I think we do need to be very, very cognisant of that fact because there's not a lot of research. Everyone's so excited about the potential that they're not doing the research into the impact," McKenzie told ZDNet.
The discussion should not be restricted to technology or digital transformation enthusiasts, she added; rather it should be broadened to "our suburban streets, our schools, our public squares".
"You don't want [AI] to be the solution to so many of our societal problems that when the public says 'hang on' and raises concerns, that it's already too late. We need to be having those discussions early, and for very rational and reasonable reasons," McKenzie said.
"It's not about being fearful of technology or not being the cool kid on the block. It's about having a rational concern and actually understanding the potential of this technology. The potential of this technology is that it's not just a robot listening to your commands ... it's a robot that is able to think.
"How much technology-enabling infrastructure can you have on your body before you become a robot?"
Shadow Minister for the Digital Economy Ed Husic said there is an opportunity for Australia to "champion the issue", adding that initial discussions about AI do not need to focus on regulation.
"Let's have a discussion about ... the boundaries we need to put in place. Obviously, we want people to have the creative freedom to develop and use AI in a way they can give maximum benefit to humanity, but where's the trip wire? I think we haven't really focused on that enough," he said during a panel discussion.
"I think our country should make this a diplomatic priority, working with like-minded nations to start thinking on a world stage about what we're going to do. If the World Economic Forum says it's something we should think about, and they wouldn't necessarily rush to a regulatory response, then I think it's something we should think about."
While Terri Butler, Shadow Assistant Minister for Universities and deputy chair of the Standing Committee on Employment, Education and Training, agreed it's healthy to be fearful of the idea of self-aware machines with a lot of power, Trent Zimmerman, federal member for North Sydney, said "we should be alert, but not alarmed".
Zimmerman added that he does not think society should ever fear technology.
"Like any field of human endeavour -- for example advances in medical research and science -- you have to make sure there's an ethical framework in place," he said during the panel discussion.
"I think predictions of the third world war ... and the armageddon outcomes of AI are sensationalist, fed by Hollywood."
Ryan Gariepy, founder and CTO of Clearpath Robotics, was the first to sign an open letter to the United Nations calling for a ban on the development and use of autonomous weapons, otherwise known as "killer robots". Unlike other potential manifestations of AI, which "still remain in the realm of science fiction", autonomous weaponry is on the cusp of development, he said.
Musk and Google's Mustafa Suleyman were among a group of 116 founders of AI and robotics companies to sign the open letter.
"Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways," the letter states.
"We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
The letter warns that an arms race in such weapons threatens to spur the "third revolution in warfare", after gunpowder and nuclear arms. It urges that autonomous weapons be added to the list of weapons prohibited under the UN's Convention on Certain Conventional Weapons, which already includes blinding laser weapons.
"Nearly every technology can be used for good and bad, and artificial intelligence is no different," said Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, and one of the key organisers of the letter. "It can help tackle many of the pressing problems facing society today: inequality and poverty, the challenges posed by climate change, and the ongoing global financial crisis.
"However, the same technology can also be used in autonomous weapons to industrialise war. We need to make decisions today choosing which of these futures we want."
Others, like Swiss neuroscientist and Starmind founder Pascal Kaufmann, believe "true AI" does not yet exist because companies liken the human brain to a computer, when the brain does not process information, retrieve knowledge, or store memories the way a computer does.
Kaufmann told ZDNet earlier this year that AI will remain a stagnant field of technology until "the brain code has been cracked".
As it exists today, AI is often just the "human intelligence of programmers condensed into source code", Kaufmann said, adding that until we understand natural intelligence through neuroscience, we will not be able to build the artificial kind.