The Korea Advanced Institute of Science and Technology (KAIST) will soon open an artificial intelligence weapons lab in collaboration with Hanwha Systems, a defence company that builds cluster munitions despite United Nations bans.
Ahead of a UN meeting in Geneva on April 9, slated to discuss the challenges posed by lethal autonomous weapons, over 50 AI and robotics researchers from 30 countries have declared that they will boycott all contact with KAIST once the "killer robot" lab opens.
In a letter to the South Korean university, the group said it will not resume collaboration with KAIST until assurances are provided that the centre will not develop autonomous weapons "lacking meaningful human control".
"At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons," the letter reads.
"If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints.
"This Pandora's box will be hard to close if it is opened."
The boycott, organised by University of New South Wales (UNSW) Scientia Professor of artificial intelligence Toby Walsh, follows open letters released in 2015 and 2017 that warned of the dangers of autonomous weapons.
The 2017 letter comprised 116 signatories, including entrepreneur Elon Musk, who has previously warned of the dangers AI presents, and Google's Mustafa Suleyman.
"Back in 2015, we warned of an arms race in autonomous weapons," said Walsh. "That arms race has begun. We can see prototypes of autonomous weapons under development today by many nations including the US, China, Russia, and the UK. We are locked into an arms race that no one wants to happen. KAIST's actions will only accelerate this arms race. We cannot tolerate this."
A 2015 report, released by Human Rights Watch and Harvard Law School, detailed how a lack of regulation could lead to human deaths without accountability, stating that under current laws, programmers, manufacturers, and military personnel would all escape liability for deaths caused in the field by fully autonomous weaponry.
It also suggested that no legal framework is likely to clearly state where responsibility lies in the production and deployment of such weapons -- and therefore there would be no retribution or restitution when errors occur.
Another open letter in 2015, also signed by Musk, late physicist Stephen Hawking, Apple co-founder Steve Wozniak, and thousands of researchers, called for a ban on the development of autonomous weapons, warning that armed quadcopters designed to search for and eliminate people could become a reality within years.
A concern raised in the 2015 letter was that unlike nuclear weapons, autonomous weapons can be built with off-the-shelf equipment. As a result, they'll be cheap and easy to mass produce, and could be traded on the black market, possibly ending up in the hands of terrorists, dictators, and warlords.