
UN to host discussion about AI in warfare

AI researchers warn world leaders that autonomous weapons shouldn't be allowed to make fatal decisions.
Written by Kelly McSweeney, Contributor

Recent advancements in artificial intelligence (AI) make it possible to design weapons systems that can target and attack without human intervention. This week, experts are expected to attend a meeting to discuss lethal autonomous weapons systems at the United Nations (UN) Palais des Nations in Geneva.

Representatives from more than 70 UN member states are expected to attend the meeting. According to a press announcement from the UN, this is the first formal inter-governmental discussion on what machine autonomy means for the law of armed conflict and the future of international security.

This is the first meeting of the Convention on Conventional Weapons (CCW) Group of Governmental Experts on lethal autonomous weapons systems. The CCW is a formal agreement that bans inhumane weapons, including incendiary weapons, mines and booby-traps, and weapons designed to injure through very small fragments. The agreement was established in 1980, but it can be amended to address new concerns, such as a 1995 protocol that preemptively banned blinding lasers. Now killer robots are up for debate, since Asimov's laws are not actual laws.

"It's time for countries to move from talking about the ethical and other challenges raised by lethal autonomous weapons concerns to taking preventative action," Mary Wareham, coordinator of the Campaign to Stop Killer Robots, said in a statement. "Endorse the call of the scientific community and non-governmental organizations to preemptively ban weapons that would select and fire on targets without meaningful human control."

Wareham is among a group of AI experts and human rights advocates who have been urging the UN to ban autonomous weapons. In August, Elon Musk and 115 other leaders in robotics and AI signed an open letter to the UN. The ominous letter stated:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close.

A separate group of AI researchers also just released a melodramatic video depicting what could happen if fully autonomous weapons are developed. (Hint: swarms of autonomous drones attack civilians.)

This week's meeting isn't focused on a ban on fully autonomous weapons, but it's a step in that direction.

Speaking to reporters in Geneva on November 10, ahead of the event, Ambassador Amandeep Singh Gill of India, who chairs the group, made it clear that we shouldn't expect a ban yet. He said, "It would be very easy to just legislate a ban: whatever it is, let's just ban it." He added, "But I think that we, as responsible actors in the international domain, we have to be clear about what it is that we are legislating on."

Instead, it will be a conversation about the legal and ethical challenges with forthcoming military technology. The next step will likely be a mandate to continue work on autonomous weapons within the framework of the CCW.
