Can the Pentagon's new draft rules actually keep killer robots under control?

The Department of Defense has just unveiled its draft guidelines on the ethical use of AI in warfare.
Written by Daphne Leprince-Ringuet, Contributor

Killer robots, whether they're the product of scaremongering or a real threat to the international power balance, now have their very own set of ethical rules. However, the newly published Pentagon guidelines on the military use of AI are unlikely to satisfy their critics.

The draft guidelines were released late last week by the Defense Innovation Board (DIB), which the Department of Defense (DoD) had tasked in 2018 with producing a set of ethical rules for the use of AI in warfare.

The DIB has spent the past 12 months studying AI ethics and principles with academics, lawyers, computer scientists, philosophers, and business leaders, in a group chaired by ex-Google CEO Eric Schmidt.

What they came up with had to align with the DoD's AI Strategy, published in 2018, which states that AI should be used "in a lawful and ethical manner to promote our values".

"Maintaining a competitive advantage in AI is essential to our national security," the new document reads. "The DIB recommends five AI ethics principles for adoption by DoD, which in shorthand are: responsible, equitable, traceable, reliable and governable."

Built on those five pillars, the document covers a broad range of issues. It stresses, for instance, that when deploying autonomous systems, humans should exercise "appropriate" levels of judgement – that is, responsibility.

Equitability means that those systems should be free of unintended bias. And by traceable, the DIB means that AI tools should also be completely transparent, and that experts need to have access to the necessary data to understand exactly how they operate.

Autonomous systems should be consistently tested to make sure that they are reliable. Finally, they will have to be governable, which means that they should know how to stop themselves when they detect that they are likely to cause unintended harm. 

The new document runs to no less than 65 pages – unusually long. By contrast, to regulate the development of autonomous weapons, the DoD has since 2012 relied on a 15-page "directive", which established "guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems".

Fifteen pages, as it turned out, were not quite enough to establish an ethical framework for autonomous warfare. 

The DoD learnt this lesson in 2018, when its deal with Google to develop AI for drone video analysis collapsed because the company's employees vocally objected to their work potentially being used to kill people.

After 4,000 staff petitioned for Google to quit the deal, and a dozen employees left the company because of its involvement, the tech giant announced that it wouldn't renew its contract with the Pentagon.

The lack of trust in the way that the military might employ civilian technology is one of the reasons that the DoD commissioned new ethical standards. 

"This is the first time in recent history that neither DoD nor the traditional defense companies it works with controls or maintains favorable access to the advances of computing and AI," said the report.

It is easy to see why this situation is problematic in the current geopolitical context. China has declared AI a national priority, and Russia is ramping up military AI research at its Era technopolis – reason enough for the DoD to worry.

So, are the DoD's new guidelines strict enough to build civilian trust in the military use of AI?

That might be pushing it slightly, said Amanda Sharkey, lecturer in computer science at the UK's University of Sheffield and member of the Campaign to Stop Killer Robots. "The draft document helpfully highlights various risks," she told ZDNet. "But on crucial points, it remains disappointing."

The most important issue, she continued, is that of responsibility, which the new guidelines tackle by recommending that "human beings should exercise appropriate levels of judgment" when using an AI system. 

"That sounds fine," she said. "But it doesn't reflect what is really needed. If a human is supervising a swarm of weapons, they need to have enough time and information to deliberate before making a decision. That is 'meaningful', not 'appropriate' control."

So for Sharkey, 'appropriate' is not specific enough, and opens the door to poorly informed decision-making.

Any shortcomings in the new guidelines may in part be because AI is a new technology – an issue that the DIB recognizes in its recommendations.

The document, for example, calls for rigorous testing and verification across the entire life cycle of AI systems to ensure they are reliable. But it also accepts that the very nature of machine learning means that traditional verification techniques are "insufficient, aging, or inadequate".

For Anders Sandberg, researcher at the Future of Humanity Institute at the University of Oxford in the UK, it is evident that the old methods won't work with the new technology – a sign of the bigger challenge the DoD faces in trying to design ethics for AI.

"Ethics is very much a work in progress. Principles only start taking effect when they became part of an industry's DNA. That takes time", he told ZDNet.

Sandberg doesn't entirely dismiss the idea that the new rules could have an effect on the future application of AI in warfare. However, he doubts that developers will suddenly have reliability and governability in mind when programming new systems.

It will take some time, he said, before they start realizing that thinking about ethics is also part of their job. 

"The problem being, of course, that when it comes to using AI for military purposes, time is not something we can afford," he added.
