
War machines: DOD's ethical principles for battlefield AI

As the role of AI in defense grows, can new ethical guidelines avert irresponsible adoption?
Written by Greg Nichols, Contributing Writer

Earlier this year, the Department of Defense announced the adoption of its ethical principles for artificial intelligence. ZDNet has been tracking this process since it gained momentum, but the DoD's official adoption of the principles was overshadowed in the general media when COVID-19 hit the US.

For some perspective on what the new framework means for the growing use of AI in war, I reached out to Justin Neroda, a vice president at Booz Allen Hamilton who leads the firm's Analytics and AI business.

One thing Neroda made clear is that these principles are in no way abstract. In fact, artificial intelligence is already being used in a variety of national defense applications. Examples include AI that builds more effective cyber defenses, predictive maintenance that flags military equipment before it breaks down, and implementations designed to improve the performance and readiness of soldiers.

"Initially, AI was focused on back-office operations and has shown that it can effectively increase the efficiency of these operations," explains Neroda. "There is now an increasing focus on operationalizing AI to move pilots and prototypes out of the lab and into operations where they can have an increased impact."

As Neroda points out, a key factor in the continued expansion of AI applications in defense is improved automation and standards associated with MLOps, the structured process for developing, testing, deploying, and monitoring AI/ML solutions. That's where much of the work is focused now.
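To make that concrete, here's a minimal sketch of what a single MLOps-style pipeline stage might look like in Python: train, evaluate against an agreed metric, and only promote the model if it clears the bar. The dataset, threshold, and function names are illustrative stand-ins, not DoD or Booz Allen code.

```python
# Illustrative MLOps-style stage: train, evaluate against an agreed
# threshold, and only "promote" the model if it clears the bar.
import json
import time

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCEPTANCE_THRESHOLD = 0.85  # assumed, agreed-upon acceptance bar

def run_pipeline_stage():
    # Stand-in for a versioned training dataset pulled from a data store.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the evaluation so deployment decisions are auditable.
    print(json.dumps({"timestamp": time.time(), "accuracy": accuracy}))

    # Gate deployment on the agreed metric, not on judgment alone.
    if accuracy >= ACCEPTANCE_THRESHOLD:
        return model  # would be registered/deployed in a real pipeline
    raise RuntimeError(f"Model below threshold ({accuracy:.3f}); not promoted")

if __name__ == "__main__":
    run_pipeline_stage()
```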

So what are the DOD's newly adopted principles for the ethical adoption of AI?

The DOD's principles for the ethical adoption of AI lay out a framework of factors that must be taken into consideration when designing AI solutions for defense, and they highlight elements that are critical to keeping ethics at the forefront when implementing those solutions. The principles are far-reaching, but broadly speaking they mandate that AI be equitable and traceable. In other words, the level of bias in the training data used for AI development, a key weakness in any big-data application, must be identifiable and measurable.

"This requires robust configuration management of model training data to ensure trained models can be directly linked to training data," says Neroda.

The principles also stress reliability. "To measure the level of reliability, standard metrics will be required and agreed upon and then continuously monitored through development and deployment to ensure the reliability is maintained at an acceptable level given the risk of the AI being utilized."
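What "continuously monitored" might mean in code: the sketch below tracks a rolling accuracy over recent predictions and flags when it slips beneath an agreed level. The window size and threshold are illustrative assumptions.

```python
# Illustrative reliability monitor: track rolling accuracy over recent
# predictions and flag when it falls below the agreed acceptance level.
from collections import deque

class ReliabilityMonitor:
    def __init__(self, window: int = 500, threshold: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold            # assumed acceptable level

    def record(self, prediction, ground_truth) -> None:
        self.outcomes.append(1 if prediction == ground_truth else 0)

    @property
    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def is_acceptable(self) -> bool:
        # Only judge once the window holds enough evidence to be meaningful.
        return len(self.outcomes) < 50 or self.rolling_accuracy >= self.threshold
```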

Governability is also a key issue. AI systems will need a monitoring capability that demonstrates a required level of proficiency. More than that, they will need a mechanism to disengage and revert to alternative approaches, or to trigger automated retraining, when that proficiency is not met.
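A minimal sketch of such a disengage mechanism, assuming a monitor like the one above and a hypothetical rule-based fallback, might look like this:

```python
# Illustrative governability guard: serve the AI model while it remains
# proficient, disengage to a conventional fallback and request automated
# retraining when it does not.
def governed_decision(model, fallback_rules, monitor, features):
    """`model`, `fallback_rules`, and `monitor` are hypothetical stand-ins."""
    if monitor.is_acceptable():
        return model.predict([features])[0]  # normal AI path
    request_retraining(reason="rolling accuracy below threshold")
    return fallback_rules(features)          # revert to alternative approach

def request_retraining(reason: str) -> None:
    # Stand-in for kicking off an automated retraining job.
    print(f"Retraining triggered: {reason}")
```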

Where are we on AI and defense compared to other countries?

The good news, perhaps, in a world that's feeling increasingly fractured and isolated, is that the US isn't acting unilaterally. In fact, the US and Europe have adopted very similar AI principles this year. The US also recently joined the G7 AI panel for setting ethical guidelines for the use of AI, which was created to guide the responsible adoption of AI based on shared principles of human rights, inclusion, diversity, innovation, and economic growth.

Not all regimes are of one mind when it comes to AI, however.

"Other countries have taken a more aggressive stance towards AI in how they have implemented it," says Neroda, "but moving at this pace without taking into account things like the AI principles, in the long run, will either result in applications of AI that don't meet required performance thresholds or have the potential to result in non-optimal implementations of AI and require significant rework to meet AI solution objectives. Historically, early adopters of new technologies have not necessarily been the most successful."

Will critics of the growing use of automation technologies in battle be satisfied? It's not likely, although the principles outlined by the DoD do seem to represent a genuine engagement with mounting concerns about the use of paradigm-shifting technologies in battle.

"These principles are a start in a journey to satisfy these skeptics, but to fully address them, a more defined quantitative framework and process for measuring the compliance with these principles will be required to ensure future AI solutions adhere to these principles at an acceptable level."
