
The Pentagon says its new AI can see events 'days in advance'

The US military has been trialing an algorithm that anticipates the enemy's next move days in advance.
Written by Daphne Leprince-Ringuet, Contributor
Image: US Air Force / Tommy Grimes

The US military is testing cutting-edge data-gathering tools combined with artificial intelligence to predict enemies' next moves up to days in advance.  

Speaking at a press conference, the commander of US Northern Command (NORTHCOM), General Glen VanHerck, revealed that trials have been ongoing to improve the military's use of data in key strategic decisions, with the third iteration of an initiative called the Global Information Dominance Experiment (GIDE) showing promising results. 

GIDE was designed to increase access to real-time information that can help leaders prepare for enemy action and hopefully deter it, rather than react to conflict once it has started.

SEE: Attacks on critical infrastructure are dangerous. Soon they could turn deadly, warn analysts

The latest experiment carried out by the Pentagon saw 11 US combatant commands simulate a scenario in which a crucial site, such as the Panama Canal, is taken over by an adversary.  

VanHerck explained that during the simulated operation, data was gathered from various sensors spread across the globe, both military and civilian; the information was then run through an AI model capable of detecting patterns and raising an alert when it spotted signs such as a submarine preparing to leave port. 
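The article gives no detail on how GIDE's models work internally, but the workflow VanHerck describes – pooling sensor readings and flagging departures from normal activity – maps onto standard anomaly detection. The sketch below is a minimal illustration of that idea, not the Pentagon's system: the SensorReading fields, the baseline table and the z-score threshold are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple

@dataclass
class SensorReading:
    source: str      # e.g. "satellite", "radar", "undersea sensor"
    site: str        # observed location
    indicator: str   # e.g. "vessels_at_pier"
    value: float     # measured quantity (a count, signal level, etc.)

def flag_anomalies(readings: Iterable[SensorReading],
                   baselines: Dict[Tuple[str, str], Tuple[float, float]],
                   threshold: float = 2.0) -> List[str]:
    """Compare each reading with its historical baseline (mean, std dev)
    and return alerts for readings that deviate sharply from normal."""
    alerts = []
    for r in readings:
        mean, std = baselines.get((r.site, r.indicator), (0.0, 1.0))
        z = (r.value - mean) / (std or 1.0)
        if abs(z) >= threshold:
            alerts.append(f"{r.site}: unusual {r.indicator} "
                          f"reported by {r.source} (z={z:+.1f})")
    return alerts

# Hypothetical example: pier activity well above its usual level
baselines = {("Port A", "vessels_at_pier"): (3.0, 1.0)}
readings = [SensorReading("satellite", "Port A", "vessels_at_pier", 7)]
print(flag_anomalies(readings, baselines))
# ['Port A: unusual vessels_at_pier reported by satellite (z=+4.0)']
```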

Knowing in advance what the enemy might be preparing to do let commanding officers take measures such as deploying troops, in an effort to deter conflict.  

"What we've seen is the ability to get way further what I call left, left of being reactive to actually being proactive," said VanHerck. "And I'm talking not minutes and hours, I'm talking days." 

Deployed to the wider force for real-world situations, the technology could pull together information in real time from existing satellites, radars and undersea sensors, as well as cyber and intelligence capabilities, and make it available through the cloud for AI models to process.  

"The ability to see days in advance creates decision space. Decision space for me as an operational commander to potentially posture forces to create deterrence options to provide that to the secretary or even the president," he said.

All of this information is already available, stressed VanHerck, but it currently takes dedicated analysts hours or even days to sift through the mountains of data generated every day before they notice patterns of interest.  

"Keep in mind that it's not new information. It's information that today is just not analyzed and processed until later in the time cycle, if you will," said VanHerck. 

"And all we're doing is taking and sharing it and making it available sooner. So that our key decision-makers will have options versus being reactive where they may be forced to take some kind of escalation option." 

The algorithms described by VanHerck could, for instance, monitor the average number of cars in a parking lot at enemy locations; they could count the airplanes parked on a ramp and trigger a warning when that number changes, and they could even spot missiles being prepared for launch. This could provide the Pentagon with days of advance warning, according to the commander.  
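As a concrete illustration of the kind of change detection described above, the toy function below compares today's count of objects at a site with a recent average and warns when activity jumps sharply. The counts, the seven-day window and the 1.5x trigger are invented for the example and are not drawn from GIDE.

```python
from typing import List, Optional

def advance_warning(history: List[int], today: int,
                    window: int = 7, jump_ratio: float = 1.5) -> Optional[str]:
    """Warn when today's object count (cars in a lot, aircraft on a ramp)
    rises well above its recent average; return None otherwise."""
    if len(history) < window:
        return None  # not enough baseline data to judge
    avg = sum(history[-window:]) / window
    if avg and today / avg >= jump_ratio:
        return f"Activity up {today / avg:.1f}x over the {window}-day average"
    return None

# Hypothetical example: a ramp that usually holds about 10 aircraft holds 18
print(advance_warning([9, 10, 11, 10, 9, 10, 11], 18))
# Activity up 1.8x over the 7-day average
```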

Using AI to better inform military decisions is a key objective that the Pentagon has made clear for some time, especially as other countries ramp up the use of technology in the defense sector.  

But the growth of automation tools in warfare is raising serious concerns among some advocacy groups that algorithms might be empowered to inform life-and-death decisions, and eventually even to make those decisions themselves. 

SEE: Fewer troops, but more tech: Military downsizes as it shifts to AI, drones and cyber

The GIDE experiments, in fact, were carried out together with other groups within the US Department of Defense, including Project Maven – an initiative that sparked controversy in 2018, when Google employees rebelled against the company's involvement in the project. 

After Google was contracted to help build the technology for Project Maven, which aimed to develop AI that could spot humans and objects in large amounts of video captured by military drones, thousands of staff signed a petition calling for the company to pull out. Employees cited fears that they would be involved in an initiative that would contribute to identifying potential targets. 

VanHerck, for his part, was keen to address concerns over the use of AI in GIDE trials. "Humans still make all the decisions in what I'm talking about," he said. "We don't have any machines making decisions.

"We're not relying on computers to take us to create deterrence options or defeat options." 

According to VanHerck, the software capabilities trialed in GIDE are already available and ready to be fielded across combatant commands. Further improving the impact of the technology, he continued, will also require collaboration with international allies and partners, who are expected to be brought in to participate in what could become a global exchange of real-time intelligence. 
