The military wants to build lethal tanks with AI

The ATLAS project will give combat vehicles autonomous targeting capabilities.

The US Army is launching a new initiative to design combat vehicles equipped with artificial intelligence (AI) for greater targeting accuracy and improved ground combat performance.

The Army Contracting Command (ACC) has sent out a request on behalf of the US Army Combat Capabilities Development Command (CCDC) for contractors, vendors, and academics to submit proposals to the Advanced Targeting and Lethality Automated System (ATLAS) program.

ATLAS is a program designed to improve military technology through computer vision, AI, and machine learning (ML). 

In particular, the military wishes to develop "autonomous target acquisition technology" which can be merged with fire control systems in ground combat vehicles.

The US Army envisions a future in which AI will give combat vehicles the capability to "acquire, identify, and engage targets" up to three times faster than current manual processes.

The program may draw on sensors, processing technology, image recognition, world modeling, range determination, AI and ML algorithms, augmentation, LIDAR, and rangefinders, among many other technologies.

However, the military is keen to emphasize that any project proposals are still subject to the Department of Defense (DoD) Directive 3000.09, of which two particular clauses are of interest:

  • Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force;
  • Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.

"All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards," the ACC added.

These assurances, however, may not quell the rising concern that many have when it comes to the development of "killer robots" made possible through advances in AI and ML.

The Campaign to Stop Killer Robots group, for example, is a coalition of non-governmental organizations (NGOs) that hopes to ban fully autonomous weapons in order to "retain meaningful human control over the use of force."

The NGOs involved in the campaign warn that countries including the US, UK, Russia, and China are developing military applications with "significant autonomy in the critical functions of selecting and attacking targets," and unless stringent safeguards are in play, "the world could enter a destabilizing robotic arms race."

Earlier this month, the US Army awarded a $39.6 million contract to FLIR Systems to develop tiny, handheld drones for reconnaissance missions by ground troops. The drones, which weigh less than 33 grams, are able to take HD images and record live footage. 
