DARPA: Our goal is 100x faster network card for tomorrow's AI

DARPA is asking for help to create the super-fast network interface card that industry has so far failed to produce.
Written by Liam Tung, Contributing Writer

DARPA wants to break a network speed bottleneck caused by network interface cards (NICs) that aren't cut out to support tomorrow's demands for artificial intelligence. 

To achieve its goal, the Department of Defense agency is launching the Fast Network Interface Cards (FastNICs) program. 

The FastNIC isn't a technology just yet; rather, DARPA's FastNICs program is seeking engineering talent to help the agency hit its target of boosting the performance of the network stack on servers by 100 times.

SEE: Sensor'd enterprise: IoT, ML, and big data (ZDNet special report) | Download the report as a PDF (TechRepublic)

As DARPA explains, networking performance has broadly tracked computing performance as described by Moore's Law. 

But NICs, the hardware that connects a computer to an Ethernet network, have not kept pace; combined with a few other factors, such as memory and software design, the result is application throughput that is too slow for the future of AI and distributed computing. 

"The true bottleneck for processor throughput is the network interface used to connect a machine to an external network, such as an Ethernet, therefore severely limiting a processor's data ingest capability," said Dr Jonathan Smith, a program manager in DARPA's Information Innovation Office (I2O). 

The network throughput on the best hardware today is about 10^14 bits per second (bps), or roughly 100 terabits per second, and data is processed in aggregate at about the same 10^14 bps rate, according to Smith. 

The key problem is that current network stacks are much slower, delivering application throughputs of only about 10^10 to 10^11 bps, that is, in the range of tens to around a hundred gigabits per second.
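Taking the article's powers of ten at face value, the size of the gap and what a 100x improvement would buy can be sketched with some back-of-the-envelope arithmetic (the figures below are the article's, not independently verified hardware specs):

```python
# Figures quoted in the article (assumption: order-of-magnitude values only).
hardware_bps = 1e14                      # aggregate throughput of the best hardware
app_bps_low, app_bps_high = 1e10, 1e11   # typical application throughput today

# How far the network stack lags behind the raw hardware:
gap_best = hardware_bps / app_bps_high   # 1,000x at best
gap_worst = hardware_bps / app_bps_low   # 10,000x at worst

# FastNICs' stated goal: a 100x speed-up of the stack.
target_low = app_bps_low * 100           # 10^12 bps = 1 Tbps
target_high = app_bps_high * 100         # 10^13 bps = 10 Tbps

print(f"stack lags hardware by {gap_best:.0f}x to {gap_worst:.0f}x")
print(f"100x goal => {target_low/1e12:.0f} to {target_high/1e12:.0f} Tbps application throughput")
```

Even after a 100x improvement, application throughput at the top of that range (10 Tbps) would still sit an order of magnitude below the 10^14 bps the hardware can move in aggregate, which is consistent with DARPA framing the stack, not the wire, as the bottleneck.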

DARPA wants to solve a particular challenge in distributed computing since it depends heavily on communication between computing nodes. The lack of significantly faster NICs is also becoming a challenge for AI in deep neural network training and image classification. 

But what DARPA wants from FastNICs is no small task: it would require an overhaul of the entire network stack. 

"Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice," DARPA notes in the tender document.

Under FastNICs, DARPA will select a challenge application and provide it with the hardware and software support it needs. On the hardware side, researchers will have to demonstrate 10Tbps network interface hardware using existing or road-mapped technology, and that hardware will need to attach to servers via standard interfaces. 

"There is a lot of expense and complexity involved in building a network stack – from maximizing connections across hardware and software to reworking the application interfaces," explained Smith. 

"Strong commercial incentives focused on cautious incremental technology advances across multiple, independent market silos have dissuaded anyone from addressing the stack as a whole," he added. 

SEE: An AI privacy conundrum? The neural net knows more than it says

DARPA's key areas of interest are distributed machine learning and sensors for things like UAVs and self-driving cars. Faster network interfaces would let all the cores in a cluster of computers keep working towards a single goal as they do today, only faster. 

"Recent research has shown that by speeding up the network support, the entire distributed machine-learning system can operate more quickly," said Smith. 

"With machine learning, the methods typically used involve moving data around, which creates delays. However, if you can move data more quickly between machines with a successful FastNICs result, then you should be able to shrink the performance gap."
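Smith's point about data movement can be made concrete with a hypothetical exchange of gradients between two training nodes. The 1 GB payload size below is an illustrative assumption, not a figure from DARPA; the link speeds are the article's current-stack and FastNICs-goal numbers:

```python
# Hypothetical gradient exchange between two training nodes.
# Assumption: a 1 GB payload; real model updates vary widely in size.
payload_bits = 1 * 8e9          # 1 GB of gradients, expressed in bits

todays_stack_bps = 1e11         # ~100 Gbps application throughput today
fastnics_goal_bps = 1e13        # 100x faster, the ~10 Tbps FastNICs target

t_today = payload_bits / todays_stack_bps    # seconds per exchange now
t_goal = payload_bits / fastnics_goal_bps    # seconds per exchange at goal

print(f"today: {t_today*1000:.1f} ms per exchange; FastNICs goal: {t_goal*1000:.2f} ms")
```

Under these assumptions the per-exchange delay drops from 80 ms to under 1 ms, which is the sense in which a faster NIC "shrinks the performance gap" for communication-bound training.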


DARPA reckons today's network interface cards are creating a network speed bottleneck.

Image: DARPA