Flex Logix has two paths to making a lot of money challenging Nvidia in AI

The programmable chip company scores $55 million in venture backing, bringing its total haul to $82 million.
Written by Tiernan Ray, Senior Contributing Writer

The world of artificial intelligence processing is creating numerous opportunities for young companies, especially in the hotly contested area of inference, where a trained neural network is used on a device to make actual predictions.

That's the realm of Mountain View, Calif.-based Flex Logix, the seven-year-old startup that has for several years been going after Nvidia's position in the market for "inference at the edge."

Flex announced Monday that it has received $55 million in Series D funding, led by Mithril Capital Management and joined by existing investors Lux Capital and Eclipse Ventures, bringing its total raised to date to $82 million.

Geoff Tate, chief executive, told ZDNet in an interview that the company can be successful both by licensing its intellectual property to other chip designers, as it already does, and by selling its forthcoming inference chip, the InferX X1.

"We have two paths to being worth a lot of money," said Tate in an interview by phone last week. "Both can be worth tens of billions of dollars" as a business, he said, referring to the licensing and the outright chip sales approaches.

Twenty-eight years ago, Tate helped create a chip company called Rambus. Its technology is in every DRAM memory part that ships today. Tate thinks AI can be an even bigger market over time. Tate's personal investment vehicle, the Tate Family Trust, is also a returning investor this time around. He put his own money into each of the preceding rounds. 

Flex's "NMAX" technology is based on what's known as an "eFPGA," a kind of programmable chip consisting of tons of identical compute elements, called multiplier-accumulators, that perform the matrix multiplications that are the fundamental task in neural networks. The multiplier-accumulators make up a "systolic array," a kind of logic mesh, surrounded by lots of SRAM memory.

Also: AI startup Flex Logix touts vastly higher performance than Nvidia

The chip is aimed at the "edge," meaning, devices outside the data center, such as self-driving cars or IoT gadgets. 

The chip technology is already being licensed to multiple semiconductor designers, including Dialog Semiconductor, which is using it to "enhance" various mixed-signal chips. SiFive, a startup pursuing the RISC-V chip architecture, is basing one of its chips on the technology. And Flex has multiple deals with the US government, which wanted alternatives to Xilinx FPGAs.

Xilinx FPGAs are "all made in Taiwan," which is not acceptable in some cases to US agencies such as Sandia Labs for security reasons.

The licensing business is cash-flow positive, said Tate. Licensing can itself be worth a lot of money, as evidenced by ARM Ltd., the subsidiary of SoftBank Group whose CPU core designs are licensed by nearly every chip company on the planet.

ARM is in the process of being acquired by Nvidia for $40 billion, observed Tate. "It's just $2 billion in revenue, but their revenue is all margin, so you get a much higher valuation as a licensing company," said Tate, referring to ARM's gross profit margin of around 95%.

Flex's own chip, the InferX X1, is sampling to customers now, with volume production expected in Q2, said Tate.  

The optimization of the NMAX architecture has focused on 2-D imaging tasks such as YOLOv3, the real-time object detection model that is a popular benchmark for neural nets in the computer vision market.

Hence, inferencing at the edge might involve neural nets for things such as medical devices, imaging, robotics, or surveillance systems.

The only real contender there has been Nvidia, said Tate, with its Xavier NX part. Flex aims to lure customers away from Nvidia with both performance advantages and cost savings.

"When we talk to customers today, we find they are using tens of thousands per year" of Nvidia chips, said Tate. Nvidia's cheapest part, he said, is $350 per thousand pieces versus about $150 for the InferX X1.


But cost savings alone won't be enough, he said.

"The customers where we can save them some money, that's not going to be enough, no one will take a risk to their jobs for a few bucks," given that, like the old IBM adage, nobody ever gets fired for going with Nvidia. "But if we can also give them performance that can make their products drastically better, and let them take market share, then they'll take the risk.

"We are talking with customers in that category," said Tate. The funding is going to be useful for building a team, said Tate, to sell to those product companies.

"We outperform Xavier NX," said Tate. "Xavier is part ARM processor, but we are a lot more efficient," said Tate. The InferX X1 measures 54-square millimeters, whereas the Nvidia part is seven times bigger, he said.  

Flex measures its superiority by benchmarking against the Nvidia Xavier NX's performance on the YOLOv3 test. In results shared previously, the InferX X1 delivered many multiples of the NX's throughput in frames per second, where more frames processed per second is better.

"The goal is that in a year from now, customers look at us as a safe alternative to Nvidia." Tate declined to list prospective clients. Categories of prospectives include things such as factory inspection and medical imaging. "Basically any place where there is a sensor that is detecting a two-dimensional image."

"We have two businesses, both of which by themselves can make us a multi-billion-dollar company," said Tate. "We have two ways to win, and they're very synergistic."
