Xilinx refines AI chips strategy: It’s not just the neural network

Xilinx hopes to take a big chunk of the market for semiconductors that process machine learning inference tasks by convincing developers it's not only about neural network performance. It's about the entire application.
Written by Tiernan Ray, Senior Contributing Writer

Chip maker Xilinx on Tuesday held its annual "analyst day" event in New York, where it told Wall Street's bean counters what to expect from the stock. During the event the company fleshed out in a little more detail how it will go after a vibrant market for data center chips, especially those for machine learning. 

That market is expected to rise to $6.1 billion by 2024 from $1.8 billion in 2020. 

The focus for Xilinx is a raft of new platform products that take its capabilities beyond the field-programmable gate arrays, or FPGAs, it has sold for decades. That requires selling developers of AI applications on the notion that there's more to speed up in a computer than just the neural network itself.

Data center is a small part of Xilinx's overall revenue, at $232 million in the fiscal year ended in March, out of a total of $3.1 billion in company revenue. However, it is the fastest-growing part of the company, rising 46% last year. The company yesterday said data center revenue growth is expected to accelerate, rising in a range of 55% to 65% this fiscal year, versus the compounded annual growth of 42% in the period 2017 through 2019.


Xilinx expects to gain ground in machine learning inference by virtue of "tiles," compute blocks that connect to one another over a high-speed memory bus, inside the "AI Engines" portion of its "Versal" programmable chips.


To do so, Xilinx is moving past its heritage in FPGAs to something more complex. FPGAs contain a vast field of logic gates that can be rearranged, so they can be tuned to a task for higher performance and greater energy efficiency.

Xilinx now wants to sell platform chips, which still possess programmable logic gates but also integrate several functional elements that are more particular to a task, such as machine learning, all on a single silicon die.

The company's chief executive, Victor Peng, opened his presentation with a diagram of applications, a kind of sandwich in which the machine learning algorithms sit between two other parts, a pre-processing step and a post-processing step. Xilinx's focus at the moment is on inference tasks, in which machine learning models deliver predictions, not on training. 

Also: Intel's AI chief sees opportunity for 'massive' share gains

It would be good, said Peng, if chips for machine learning sped up not only the processing of the neural network in the middle, but also the pre- and post-processing steps on either side of it. 

"It's not just about the neural net processing, even though that's what gets talked about," said Peng. "It's about the entire application."


Xilinx's CEO, Victor Peng, emphasized during the company's analyst day event that it's not just the neural network but the entire application that needs to be made to perform better, a pitch he hopes will advance the company's argument for Versal and its other integrated "platform" products.


For example, said Peng, in autonomous vehicle technology, such as "advanced driver-assistance systems," or ADAS, "the total latency is what you care about, but machine learning is only a single step." 
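Peng's point can be sketched in a few lines of Python: a toy pipeline in which the neural network is already the fastest stage, so accelerating it alone barely moves total latency. All stage names and timings below are hypothetical placeholders, not Xilinx measurements.

```python
import time

# Toy end-to-end inference pipeline: pre-processing -> neural net -> post-processing.
# The sleep() durations stand in for hypothetical per-stage compute times.

def preprocess(frame):
    # e.g. resize, normalize, color-convert an input frame
    time.sleep(0.004)          # pretend this takes 4 ms
    return frame

def neural_net(tensor):
    # the accelerated part everyone talks about
    time.sleep(0.002)          # pretend inference takes 2 ms
    return tensor

def postprocess(scores):
    # e.g. non-max suppression, tracking, formatting results
    time.sleep(0.004)          # pretend this takes 4 ms
    return scores

start = time.perf_counter()
result = postprocess(neural_net(preprocess("frame")))
total_ms = (time.perf_counter() - start) * 1000

# Even an infinitely fast neural net would cut only ~2 ms out of ~10 ms here;
# the pre- and post-processing stages dominate the total latency.
print(f"total latency: {total_ms:.1f} ms")
```

In an ADAS setting, that total figure, not the neural-net time in isolation, is what determines how quickly the vehicle can react.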

To handle all that, Xilinx is betting Intel CPUs and Nvidia GPUs are both too limited. Rather, developers will want systems-on-a-chip that have programmable logic but also some specific functional units. 

That starts with currently available Zynq processors from Xilinx, which include an ARM CPU embedded within a sea of programmable logic gates. The next step is a product called "Versal," which Xilinx is just rolling out now. Versal has several programmable cores of different compute functions. One is an "AI Engines" logic block.

The AI Engines are actually a collection of tiles, individual areas on chip with vector processing and dedicated memory caches that are connected to one another through a high-speed bus. (For more details, read the Versal white paper.)

Also: Qualcomm President Amon intends to win in cloud where company failed in past

It remains to be seen, of course, whether machine learning developers, who increasingly appreciate the benefits of FPGAs, will cotton to a platform approach. Aside from Intel and Nvidia, Xilinx faces competition from Advanced Micro Devices, which sells both CPUs and GPUs; from Qualcomm, which just announced a forthcoming machine learning part; from cloud vendors, such as Google, that are building their own parts; and from a raft of startups, some of them coming from an FPGA background, such as Efinix.

(You can watch the entire webcast on the Xilinx investor relations website.)

During a question-and-answer period, Peng emphasized that the Versal "family" will consist of "six different products," suggesting the capabilities of the platform approach will expand. He assured the financial analysts in attendance at the meeting that Versal "will be really disruptive," and that it is "not just silicon" but also various software tools -- some of which are in development, hence the company's heightened spending this year.


Xilinx foresees rapid growth in the data center chips market, including parts for machine learning.


Peng's deputy in charge of data center, Salil Raje, added that the company will take advantage of the fact that machine learning models can be deployed to the device simply by importing TensorFlow or other code that is standard in machine learning development. "A lot of apps now work on a framework basis; we benefit from that," said Raje. "We just have to connect to the framework."
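Raje's "connect to the framework" remark can be illustrated with a minimal sketch: a compiler pass walks a framework-exported graph and maps each standard op onto a backend kernel, so developers keep writing ordinary TensorFlow-style code. The op names, kernel names, and `compile_graph` function are all invented for illustration; this is not Xilinx's actual toolchain.

```python
# A framework-exported model graph is, at its simplest, an ordered list of
# standard ops (name, attributes). These ops are hypothetical examples.
graph = [("conv2d", {}), ("relu", {}), ("matmul", {}), ("softmax", {})]

# The backend's job: map each standard op to a device kernel. Ops the
# accelerator does not support fall back to a CPU implementation.
KERNELS = {
    "conv2d":  "ai_engine_conv",
    "relu":    "ai_engine_relu",
    "matmul":  "ai_engine_gemm",
    "softmax": "fallback_cpu_softmax",
}

def compile_graph(graph):
    """Map every framework op to a backend kernel, or fail loudly."""
    plan = []
    for op, attrs in graph:
        if op not in KERNELS:
            raise ValueError(f"unsupported op: {op}")
        plan.append(KERNELS[op])
    return plan

plan = compile_graph(graph)
print(plan)
```

The design point is that the developer never sees the kernel mapping: they hand over a standard framework graph, and the vendor's tooling decides what runs where.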


Peng added some historical perspective, noting that when he worked many years ago at the company then known as ATI, which was eventually bought by AMD, "no one knew how to spell GP-GPU," referring to the "general-purpose GPU," the currently hot product from Nvidia and AMD. Also back then, "nobody knew that heterogeneous [computing] was the answer," he added, meaning chips that mix different kinds of circuitry together, such as Versal. "Now everyone understands the future is heterogeneous computing, so we can in a sense draft behind that," an analogy to the way cyclists ride behind one another to take advantage of the slipstream of a lead rider.

One financial analyst in the room pointed out that it seemed the market for innovative inference chips was taking a long time to materialize -- Intel still dominates the market with its CPUs. "There is still a lot that is being done on CPU," Peng conceded, adding, "but customers like Twitch are coming to us because they just can't get it done on the CPU," referring to the video game streaming operation owned by Amazon. Amazon, along with Baidu, has rolled out multiple "availability centers" around the globe where you can rent use of Xilinx's FPGAs.

Peng urged patience. He said Xilinx still needs to "get the whole software stack there," meaning various tools over and above Zynq and Versal. He pointed out that its new plug-in processing cards, called Alveo, which make the chips easy to deploy, were only released at the end of 2018. 

"We always thought it would be a big opportunity, but over time," he said.

