
Intel previews its next processor geared for machine learning

Code-named Knights Mill, the new version will increase the single-precision performance of Xeon Phi chips.
Written by Stephanie Condon, Senior Writer

Intel Executive Vice President Diane Bryant speaks at the Intel Developer Forum in San Francisco, Calif., on August 17, 2016. (Image: ZDNet)

Intel Executive Vice President Diane Bryant on Wednesday teased the next generation of the Xeon Phi processor, designed for even higher performance and efficiency on deep-learning models.

Coming in 2017, the processor is currently code-named Knights Mill. It maintains the onload model of the Xeon Phi, eliminating the need for a separate GPU. The new chip will increase single-precision performance, optimizing the silicon for machine-learning workloads.

"We commit to you a very long road map of optimized solutions for artificial intelligence," Bryant said.

Intel processors already power 97 percent of servers deployed to support machine-learning workloads, the company says. Yet just 7 percent of the servers deployed last year run machine learning, Bryant noted. Intel, of course, expects that figure to grow rapidly.

Nvidia questions whether the Xeon Phi can match its GPUs, but Intel argues that the onload model is preferable, particularly from a developer's point of view. And while Google has built its own custom chip, the Tensor Processing Unit (TPU), it is still investing in Xeon chips.

Bryant on Wednesday also announced that Intel is partnering with the National Energy Research Scientific Computing Center (NERSC), the flagship computing center for the US Department of Energy's Office of Science, to advance the frontier of machine learning at scale. Together, she said, they'll take on challenges like cataloguing all objects in the universe.

Intel's commitment to machine learning was also illustrated by its recent acquisition of Nervana. Nervana's silicon IP and expertise in deep learning should help beef up Intel's artificial intelligence portfolio.
