Intel on Thursday announced the availability of its 3rd generation Xeon Scalable processors, touting the new chips as accelerating artificial intelligence (AI) development.
Codenamed Cooper Lake, the processors are designed for deep learning, virtual machine density, in-memory databases, mission-critical applications, and analytics-intensive workloads, Intel said.
Speaking with media, Lisa Spelman, Intel corporate vice president and general manager of the company's Xeon and memory group, said customers can expect an estimated average gain of almost double on "popular workloads" and over double on virtual machine density, albeit when compared with five-year-old equivalents.
"We have 35 million Xeon Scalable processors deployed and we really see this as the foundation of the world's data-centric infrastructure," Spelman said.
Spelman said Cooper Lake has received a deep learning "boost", thanks to x86 support for the Brain floating point 16-bit (bfloat16) numeric format and Vector Neural Network Instructions (VNNI), which she said bring enhanced artificial intelligence inference and training performance, with close to double the performance for AI training and inference on image classification workloads when compared to the prior generation.
"Intel Deep Learning Boost with bfloat16 delivers 1.7x more AI training performance for natural language processing vs. the prior generation," the company added in a spec sheet.
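For context, bfloat16 keeps float32's 8-bit exponent (and therefore its dynamic range) but truncates the mantissa from 23 bits to 7, halving storage and bandwidth per value. A minimal sketch of the format, not Intel code -- it converts by simple truncation, whereas real hardware typically rounds to nearest-even:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Convert a float32 to bfloat16 by keeping its top 16 bits.

    bfloat16 shares float32's sign bit and 8-bit exponent, so the
    upper half of the float32 bit pattern is a valid bfloat16 value.
    (Truncation shown for clarity; hardware usually rounds.)
    """
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw float32 bits
    return bits >> 16  # upper 16 bits = bfloat16 representation

def bfloat16_bits_to_float32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low 16 bits."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# 1.0 is exactly representable in both formats, so it round-trips losslessly
assert bfloat16_bits_to_float32(float32_to_bfloat16_bits(1.0)) == 1.0
```

The reduced 7-bit mantissa is why bfloat16 suits AI training: neural networks tolerate low precision but need the full float32 exponent range to avoid overflow and underflow.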
According to Spelman, the latest offering is the "foundation for AI" and it's the only mainstream data centre CPU on the market with integrated deep learning acceleration.
"We believe most of our customers do begin their journey on AI on Xeon," she said.
"The 3rd generation of Intel's Xeon Scalables evolves our four- and eight-socket processor offering. It delivers built-in AI acceleration with our next generation of Intel DL Boost, which is enhanced with bfloat16, and it delivers persistent memory leadership with the new Intel Optane persistent memory 200 series," Spelman continued.
"We have also enhanced the speed select technology, which was new in the 2nd generation. This provides users control over the base and turbo frequencies of specific cores, and gives customers the option to maximise the performance of a certain application or a higher-priority workload, so they can guarantee a quality of service while still utilising the remaining compute capability for other workloads."
Intel said Cooper Lake delivers multi-socket core count density with up to 28 cores per processor and up to 224 cores per platform in an 8-socket configuration, which it touted as driving enhanced performance, throughput, and CPU frequencies compared to 2nd gen Intel Xeon Scalable processors.
Memory subsystem enhancements to the 3rd gen processors include support for up to six channels of DDR4-3200 MT/s and 16Gb DIMMs, with up to 256GB DDR4 DIMMs per socket.
The company also announced the Optane persistent memory 200 series, which Spelman said provides customers up to 4.5TB of memory per socket to manage data-intensive workloads, such as in-memory databases, dense virtualisation, analytics, and high-performance computing.
"It's delivering up to 18 TB of in-memory data in a four socket system, which is perfect for tackling those large data analytics challenges and in an unexpected power loss, Optane provides over 225-times faster CPU access to persistent data, compared to reading from a mainstream NAND SSD," she said.
Available from Thursday, the Optane persistent memory 200 series delivers an average of 25% higher memory bandwidth, when compared to the first generation.
Also packed into Thursday's announcement was the availability of Intel 3D NAND SSDs -- the Intel SSD D7-P5500 and D7-P5600.
Labelling them as being for systems that store data in all-flash arrays, Intel said the new 3D NAND SSDs are built with its latest triple-level cell 3D NAND technology and "an all-new low-latency PCIe controller to meet the intense IO requirements of AI and analytics workloads and advanced features to improve IT efficiency and data security".
Available in the U.2 15mm form factor, Intel said the SSDs offer 1.92 TB, 3.84 TB, and 7.68 TB capacities at one drive write per day (DWPD), and 1.6 TB, 3.2 TB, and 6.4 TB at 3 DWPD.
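DWPD ratings translate into total write endurance by a simple formula: capacity multiplied by DWPD, multiplied by the days in the warranty period. A quick sketch, assuming the five-year warranty term typical for data-centre SSDs (the warranty length is not stated in the announcement):

```python
def total_terabytes_written(capacity_tb: float, dwpd: float,
                            warranty_years: float = 5.0) -> float:
    """Rated endurance in terabytes written over the warranty period.

    DWPD (drive writes per day) says how many times the drive's full
    capacity may be written each day for the warranty term. A five-year
    term is assumed here, as is common for data-centre drives.
    """
    return capacity_tb * dwpd * warranty_years * 365

# e.g. the 7.68 TB model at 1 DWPD is rated for roughly 14,000 TB written
print(round(total_terabytes_written(7.68, 1)))
```

By this arithmetic, the 3 DWPD models trade capacity for roughly three times the endurance of their 1 DWPD counterparts, which is why the higher-endurance variants ship in smaller capacities.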
Intel also disclosed its upcoming Stratix 10 NX FPGAs, expected to be available in the second half of this year.
The company said the Stratix 10 NX, its first AI-optimised FPGA, targets high-bandwidth, low-latency AI acceleration and will offer customers customisable, reconfigurable, and scalable acceleration for compute-demanding applications such as natural language processing and fraud detection.
Stratix 10 NX FPGAs include integrated high-bandwidth memory, high-performance networking capabilities, and new AI-optimised arithmetic blocks called AI Tensor Blocks, which it said contain dense arrays of lower-precision multipliers typically used for AI model arithmetic.
The Stratix 10 NX FPGA offers high-performance AI Tensor Blocks -- up to 15x more INT8 throughput than the Stratix 10 FPGA digital signal processing (DSP) block for AI workloads -- plus up to 57.8Gbps PAM4 transceivers and hard Ethernet blocks for high-bandwidth networking.
Correction: A previous version of this article gave the processor code name as Copper Lake. It is Cooper Lake.