AMD doesn't have a fantastic record when it comes to server processors, but its new EPYC products are finding favour among customers looking for more 'bang for the buck' -- especially when it comes to HPC (High Performance Computing) and, interestingly, storage applications. What customers want, customers get, so despite an initial lack of enthusiasm from vendors used to simply following Intel, we're seeing a growing number of new EPYC-powered servers being released. Before we look in detail at some of these, however, let's summarise the advantages for buyers of AMD EPYC compared to Intel Xeon servers.
A lot of column inches have been devoted to the differences between the two architectures, but the headline difference is core count, with the Xeon Scalable Processor Family topping out at 28 cores per socket while AMD's EPYC 7000 Series processors can have up to 32.
EPYC processors also benefit from eight memory channels and support for up to 2TB of memory per socket, compared to six channels and 1.5TB of RAM with Xeon. AMD also wins out big-time on I/O, with support for 128 PCIe lanes per socket whereas Xeons support just 48. Intel processors, on the other hand, have more cache, plus support for the latest AVX-512 instruction set -- although code has to be rewritten to exploit these extensions, which are of most interest to developers of high-end HPC applications.
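To put that I/O difference in context, an NVMe U.2 or M.2 device typically uses a x4 PCIe link, so the lane counts quoted above translate directly into how many direct-attach NVMe drives a socket could feed. The sketch below is back-of-the-envelope arithmetic only -- real boards reserve lanes for slots, networking and the chipset, so the practical numbers are lower:

```python
# Back-of-the-envelope: how many x4 NVMe drives a socket's PCIe
# lane budget could feed, ignoring lanes consumed by slots,
# networking and the chipset (a deliberate simplification).
LANES_PER_NVME = 4  # a U.2/M.2 NVMe device typically uses a x4 link

def max_nvme_drives(total_lanes: int, reserved_lanes: int = 0) -> int:
    """Divide the remaining lane budget among x4 NVMe devices."""
    return (total_lanes - reserved_lanes) // LANES_PER_NVME

print(max_nvme_drives(128))  # EPYC 7000: 32 drives in theory
print(max_nvme_drives(48))   # Xeon Scalable: 12 drives in theory
```

This goes some way to explaining why EPYC is attracting interest for storage applications in particular.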
There has also been much debate over real-world performance differences and cost differentials, not to mention the little matter of brand loyalty. Still, AMD is causing a real stir in the server market and it didn't take much effort to find three new products to examine -- two from Supermicro specialist Boston Limited and the third from Dell EMC.
The first of the Boston servers is a new take on its existing Supermicro-based Quattro, effectively a 2U mini blade platform designed to accommodate four server sleds through slots at the rear, each holding an independent motherboard with two AMD EPYC processors on board. A redundant pair of 1100W slimline power supplies keeps the new Quattro running, while at the front there are 24 low-profile (2.5-inch) storage bays organised into four sets to give each server six of its own for direct attached storage.
Any of the EPYC 7000 Series processors can be specified, and because the servers are independent you can fit different processors in each sled to suit the expected workloads. The chips themselves are then covered by specially designed heatsinks to ensure good airflow in the limited space available, with shared fans in the main chassis to push the air around. That said, it's worth noting that, with the trend towards ever higher processor TDPs (up to 180W on the 32-core EPYCs), cooling needs to be carefully planned. As such, you're advised to have no more than four Quattro chassis in a rack without additional cooling measures.
In terms of memory there's a set of 16 DIMM slots arranged on either side of the processor sockets, enabling each server to have up to 2TB of RAM using ECC DDR4 modules clocked at 2,666MHz. The advice here, however, is to fully populate the slots with DIMMs regardless of capacity in order to maximise performance. That's due to the way memory is accessed and shared between the four CPU dies that make up the EPYC processor, each of which controls two of the eight memory channels.
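The reasoning behind that advice is straightforward: peak DDR4 bandwidth scales with the number of populated channels. A quick sketch of the theoretical arithmetic (peak figures only -- real-world bandwidth is lower, and EPYC's four-die layout adds NUMA effects on top):

```python
# Peak DDR4 bandwidth scales with the number of populated channels,
# which is why the advice is to fill all eight regardless of capacity.
TRANSFERS_PER_SEC = 2_666_000_000   # DDR4-2666: 2,666 MT/s
BYTES_PER_TRANSFER = 8              # each channel is 64 bits wide

def peak_bandwidth_gbs(channels: int) -> float:
    """Theoretical peak memory bandwidth in GB/s for N populated channels."""
    return channels * TRANSFERS_PER_SEC * BYTES_PER_TRANSFER / 1e9

print(round(peak_bandwidth_gbs(8), 1))  # all eight channels: ~170.6 GB/s
print(round(peak_bandwidth_gbs(4), 1))  # half-populated: ~85.3 GB/s
```

In other words, leaving half the channels empty halves the theoretical ceiling, regardless of how much total capacity is fitted.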
On the I/O front, an integrated 12Gbps SAS controller is used to connect the processors to the storage at the front of the chassis, with the drive bays physically connected to each server by a shared backplane. The drives themselves can be either 2.5-inch SAS SSDs or, for maximum performance, four NVMe and two SAS devices. A separate all-NVMe configuration is also available, and each node additionally has an on-board M.2 interface to take an NVMe storage card, which customers routinely specify to boot the servers.
A dedicated Ethernet interface is built onto the motherboard for use by the integrated IPMI remote management controller, while server networking is handled by a proprietary SIOM (Super I/O Module). This fits into a custom PCIe connector located at the rear of the motherboard.
A number of SIOMs are available to order, providing Ethernet, InfiniBand and Omni-Path connectivity at up to 100Gbps, all with pass-through support for IPMI remote management. The network ports are at the back and, with four sleds crammed into just 2U, the wiring can get messy. Still, the end result is network connectivity that leaves the two PCIe x16 slots free for other purposes.
Who's it for?
Capable of hosting up to 256 processing cores in just 2U of rack space, the Quattro will be of interest to both HPC customers and buyers building large-scale VM farms. Storage is limited, but the ability to plug and play servers makes it easy to scale, with customers able to start with just one or two server sleds and add more when demand rises. That capability makes the Quattro a good choice for buyers seeking a hyperconverged infrastructure (HCI) platform. Indeed, market leader Nutanix used something very similar to power its appliances when it started out, and with EPYC processors on-board the concept is even more compelling.
The second Boston product is a 1U single-socket server which, at first glance, appears to be an entry-level or small business solution. However, that's far from the case thanks to its AMD EPYC processor, 2TB memory capacity and support for up to 10 super-fast NVMe drives. Boston positions its new server as a powerful and cost-effective alternative to more expensive 2P Xeon platforms. Crucially, VMware vSphere and vSAN are both licensed on a per-socket basis, enabling customers to save thousands in licensing costs by upgrading from 2P Xeon to this kind of single-socket EPYC server while also boosting performance.
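The per-socket licensing arithmetic is easy to sketch. The prices below are illustrative placeholders, not VMware's actual list prices, but the shape of the saving holds for any per-socket fee: halve the socket count and you halve the bill.

```python
# Hypothetical per-socket licence fees -- illustrative placeholders,
# NOT actual VMware list prices.
LICENCE_PER_SOCKET = {"vSphere": 4000, "vSAN": 2500}

def licence_cost(sockets_per_host: int, hosts: int) -> int:
    """Total per-socket licence cost for a cluster of identical hosts."""
    return sockets_per_host * hosts * sum(LICENCE_PER_SOCKET.values())

two_p_xeon = licence_cost(sockets_per_host=2, hosts=8)  # 8 dual-socket hosts
one_p_epyc = licence_cost(sockets_per_host=1, hosts=8)  # 8 single-socket hosts
print(two_p_xeon - one_p_epyc)  # saving across the cluster: 52000
```

Multiply that across a large virtualisation estate and the savings quickly run into serious money, which is precisely the pitch Boston is making.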
We looked at a pre-production model of the 1U server which, just like the Quattro, is based on a Supermicro motherboard -- this time with just the one socket to take any of the EPYC 7000 Series processors, including 32-core variants.
Resplendent beneath its heatsink and plastic ducting, the processor is sandwiched between sixteen DIMM slots capable of taking a full 2TB of ECC DDR4 2,666MHz memory -- the same as on the 2P motherboards used in the Quattro.
Redundant 750W power supplies and integrated IPMI remote management with a dedicated Ethernet interface come as standard, with two additional 10GBase-T ports for wider network connectivity. The server also has two full-height PCIe x16 expansion slots and a further low-profile socket, although it's on the storage side that things start to get really interesting, as it's all NVMe and pretty impressive for a server of this size.
To start with, there are two M.2 connectors on the motherboard to take NVMe adapters to boot the Boston server and provide limited system storage. Beyond that, however, the 1U chassis has an impressive set of ten 2.5-inch drive bays arranged across the front, all cabled for use with NVMe solid state drives -- although four can be used to accommodate SAS/SATA devices if needed.
For networking there are two 10Gbps Ethernet ports managed by an integrated Broadcom controller, plus a separate Gigabit port for IPMI remote management. That leaves the three PCIe x16 slots for further expansion.
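It's worth tallying up what all that storage and expansion demands of the PCIe bus, because it shows why a single EPYC socket can carry the lot without PCIe switches. The sketch below assumes x4 links for each U.2 bay and M.2 connector and x16 for each slot, which is conventional wiring -- the actual board layout may allocate lanes differently:

```python
# Rough PCIe lane tally for the 1U single-socket configuration.
# Assumes x4 per NVMe device and x16 per slot (conventional wiring;
# the real board layout may differ).
lane_budget = {
    "10 x U.2 NVMe bays (x4 each)": 10 * 4,
    "2 x M.2 boot devices (x4 each)": 2 * 4,
    "3 x PCIe x16 slots": 3 * 16,
}
used = sum(lane_budget.values())
print(used)        # 96 lanes consumed
print(128 - used)  # 32 lanes left over for networking, BMC and so on
```

On a 48-lane Xeon, the same feature set would need a second socket or PCIe switch silicon, which is the nub of Boston's single-socket argument.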
Who's it for?
The 1U Boston 1113S-WN10RTA is a completely different breed of server from the Quattro. At one level it will appeal to budget-conscious companies seeking to reduce their virtualisation costs by switching from 2P Xeon to 1P EPYC servers. On another, enterprise customers wanting a high-performance storage platform may be interested. Indeed, with that in mind, Boston is talking to partners about developing highly scalable SDS (Software Defined Storage) solutions that combine the 1U server with NVMe over fabric technologies.
Finally, there's the Dell EMC PowerEdge R7415 which, like the 1U Boston, is a single-socket server capable of accommodating any EPYC 7000 Series processor and pairing it with up to 2TB of memory. Physically, however, it's a much larger 2U system capable of accommodating up to 24 2.5-inch hot-swap SATA/SAS or NVMe drives. Dell EMC markets it both as a standalone server and as a validated vSAN node ready to exploit the licence savings made possible by having only one CPU.
The R7415 is a highly configurable platform: our review server was fitted with a 32-core EPYC 7551P processor and just eight of the available DIMM slots filled using 32GB DDR4 RDIMMs, for 256GB in total. These sit in the middle of the chassis, with an impressive heatsink on top of the AMD chip to keep it cool and the usual arrangement of memory slots on either side.
Network connectivity is handled through a LAN on Motherboard (LOM) arrangement with two Gigabit ports built in as standard, plus an optional mezzanine card which, on the system we looked at, added two more 10GbE ports. You also get the usual embedded iDRAC remote management and lifecycle controller, plus lots of fans to maintain an even temperature and a redundant pair of 800W power supplies to keep the server running. There's even space to accommodate up to four PCIe expansion cards but, as with the Boston 1U server, it's the storage options that will be of most interest to buyers of the AMD-powered PowerEdge.
The options here start with a choice between a chassis with just 12 3.5-inch SATA/SAS drive bays at the front (plus an optional extra two at the rear) or enough bays to take 24 low-profile (2.5-inch) devices using a mix of SATA/SAS and NVMe technologies. The bays are all hot-pluggable, supported by a fixed backplane that, on the review system, was split so that half the bays were cabled for pure SAS/SATA and the other half for the full mix of SATA/SAS and direct-connect NVMe. The bays on the review system were only sparsely populated, with a pair of 1.6TB NVMe U.2 drives on the right side of the chassis and five 400GB 12Gbps SAS SSDs at the other end, in the slots without NVMe support.
The NVMe drives are, of course, connected to the processors by the PCIe bus, while the SAS SSDs were cabled into a PERC H740p RAID controller located in a custom socket on the motherboard.
Along with the cables needed for NVMe, there's a lot of wiring in a very small space, although the end result is surprisingly tidy and workmanlike. It's also a very scalable storage setup, which is why it's being offered as a preconfigured vSAN node.
Who's it for?
According to the Dell EMC website, the PowerEdge R7415 is optimised for virtualisation and business analytics as well as scale-out, high-capacity SDS -- much like the 1U Boston server. With its greater storage capacity and management options, however, the R7415 is clearly targeted at a more demanding enterprise demographic with bigger budgets, able to use the extra capacity to support big data, hybrid cloud and other storage-hungry applications.
EPYC: The bottom line
So there we have it: three very different servers designed to address the needs of distinct market segments, but all looking to do so by taking full advantage of the extra cores, memory channels and PCIe lanes provided by the AMD EPYC processor.
As well as performance benefits compared to Xeon-based alternatives, cost savings are also possible -- although cheaper processors are only a small part of that equation. In fact, there's a much greater benefit to be had from the ability to do more with less: to reduce server spend and also to save on licensing by switching from 2P Xeon to single-socket EPYC platforms. Because server configurations vary, those benefits are hard to quantify, but there are definite savings to be had -- and a growing number of buyers are prepared to go for EPYC over Xeon in order to realise them.