Pros
- 24 server nodes in a 3U rack
- Front-facing access to all ports
- Consistent remote management
Cons
- Single-socket server nodes lack scalability
- Carrier swaps affect two servers at a time
- Front-mounted wiring can impede access
- Power supply concerns when fully populated
Based on the Microcloud platform from Supermicro, the Quattro 1396-T from Boston Limited squeezes a full 24 industry-standard servers into just 3U of rack space. Each server also comes with local storage, network connectivity and comprehensive remote management, which is no mean feat, effectively compressing a whole rack of 1U servers into a fraction of the space. Energy and cooling requirements are also minimised but, as no more than a collection of single-socket servers, the Quattro 1396-T microserver does have its limitations.
A computing Tardis
From the outside the Quattro 1396-T looks much like a standard rack server, but that all changes when you look a little closer. Designed for front-facing access, there's very little at the back other than a couple of power sockets and four large-diameter cooling fans. These aren't particularly noisy, at least not by machine-room standards, with thermostatic monitoring and variable speed control to keep everything cool without breaking the sound barrier. Slip one of the fans out for maintenance, however, and you'll really hear the remaining fans ramp up to compensate until a replacement is fitted.
Power comes from two platinum-rated (94 percent efficiency) 2000W redundant supplies. These plug in at the front, slap bang in the middle of the unit with six slots either side to take the metal sleds that hold the processing nodes — that's servers to you and me.
Each sled carries two nodes, effectively independent servers but engineered on a single motherboard with sockets to take two Xeon E3-1200 v3 (Haswell) processors, one per node. A variety of processor SKUs can be accommodated here, the review system shipping with quad-core E3-1220 v3 chips clocked at 3.1GHz. With a TDP (Thermal Design Power) of 80W, however, that's 1,920W for a fully populated chassis, which is close to the limit in terms of what the power supplies can cope with in redundant mode. For some applications, therefore, processors with a lower TDP may be a better option. It's worth noting that you don't have to fit the same processors on each sled.
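The power-budget arithmetic above can be sketched as a quick back-of-the-envelope check. The 24-node count, 80W TDP and 2000W supply rating are taken from the review; note this counts CPU TDP only, so real headroom also has to cover memory, disks, fans and board electronics.

```python
# Sanity check on the Quattro 1396-T power budget described in the review.
NODES = 24            # fully populated chassis
CPU_TDP_W = 80        # Xeon E3-1220 v3 TDP
PSU_CAPACITY_W = 2000 # in redundant mode, one supply must carry the load

cpu_load_w = NODES * CPU_TDP_W            # CPU TDP alone across the chassis
headroom_w = PSU_CAPACITY_W - cpu_load_w  # what's left for everything else

print(cpu_load_w, headroom_w)  # 1920 80
```

With only 80W of nominal headroom over CPU TDP, it's easy to see why lower-TDP processor SKUs may be the safer choice for some workloads.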
Eight memory slots are available, four per node, capable of equipping each server with 32GB of memory using DDR3 ECC 1,600MHz RAM. Not a huge amount compared to what can be fitted in a standard rack server, but good enough for the kind of cloud applications the Quattro 1396-T is designed to handle. Low-profile UDIMM modules are employed here to fit within the confines of the sled metalwork, with 1.35V DDR3L memory to minimise power requirements.
Storage and networking
On previous Quattro servers, storage was remote from the sled. On the 1396-T, however, you can fit four 2.5-inch SATA disks or SSDs onto the sled itself. That's two disks or SSDs per server node, with RAID 0/1 support via the on-board Intel C224 controllers (RAID 1 providing the redundancy).
Interestingly, the system we tested had 500GB Seagate Constellation drives but, according to Boston, the falling cost of SSDs is making them a popular choice on this platform. All the more so given that on-board storage will mostly be used to host the OS and application software while data is held elsewhere on the network. There's also a limit of 24 hard drives per chassis altogether (presumably due to power requirements), which doesn't apply if lower-power SSDs are used.
On the networking front, two Gigabit Ethernet connections are available to connect each node to the LAN, with a separate port for connection to an independent management network. Each sled also has a KVM connector to allow for local attachment of a monitor and, via a pair of USB ports, a local keyboard and mouse. A small switch is used to choose the node to manage and the USB ports can also be used to access an external storage device, for example, when first loading the OS onto each server. Alternatively this can be done via the built-in IPMI management controller, which has virtual media capabilities.
Unlike a blade server, there's no communication backplane linking the sleds in the Quattro 1396-T, and no shared storage, networking or unified management capabilities. The only connections are into the power feed and chassis monitoring circuitry. Moreover, although implemented in pairs on a shared motherboard, each node is an independent server running its own operating system.
As a result, performance is largely down to the processor involved in what's effectively a dense collection of single-socket servers. Each node in the review system was loaded with Windows Server 2012 R2, returning a multi-core Geekbench score of 10,693 in our tests, which is about right for a single-socket server with a quad-core E3-1220 v3.
Equally, each node could boot to a hypervisor, although there's no option to do this from an internal SD card or USB memory stick. Moreover, if it's virtualisation you want, you're better off with a server with a lot more cores.
Other concerns we had were the need to power down both nodes should you have to swap a sled for any reason, and the need for careful cabling. Our review unit only had a few sleds cabled up, but with everything plugged in at the front a fully populated system will require a mass of patch leads that could be an obstruction when it comes to physical maintenance.
Finally, it's also worth emphasising that you can't do much to upgrade individual servers within the Quattro 1396-T to make them go faster. Rather, the selling point is the ability to squeeze the equivalent of 24 independent servers into just 3U of rack space and do so at an affordable price (£16,669 ex. VAT for a fully populated 24-node system works out at £694.50 per node). For companies with lots of 1U servers that's a very tempting proposition, enabling them to reduce their power and cooling bills without compromising on performance. However, those seeking a more scalable solution may need to look elsewhere.
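The per-node economics quoted above are straightforward division, and worth sanity-checking when comparing against standalone 1U servers. The figures below are the review's own (£16,669 ex. VAT, 24 nodes); straight division lands within a few pence of the quoted per-node price.

```python
# Cost per node for a fully populated Quattro 1396-T, per the review's pricing.
total_price_gbp = 16_669  # ex. VAT, fully populated 24-node system
nodes = 24

per_node = total_price_gbp / nodes
print(f"£{per_node:.2f} per node")  # £694.54 per node
```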