
Has Nehalem hit a new sweet spot?

Two things have happened recently that could presage major changes in the way that servers are used and deployed. In particular, I'd argue that the x86 architecture has cemented its place as the flavour du jour in all but the most demanding or batch-processing-heavy environments.
Written by Manek Dubash, Contributor


How so? The first event was the launch last week of Intel's Nehalem-EX Xeon processors. The headline news is that the new 45nm chips sport what in the CPU world is called RAS: reliability, availability, and serviceability (not to be confused with remote access service). What this means in practice, according to Intel and its OEMs including HP, IBM and Dell, is that the new Xeons can address twice the memory, reduce energy consumption by up to 90 percent (hmmm), and detect double-bit memory errors, something that Sun's SPARC chips first featured over five years ago but which is new to Intel.

This last feature, while perhaps abstruse, adds a comfort zone for those contemplating servers that host multiple virtual machines running mission-critical applications: not web servers, but invoicing, billing and the other essential business processes that deliver services to customers.

And here's the clue to the thrust of the new CPUs. Running multiple VMs and asking customers to trust their businesses to what may still be perceived as jumped-up PCs is a tall order. So Intel has also added greater memory addressability: a two-socket Xeon server can now house up to 512GB of RAM, and a four-socket server up to 1TB. This addresses the fact that most VM hosts are not CPU-bound but memory-bound. Adding RAM means more VMs per host and, ultimately, lower costs.
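To see why the memory ceiling is the number that matters, here is a back-of-envelope sketch. The per-VM memory sizes and hypervisor overhead below are illustrative assumptions, not Intel's or any vendor's figures; only the 512GB and 1TB host capacities come from the article.

```python
# Back-of-envelope: how many VMs fit on a host when RAM, not CPU, is the limit.
# Per-VM RAM and hypervisor overhead are assumed figures for illustration.

def vms_per_host(host_ram_gb, per_vm_ram_gb, hypervisor_overhead_gb=8):
    """RAM left after hypervisor overhead, divided by RAM per guest."""
    usable = host_ram_gb - hypervisor_overhead_gb
    return usable // per_vm_ram_gb

# Two-socket Nehalem-EX box: up to 512GB; four-socket: up to 1TB (1024GB).
for host_gb in (512, 1024):
    for vm_gb in (4, 8, 16):
        print(f"{host_gb}GB host, {vm_gb}GB/VM -> "
              f"{vms_per_host(host_gb, vm_gb)} VMs")
```

Whatever numbers you plug in, the pattern is the same: doubling host RAM roughly doubles VM density long before the cores run out of headroom.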

And you can still have more CPU if you need it, via Intel's Turbo Boost feature, which overclocks the chip as requirements demand and as its thermal envelope allows. The on-chip energy management system will shut down a core if it can shunt the load onto fewer cores, saving power; the consequence is that the remaining live cores can run at higher clock speeds. The fewer cores that are running, the faster the rest can clock.
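The trade-off described above can be sketched in a few lines. The base clock and the per-parked-core boost step here are invented round numbers for illustration; real Turbo Boost bins vary by SKU and thermal conditions.

```python
# Illustrative model of turbo boost: parking idle cores frees thermal
# headroom, letting the remaining active cores clock higher.
# BASE_GHZ and STEP_GHZ are made-up values, not real Xeon specifications.

BASE_GHZ = 2.26      # assumed base clock
STEP_GHZ = 0.133     # assumed extra clock per parked core
TOTAL_CORES = 8

def boosted_clock(active_cores):
    """Clock each active core can reach when the rest are parked."""
    parked = TOTAL_CORES - active_cores
    return BASE_GHZ + parked * STEP_GHZ

for active in (8, 4, 2, 1):
    print(f"{active} active cores -> ~{boosted_clock(active):.2f} GHz each")
```

The point is the shape of the curve, not the numbers: consolidating work onto fewer cores trades parallelism for single-thread speed within the same thermal envelope.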

With these chips, Intel may just have done the job of moving Xeon-based VM hosts up the mission-criticality stack, strengthening Intel's position vis-à-vis …
