
Datacentre 2020: Greener, faster, more flexible

The average datacentre lasts between 15 and 20 years, so when the current generation of datacentres near the end of their working life, will their replacements be at all familiar?
Written by David Braue, Contributor

There was a boom in datacentre construction at the beginning of this decade, which means that millions of dollars' worth of server hardware and related equipment still needs to be written off before a major wave of datacentre rebuilding can be expected.

Unfortunately, extrapolating current IT trends into the future is an uncertain exercise because disruptive technologies upset the status quo with considerable regularity.

That said, the emergence of several macro trends in datacentre design gives us an idea of what will shape server and datacentre design in coming decades.

Going green
With the growing emphasis on green computing, recent innovation around server design has focused on reducing power consumption as a quick way of minimising the carbon footprint of the average datacentre. High-end datacentres, fully loaded with high-density blade servers and companion switches, can now consume as much as 40 kilowatts per rack — many times the rated power consumption of datacentres just a few years ago.

This level of power consumption not only increases the demand for electricity, it significantly increases the amount of heat coming out of the servers. All this heat must be removed by equally expensive, electricity-guzzling air-conditioning systems.

For years, "we always had to design servers for maximum power," says Tony Parkinson, Asia-Pacific vice president of industry standard servers with HP, whose server lines range from low-end commodity systems to room-filling high-performance computing (HPC) clusters. "We were going the wrong way. You can just keep building bigger power supplies but then you've got the whole power dynamics to deal with."

These dynamics are forcing many datacentre operators to consider their options.

Last June, Google, Intel and a number of computer component companies launched the Climate Savers Computing Initiative in an effort to increase the energy efficiency of PCs and servers. They claimed that around half the electricity used by a modern server is wasted in the AC-to-DC conversion process.

One potential improvement is moving the thermally inefficient AC-to-DC conversion away from individual servers. By giving servers DC-only power supplies — a relatively easy modification — companies can push the conversion, and the hot power supplies that perform it, outside the datacentre, so air-conditioning systems no longer need to be cranked up to compensate.
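
To see why relocating the conversion matters, consider a rough back-of-the-envelope calculation. The power-supply efficiency and cooling overhead used below are assumptions chosen purely for illustration, not measured datacentre figures; the point is simply that conversion losses incurred inside the room must also be cooled, while the same losses incurred outside it need not be.

```python
# Illustrative sketch only: the efficiency and cooling figures below are
# assumptions for the sake of the arithmetic, not measured datacentre data.

IT_LOAD_W = 1000.0        # useful (DC) power actually consumed by the servers
PSU_EFFICIENCY = 0.75     # assumed AC-to-DC conversion efficiency
COOLING_OVERHEAD = 0.5    # assumed watts of cooling power per watt of heat in the room

def facility_draw(conversion_inside_room: bool) -> float:
    """Rough total electrical draw needed to deliver IT_LOAD_W of DC power."""
    ac_input = IT_LOAD_W / PSU_EFFICIENCY      # power drawn from the grid
    conversion_loss = ac_input - IT_LOAD_W     # lost as heat in the power supplies
    # The servers turn all their DC power into heat; the conversion loss only adds
    # to the room's heat load if the supplies sit inside the room.
    heat_in_room = IT_LOAD_W + (conversion_loss if conversion_inside_room else 0.0)
    cooling = heat_in_room * COOLING_OVERHEAD
    return ac_input + cooling

print(f"Conversion inside the room:  {facility_draw(True):.0f} W")   # about 2000 W
print(f"Conversion outside the room: {facility_draw(False):.0f} W")  # about 1833 W
```

Under these assumed figures, pushing the conversion out of the room trims the total draw by roughly eight percent for the same useful server load.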

Various datacentre designs also allow operators to increase ambient air temperature from 18 degrees Celsius to around 22 degrees Celsius, HP's Parkinson says — enabling a massive reduction in the expense and carbon emissions related to air conditioning.

In the long term, such techniques will likely become widespread as environmental considerations become fundamental in datacentre design. This will pave the way for innovations such as water-cooled servers, which replace inefficient server fans with a more effective heat-exchange system. Once common, water-cooled designs fell out of favour for seemingly easier air-cooled designs but began making a comeback after IBM recently developed, and began licensing, water cooling technology that reduces heat output by 55 percent compared with conventional air-cooled systems.

Twenty years from now, the power reticulation and environmental systems that are commonplace will likely have grown out of the many ideas now emerging from the recognition that current power arrangements are simply unsustainable. With every macro trend pointing towards a tighter supply of electricity, the continuing expansion of corporate datacentres will force companies to minimise consumption as much as possible — and, potentially, reward those that do with US-style rebates.

Does this mean next-generation datacentres should be designed with wind farms and solar panels on the roof? Perhaps. Certainly — as the state of South Australia recognised recently after learning BHP's Olympic Dam copper and uranium mine would require nearly half the state's power by 2010 — power consumption issues have become intimately linked with business strategies.

Core competency
While many companies are adopting virtualisation software to reduce the number of physical servers they run, in the long term more efficient equipment designs will dominate efforts to strip down power consumption. Invariably, this requires a decisive step away from the one-application, one-server (and one-power-supply) mantra that has guided datacentre design for far too long.

Blade servers, which combine multiple physical servers into a single chassis, have been one major disruptive force in this area. Parkinson sees blade servers as the best long-term design strategy, and envisions the blade chassis as the natural home for network switches and all other types of datacentre equipment.

Because they run multiple servers from a shared power supply, Parkinson argues, blade servers are inherently more manageable and consume less energy per server than machines powered individually. "It's pooled power," he explains. "Our mantra is that we'll blade everything."
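
A similarly rough sketch suggests why pooled power helps: a handful of shared supplies run closer to their efficient operating range than a dedicated, lightly loaded supply in every box. Every figure here (supply ratings, per-server draw and the efficiency curve) is an assumption made for illustration, not an HP specification.

```python
# Back-of-the-envelope sketch of why pooled blade power can beat per-server supplies.
# All figures below (supply sizes, server draw, efficiency curve) are assumptions.

def psu_efficiency(load_fraction: float) -> float:
    """Assumed efficiency curve: supplies run poorly at light load, better when well loaded."""
    if load_fraction < 0.2:
        return 0.65
    if load_fraction < 0.5:
        return 0.75
    return 0.85

SERVERS = 16
DRAW_PER_SERVER_W = 250.0   # assumed DC draw of each server

# Case 1: each server has its own 600 W supply, and so runs it lightly loaded.
standalone_eff = psu_efficiency(DRAW_PER_SERVER_W / 600.0)
standalone_input = SERVERS * DRAW_PER_SERVER_W / standalone_eff

# Case 2: a blade chassis shares four 1500 W supplies across all 16 servers.
pooled_eff = psu_efficiency(SERVERS * DRAW_PER_SERVER_W / (4 * 1500.0))
pooled_input = SERVERS * DRAW_PER_SERVER_W / pooled_eff

print(f"Standalone supplies: {standalone_input:.0f} W from the wall")  # about 5333 W
print(f"Pooled blade power:  {pooled_input:.0f} W from the wall")      # about 4706 W
```

On these assumed numbers, the pooled chassis draws roughly 12 percent less from the wall to run the same 16 servers.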

However, simply putting existing servers and CPUs onto blades won't sustain the environmental benefits over the long term: although blades provide economies of scale, those gains are limited without a fundamental redesign of the components that go onto them.

Multi-core computing is a major step in this direction, and one that pundits believe will extend the shelf life of current silicon-based processes for decades to come. "In terms of the general notion that performance is going to increase, this will be manifested in a larger number of cores," explains Jerry Bautista, US-based director of technology management and teraflops research with Intel.

As the head of Intel's 80-core research project, Bautista has seen perhaps farther than anyone into the future of processor design. In his mind, the shift towards multi-core computing presents the biggest disruptor — with the greatest implications for server design into the future.

The 80-core project, also known as the Teraflops Research Chip (TRC), has as its goal a single chip that delivers a full 1 trillion floating-point operations per second (one teraflops). That's similar performance to that of ASCI Red, a 1996 supercomputer that used nearly 10,000 200MHz Pentium Pro processors and required a million watts (1MW) of electricity. The TRC, by contrast, consumes just 62W of power.
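
A few lines of arithmetic, using only the figures quoted above, show just how large that jump in energy efficiency is.

```python
# Energy efficiency implied by the figures above: 1 teraflops in both cases.
TFLOPS = 1e12

asci_red_watts = 1_000_000   # roughly 1 MW for the 1996 ASCI Red supercomputer
trc_watts = 62               # reported power draw of Intel's Teraflops Research Chip

asci_red_flops_per_watt = TFLOPS / asci_red_watts   # about 1e6, i.e. 1 MFLOPS per watt
trc_flops_per_watt = TFLOPS / trc_watts             # about 1.6e10, i.e. 16 GFLOPS per watt

print(f"ASCI Red: {asci_red_flops_per_watt:.2e} FLOPS per watt")
print(f"TRC:      {trc_flops_per_watt:.2e} FLOPS per watt")
print(f"Improvement: roughly {trc_flops_per_watt / asci_red_flops_per_watt:,.0f}x")
```

In other words, the research chip squeezes roughly 16,000 times more floating-point work out of each watt than the 1996 machine did.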

The TRC, part of a tera-scale research effort encompassing more than 100 Intel projects worldwide, is a proving ground for new architectures, including a 'tile' approach in which large numbers of identical cores are arranged in a matrix and linked by high-bandwidth interconnects. "In our simulations, we've gone out to thousands of cores for these critical applications," says Bautista.

Increasing core density is only one part of the TRC effort: complementing the architecture is a shift towards new developments such as a built-in 5-port message router, fine-grained power management to reduce overall consumption, and 3D stacked memory, which increases data density by allowing the addressing of data along three axes instead of the current two. "The architecture is not so much about the core as it is about the way data moves," says Bautista.
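
As a purely conceptual illustration of the tile idea (not Intel's actual router design or routing algorithm), the short sketch below models a grid of cores, each with a five-port router, and counts the hops a message makes as it crosses the chip.

```python
# Toy model of the 'tile' concept: cores in a grid, each with a five-port router
# (north, south, east, west, plus the local core). Illustrative only; this is not
# Intel's actual interconnect or routing algorithm.

from dataclasses import dataclass

@dataclass(frozen=True)
class Tile:
    x: int
    y: int

def xy_route(src: Tile, dst: Tile) -> list:
    """Simple dimension-ordered (X then Y) routing: the ports a message traverses."""
    dx, dy = dst.x - src.x, dst.y - src.y
    hops = ["east" if dx > 0 else "west"] * abs(dx)
    hops += ["south" if dy > 0 else "north"] * abs(dy)
    hops.append("local")   # final hop into the destination core
    return hops

# An 8x10 grid gives the 80 tiles of the research chip.
grid = [Tile(x, y) for y in range(10) for x in range(8)]
path = xy_route(Tile(0, 0), Tile(7, 9))
print(f"{len(grid)} tiles; a corner-to-corner message takes {len(path)} hops")
```

The appeal of the approach is that adding cores simply extends the grid; no single shared bus has to scale with the core count.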

By tying memory to individual cores, Intel can substantially reduce the latency incurred as data moves into and out of the processing core. In current chip architectures, the interconnects that link CPU and memory are a major bottleneck, one that will be addressed later this year by the QuickPath Interconnect (formerly the Common System Interface), which is built into the upcoming Nehalem and Tukwila processors.

Given the industry's inexorable progression along a performance curve driven by Moore's Law, the odds that innovations such as QuickPath will still figure in processor architectures by 2020 or beyond are quite low. However, the research being done in projects like the TRC will have long-term implications for processor design — particularly in bolstering the multi-core concept until CPUs become clusters combining processing core, memory, cache and other previously separate components.

Tackling the rest
Researchers are also reworking long-established designs for other key server components. Emerging from decades of development, for example, phase change memory (PCM) chips — a faster, higher-density memory architecture — made their market debut this month through Numonyx, a joint venture between Intel and STMicroelectronics. PCM is expected to improve performance considerably over current options and, as economies of scale kick in, could displace flash memory in many applications.

Solid-state disks (SSDs), a hard drive alternative pioneered years ago by Australian firm Platypus Technology, have recently hit the mainstream after being offered as an option in notebooks from Apple, Dell and others.

SSD has already gained favour as an improved form of cache in some server installations, but high costs have limited its applicability. Over time, however, declining costs and better reliability — particularly important given reports that SSD-based laptops have serious reliability issues — will bring SSD into the mainstream.

Hard drives aren't likely to go anywhere anytime soon, however, since they continue to offer the best price per gigabyte of any storage option — and that is only likely to continue as manufacturers squeeze more and more capacity out of the same-sized disks.

Speed is an issue, however: current 15,000rpm designs are pushing the limits of mechanical viability, warns Parkinson, a constraint that should give ever-improving SSD technology a much higher profile in coming decades. Within the datacentre, SSD will likely find a home holding the most frequently accessed information as the primary tier of hierarchical storage management (HSM) solutions.
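
The placement logic behind such a tiered arrangement can be very simple. The sketch below is a minimal illustration of the HSM idea; the file names, access counts and threshold are invented for the example and do not describe any particular product.

```python
# Minimal sketch of hierarchical storage management (HSM) tiering: the hottest data
# lands on the SSD tier, everything else falls back to spinning disk. All names and
# numbers below are invented for illustration.

ACCESSES_PER_DAY = {
    "orders.db": 12_000,
    "sessions.log": 4_500,
    "archive_2006.tar": 3,
}

SSD_THRESHOLD = 1_000   # assumed cut-off for "frequently accessed"

def choose_tier(accesses_per_day: int) -> str:
    return "ssd" if accesses_per_day >= SSD_THRESHOLD else "hdd"

for name, accesses in ACCESSES_PER_DAY.items():
    print(f"{name}: {choose_tier(accesses)}")   # orders.db and sessions.log -> ssd
```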

Shining a light
Current research into server architectures is focused on reducing power consumption, increasing the modularity and scalability of chip designs, and eliminating persistent bottlenecks — such as interconnections and memory-chip latency.

While researchers have proved incredibly resourceful in extending the life of current architectures, the next generation of research is taking server design in a completely new direction.

A major focus is the movement towards replacing electronic wiring with connections based on optics and the transmission of light. Numerous companies have been working on such optical solutions; in March, IBM announced a major breakthrough with the creation of its Ultraperformance Nanophotonic Intrachip Communication research program.

Widespread interest in optical server interconnects is driven by one basic fact: optical fibre can carry far more data, with far less loss, than copper wiring. This translates into unprecedented speed and capacity that has, for example, allowed telecommunications carriers and big businesses to use ever-improving dense wave-division multiplexing (DWDM) technology to successively boost transmission speeds per fibre-optic strand from hundreds of megabits per second to 1Gbps, 2Gbps, 10Gbps and, most recently, a jaw-dropping 40Gbps.

Similar techniques have led to rapid improvement in other areas: fibre-optic-based Fibre Channel technology, used to connect the various parts of a datacentre's storage area network (SAN), has rapidly improved from 1Gbps to 2Gbps, 4Gbps and now 8Gbps without requiring a cabling upgrade. Pushing this technology into server designs will improve performance substantially in coming decades while also reducing power consumption, a trend that seems certain to become fundamental to datacentre design.

Given the potential efficiency of computing at atomic scales, two other areas will also figure in the future: nanotechnology and quantum computing. Nanotech is already driving major innovation in a variety of industries, and will certainly be leveraged in coming years to build computing and storage systems from large numbers of incredibly small components.

As manufacturing processes improve, nanotech innovation is likely to provide mechanical complements to the optical interconnects that will become widespread in server technologies.

Quantum computers have long been mainly theoretical, but recent progress in controlling qubits — the quantum analogue of the classical bit — suggests a growing role for machines that could eventually be many times more powerful than existing systems for certain classes of problem.

Last November, D-Wave Systems demonstrated a 28-qubit quantum computer that it plans to scale up considerably in coming years; if it is successful, even optical systems could become obsolete within two generations of datacentre.

Polymorphic datacentre — made to measure computing
Within the next five years, datacentres could contain stockpiles of components — such as memory and processors — that can be configured in real time to handle unusual workloads, a concept HP calls polymorphic computing.

The computing needs of a business change every day, said Martin Fink, senior VP and general manager of business critical systems at HP. The next generation of datacentres, according to Fink, will contain a generic system that can morph into whatever is required to solve a specific problem. It is, he said, like buying a car that could be a Ferrari for taking a girlfriend out on a date but change into a ute when the owner needs to go shopping.

In a polymorphic computing datacentre, Fink said, there will be a bunch of CPUs on one wall, communications infrastructure on another, storage on another and so on. These components can then be assembled in real time to solve a workload problem, using only the computing resources required for the job. If, while the workloads are being processed, there is pressure on any one group of components — such as memory — more can be sourced, he said.
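
Conceptually, the allocation step Fink describes amounts to carving each workload's requirements out of shared pools of components. The sketch below is a hypothetical illustration of that idea only; it is not HP's software, and every class name and number in it is invented.

```python
# Conceptual sketch of "polymorphic" allocation: pools of components assembled on
# demand into just enough machine for each workload. Hypothetical names and figures;
# this is not HP's Insight Dynamics VSE.

from dataclasses import dataclass

@dataclass
class Pool:
    cpus: int
    memory_gb: int
    storage_tb: int

@dataclass
class Workload:
    name: str
    cpus: int
    memory_gb: int
    storage_tb: int

def assemble(pool: Pool, job: Workload) -> bool:
    """Carve a job's requirements out of the shared pool, if they fit."""
    if (job.cpus <= pool.cpus and job.memory_gb <= pool.memory_gb
            and job.storage_tb <= pool.storage_tb):
        pool.cpus -= job.cpus
        pool.memory_gb -= job.memory_gb
        pool.storage_tb -= job.storage_tb
        return True
    return False   # in Fink's vision, more components would simply be sourced

datacentre = Pool(cpus=512, memory_gb=4096, storage_tb=200)
jobs = [
    Workload("overnight batch run", cpus=128, memory_gb=1024, storage_tb=20),
    Workload("web front end", cpus=64, memory_gb=256, storage_tb=2),
    Workload("full-scale simulation", cpus=1024, memory_gb=8192, storage_tb=50),
]
for job in jobs:
    print(job.name, "->", "assembled" if assemble(datacentre, job) else "needs more components")
```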

Fink believes this type of functionality will be available in as little as five years, and he has research teams in the US and the UK working on it. However, before it can become a reality, two barriers need to be overcome, he said.

The first is one touched upon earlier — in order for various components such as CPU and memory to be able to work together while being physically separated by a significant distance, optical connectivity needs to be improved.

The second barrier to HP's polymorphic computing dream is the creation of software required to organise which components need to link together to process a workload. HP has already made good progress on this, according to Fink. "We think we are delivering parts of that with the Insight Dynamics VSE."

Software was also highlighted as a problem by Intel's Bautista, who asked "can the software keep up?"

Developers who have long been used to formulating computing processes in a serial, step-by-step manner are having to rebuild their data-crunching systems around massively parallel architectures — once the exclusive domain of HPC systems, but rapidly moving towards commodity status thanks to ever denser multi-core systems.
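
The shift is easy to see in miniature. The sketch below writes the same computation twice, once serially and once spread across a pool of worker processes, using nothing more than Python's standard library; it is a generic illustration of parallel decomposition, not Intel's Ct model or any HPC framework.

```python
# Generic illustration of serial versus data-parallel decomposition using only the
# standard library. Not Intel's Ct model; the workload itself is a stand-in.

from concurrent.futures import ProcessPoolExecutor

def crunch(chunk):
    """Stand-in for some per-chunk data crunching."""
    return sum(x * x for x in chunk)

def main():
    data = [float(i) for i in range(1_000_000)]
    chunks = [data[i:i + 100_000] for i in range(0, len(data), 100_000)]

    # Serial: one core works through every chunk in turn.
    serial_result = sum(crunch(c) for c in chunks)

    # Parallel: the same chunks are mapped across a pool of worker processes.
    with ProcessPoolExecutor() as pool:
        parallel_result = sum(pool.map(crunch, chunks))

    print(serial_result == parallel_result)   # same answer, spread over many cores

if __name__ == "__main__":
    main()
```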

Expect a major change in the way applications are designed in the future, as developers rush to adapt their thinking processes to the significant architectural changes coming down the track.

Alternatively, the chipmaker used the Intel Developer Forum in Shanghai earlier this year to tout a programming model called Ct, which it says allows developers to use their C++ programs for parallel computing applications "without having to modify a single line of code".

The datacentre of tomorrow
So what will servers and datacentres look like one or two generations from now?

Expect an all-optical design that extends from the processor core to remote sites, leveraging various types of optical network switches to route data at speeds that would make current network engineers' heads spin. Hard drives will still be around, but will have higher density and play second fiddle to memory-based storage. And through it all, continuous innovation in design and manufacturing will keep the industry well ahead of the predictions of Moore's Law.

"Manufacturing is continually delivering phenomenal process innovation that continues to drive down the critical dimensions we're working with.

"They're not only delivering a higher performance process, but they appear to be doing a better job of the fundamentals of the process, moving us down the path of smaller and smaller critical dimensions," added Bautista.

Suzanne Tindal from ZDNet.com.au contributed to this article.
