The 21st century data center: You're doing it wrong

Summary: Outdated designs are keeping data centers from reaching their full potential.

Many data centers that exist today were built on archaic design principles. Although these facilities were built to last a decade or more, they're just not as efficient as they could be. Technology is moving at a rapid pace, but data centers seem trapped in a time warp of old-fashioned ideals and unfulfilled potential.

The problem even plagues data centers that are built today.

"Frankly, the problem with data centers are the people that get the project. It's the first time they've done it, and the guy that did the last data center retired 10 years ago and didn't take any notes," says David Cappuccio, Gartner's managing vice-president and chief of research for infrastructure.

Human nature drives a lot of data center designs, he says, and people tend to stick to tried and tested methods, even if they're not considered best practice by today's standards.

One example is the data center that uses raised floors for cooling. Many IT pundits, including Olaf Moon, Schneider Electric's territory manager for the Federal government and the ACT, have discredited this method of cooling as wasteful.

"With air, it requires more energy to push large volumes of heat away, but you only need small quantities of water to transport similar heat loads"
— Greg Boorer, Canberra Data Centres

"There are many reasons why you shouldn't have raised floors, yet we have 90 percent of data centers with raised floors today," says Gartner's Cappuccio.

New data centers being built are beginning to trend away from raised floors, but Canberra Data Centres (CDC) managing director Greg Boorer says that there's still more that can be done to make facilities more efficient.

Boorer suggests moving away from air cooling altogether and instead using water cooling directly where the IT equipment is housed.

"With air, it requires more energy to push large volumes of heat away, but you only need small quantities of water to transport similar heat loads," Boorer told ZDNet. "Using chilled water is far more efficient."
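
To see why, compare the volume of coolant each fluid needs to carry the same heat load. The sketch below applies the basic heat-transport relation Q = ρ·V̇·c·ΔT with textbook fluid properties and a hypothetical 10 kW rack; the figures are illustrative, not numbers from CDC:

```python
def volumetric_flow(heat_w, density, specific_heat, delta_t):
    """Volumetric flow rate (m^3/s) needed to remove heat_w watts of heat
    for a coolant temperature rise of delta_t kelvin:
    Q = rho * V_dot * c * dT  =>  V_dot = Q / (rho * c * dT)."""
    return heat_w / (density * specific_heat * delta_t)

RACK_HEAT_W = 10_000  # hypothetical 10 kW rack
DELTA_T = 10.0        # assumed coolant temperature rise, in kelvin

# Approximate textbook properties near room temperature
air_flow = volumetric_flow(RACK_HEAT_W, density=1.2, specific_heat=1005, delta_t=DELTA_T)
water_flow = volumetric_flow(RACK_HEAT_W, density=998, specific_heat=4186, delta_t=DELTA_T)

print(f"air:   {air_flow:.3f} m^3/s")         # ~0.83 m^3/s of air
print(f"water: {water_flow * 1000:.2f} L/s")  # ~0.24 L/s of water
print(f"ratio: {air_flow / water_flow:,.0f}x less volume with water")
```

Water's much higher density and specific heat mean the required coolant volume drops by a factor of several thousand, which is the "small quantities of water" Boorer describes.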

CDC started using water cooling for its data centers five years ago, and has since boasted a 60 percent saving in energy cost.

But the downside of water cooling is that it may not be an option for existing data centers, according to Boorer.

"Once you have an existing data center built on traditional design, or even some of the newer designs that are pushing air around, I would say it's prohibitively expensive to retrofit it with water cooling," he said. "People have made an investment — a rather large investment — in a data center, and they have to try and squeeze out as much life from it as possible to get as much return on investment as they can.

"It's only with new data centers that [you] have the opportunity to do things in a more efficient way."

Cappuccio notes that engineering firms that are consulted to build data centers know about the newer and more efficient ways to do things. But rather than try something new, they prefer the stock standard cookie-cutter approach to creating data centers because it's fast and easy, he said.

To that end, companies that are working with an engineering firm to build a new data center need to provide a lot of guidance and know what they want to achieve, says Cappuccio.

Using flawed methodologies to calculate how big a facility should be and how much equipment should go inside is one of the more common sins committed by data center builders. Companies usually base these calculations on the physical growth of their IT infrastructure over time.

Canberra Data Centres encourages the use of water cooling for new builds, or more efficient air cooling for existing facilities.
(Images: Canberra Data Centres)

The logic is that if a company has 1,000 servers and that number is growing at 10 percent per year, then it will need 1,100 servers the following year, and therefore, it will require a bigger facility to house them.

The maths is sound, but the approach is too simplistic, and it falls apart once Moore's Law is factored in, according to Cappuccio.

As technology advances, processors will get smaller and deliver more compute power, which, in turn, affects the number of servers needed. A server the same physical size as today's will be able to do more compute work as newer technology goes inside it. So when an organisation plots out its back-end IT capacity needs for the next few years, it shouldn't assume that needing more capacity means needing additional servers.
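
The effect of rising compute density on capacity planning can be shown with a toy projection. The growth and density figures below are hypothetical, chosen only to illustrate Cappuccio's point:

```python
def servers_needed(current_servers, years, demand_growth, density_gain):
    """Project the server count when compute demand compounds at demand_growth
    per year while per-server capacity compounds at density_gain per year."""
    demand = (1 + demand_growth) ** years
    capacity_per_server = (1 + density_gain) ** years
    return current_servers * demand / capacity_per_server

# Naive plan: 10% annual growth, density assumed frozen (the flawed method)
naive = servers_needed(1000, years=5, demand_growth=0.10, density_gain=0.0)

# Adjusted plan: same demand growth, but each server gets 15%/year more capable
adjusted = servers_needed(1000, years=5, demand_growth=0.10, density_gain=0.15)

print(round(naive))     # 1611 servers if density never improves
print(round(adjusted))  # 801 -- fewer servers than today, despite growing demand
```

Under these assumed rates, the naive method sizes the facility for roughly double the servers actually needed, which is exactly the oversized-building problem Cappuccio describes.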

"I've seen a lot of data centers being built that are too big," says Cappuccio. "We're finding people with data centers that are three to four years old when they realise they have far too much space, and are still providing air conditioning to those areas. So they begin to shrink them, putting up walls, bringing down the ceiling so they don't air condition the extra space."

Those spare areas may get turned into storage space or extra office rooms, he said. But the bottom line is this: building large data centers to accommodate future growth based on flawed calculations is wasteful.

Boorer also sees this as a persistent problem in the data center industry.

"Even with some of the more contemporary designs, these facilities are still slightly oversized, so it might be five years before the ICT load catches up to the datacentre infrastructure," he says. "In those five years, there would be tremendous inefficiencies.

"What you need to do is immediately start thinking about containment; use as much containment of air around data halls to stop the mixing of hot air and cool air. Efficient airflow is important to reduce the overall load on data center cooling."

Upgrading technology more frequently is another important way to improve data center efficiency, according to Cappuccio. Technology refreshes have historically been done on depreciation cycles, but many IT departments yield to the temptation of keeping old equipment. Keeping old gear in data centers not only takes up floor space, but continues to chew up power, he said.

"IT guys don't like throwing stuff away, and they have to change their thinking on technology refreshes," says Cappuccio. Shorter refresh cycles for technology would mean higher capital investment, something CFOs would not be thrilled about, but it would save money in the long run, he said.

"If you sell it to the CFOs as: 'If we do this, we don't have to build a next-generation data center', suddenly, it becomes a small investment now versus a huge capital investment in, say, three years," says Cappuccio. "It becomes more justified."

"IT guys don't like throwing stuff away, and they have to change their thinking on technology refreshes"
— David Cappuccio, Gartner

IT departments also can't just focus on upgrading what Cappuccio terms 'little IT' — that is, information technology such as storage, servers, and telecom gear. They need to think about 'big IT' — infrastructure technology such as PDUs, racks, and UPSs — as well.

"You don't look at those things as one-time installs — you look at them as long-term installs," he says. Different technologies within the data center will require refreshing at different times due to the varying pace of change.

"Apply that logic, suddenly the way we manage that building — the entire infrastructure — begins to change," says Cappuccio.

Power play

But how you build a data center is just one part of the journey to increasing IT efficiency; good operations matter even more, according to Digital Realty senior vice-president and regional head for Asia-Pacific Kris Kumar.

"Most of this is about general housekeeping," says Kumar. "You can get the best car possible, but if you don't maintain it with servicing, it's obviously going to drop down dramatically in performance over time." 

"A data center is no different — it's a bunch of machines that are working together to produce an environment that's good for servers and computers."

But looking after a data center doesn't mean you have to treat it like a delicate child. Several years ago, it was still the general consensus to turn data centers into freezers so that servers contained within wouldn't overheat. But servers don't need arctic temperatures to function.

"They just need to have an even temperature that can even be up to 27 to 30 degrees Celsius inside a room," says Kumar. Considering that cooling is one of the biggest energy drainers for a data center, turning up the thermostat by a few degrees can save a bucketload of cash, not to mention reduce the carbon footprint for a facility, according to Kumar.
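
As a rough illustration of that saving, a compounding rule of thumb can be applied. Both the facility figures and the savings-per-degree rate below are assumptions for illustration, not numbers from Kumar:

```python
def cooling_saving_kwh(annual_cooling_kwh, degrees_raised, saving_per_degree=0.04):
    """Estimate chiller energy saved by raising the cooling setpoint.
    saving_per_degree (4% per degree Celsius here) is an assumed rule of
    thumb, not a figure from the article; real savings depend heavily on
    the facility's climate and cooling plant."""
    remaining = (1 - saving_per_degree) ** degrees_raised
    return annual_cooling_kwh * (1 - remaining)

# Hypothetical facility: 2 GWh/year spent on cooling, setpoint raised 22 -> 27 C
saved = cooling_saving_kwh(2_000_000, degrees_raised=5)
print(f"{saved:,.0f} kWh/year saved")  # ~369,000 kWh/year under these assumptions
```

Even with a conservative per-degree rate, a few degrees compound into a meaningful fraction of the cooling bill, which is why the thermostat is one of the cheapest levers available.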

If companies were to run the same processes in tiny computer rooms within their own office buildings, it could be up to three times less efficient compared to outsourcing to a third-party data center, according to Kumar.

"The data center industry is actually saving a lot of energy globally when folks start to outsource their stuff," he says. "Nobody ever talks about how much money [the data center industry] is actually saving."

Topics: The 21st Century Data Center, Data Centers, Australia

Spandas Lui

About Spandas Lui

Spandas forayed into tech journalism in 2009 as a fresh university graduate, spurred by her passion for all things tech. Based in Australia, Spandas covers enterprise and business IT.

Talkback

2 comments
  • But Wait .. There's More

    I've migrated 2 Data Centres and Virtualised one. The first was from one air-cooled centre to another. Some savings, some efficiencies, but not really the ROI possible. The second was to a water cooled facility. The client also used this as an opportunity to 'house clean'. Virtualising all the easier servers, and starting a program to convert or replace the remaining ones. The savings of a comprehensive architecture review cover far more than power. It's touched on in the article, but really it is crucial. Only by virtualisation can the full benefits of improving server efficiency be utilised. Otherwise, server acreage will continue to grow. The lead system architect and a supported CIO are the key drivers of real efficiency that modern data centres enable.
    oldheretic
  • quit reiterating nonsense

    Just keep your DC tidy and organized. Yes, cooling is a significant cost, but not that much really: power costs about 10% of a machine's purchase price per year, and any idiot can run a PUE 1.4 center. So we're talking 4% - that's good to know, but shouldn't be driving your decision process.

    It's important to treat this calmly, since people (examples in this article) like to go crazy about heat and cooling. You can't just decide to run your servers warmer - the correct temperature depends on their power dissipation, the thermal resistance of heatsinks and airflow. Yes, water-cooling is great, worth a percent or two of overhead, but you wind up with vendor-lockin (what's that costing you?)
    markhahn