
Building a data center circa 2007

Written by Larry Dignan, Contributor

Jon Vander Hill has built two data centers since 2000 and is on his third for Berbee, the data center hosting unit of CDW. Since then the landscape has changed dramatically. Power costs matter more than ever, as does cooling design.

CDW Berbee is currently building its third data center, with plans to open it in 2008. The facility will be built in Fitchburg, Wisconsin, and will effectively double the company's capacity. When it opens, CDW Berbee will have two data centers in Wisconsin (Fitchburg and Madison) and another in Minneapolis.

Here are some of the key themes Vander Hill, infrastructure manager for CDW Berbee, highlighted as the company builds its data center.

The revenge of Moore's Law. Doubling the number of transistors on a processor every 18 months is just great in theory. At some point you have to pay the electric bills. "Each time there's a jump in computing power there's just a little more heat," says Vander Hill. "There's an exponential growth curve for power consumption."

Vander Hill's statement isn't a news flash for anyone building a data center, but here's what's stunning: Power was largely an afterthought when Berbee built its data centers in 2000 and 2001. Now, power is the primary concern. Chips are smaller, servers are smaller and data center racks hold more. It's expensive to cool that stew.

Hosted data center billing--real estate vs. power metering. The power issue is slowly changing the way hosting firms like Berbee view their services. There's an evolution going on, says Vander Hill. Until recently, the relationship with your hosted data center provider went like this: The customer said they needed space. The hosting company asked how much. And then they sealed a deal based on square footage.

Now the game is more about power density and kilowatts. Instead of space, the vendor-customer conversation revolves around how many kilowatts you're looking for. Vander Hill says Berbee's existing facilities are built to a power density standard of 1.5 kilowatts per rack. The new data center will be built to a 5 kilowatt standard, and some facilities go as high as 7 to 10 kilowatts per rack. Racks run roughly 7 feet tall. "It's a question of how many kilowatts do you need in a rack," says Vander Hill.
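
To see why the conversation shifts from square footage to kilowatts, here is a rough back-of-the-envelope sketch in Python. Only the 1.5 kilowatt and 5 kilowatt per-rack standards come from Vander Hill; the facility power budget is an assumed number for illustration.

# Rough sketch: power density, not floor space, caps how many racks a facility can feed.
# Only the per-rack densities echo the article; the 1,500 kW IT power budget is assumed.

def racks_supported(total_it_power_kw: float, kw_per_rack: float) -> int:
    """Number of racks a given IT power budget can feed at a per-rack density."""
    return int(total_it_power_kw // kw_per_rack)

IT_POWER_BUDGET_KW = 1500.0  # assumed power available to the data center floor

for density in (1.5, 5.0, 10.0):  # kW per rack: old standard, new standard, high end
    print(f"{density:>4} kW/rack supports {racks_supported(IT_POWER_BUDGET_KW, density)} racks")

The same power budget that feeds 1,000 racks at the old 1.5 kilowatt standard feeds only 300 at the new 5 kilowatt standard, which is why vendors now ask about kilowatts before square feet.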

Kilowatt pricing. Vander Hill says that Berbee, like much of the industry, is moving toward a tiered pricing model based on kilowatts. "The power is driving the economics," says Vander Hill. That said, hosting firms still view their sales as real estate driven: even when pricing is based on kilowatts, a rack takes up a certain amount of square footage. Given that power drives the economics of data center hosting, a metered approach may make sense in the future. However, Vander Hill says that shift may take time, since the economics may not be in the industry's favor.
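
As a sketch of what tiered kilowatt pricing could look like in practice, consider the snippet below. The tier boundaries and monthly rates are hypothetical; the article does not disclose Berbee's actual pricing.

# Sketch of a tiered kilowatt pricing model of the kind described above.
# Tier boundaries and monthly rates are hypothetical, not Berbee's figures.

TIERS = [                   # (upper bound of tier in kW, monthly rate per kW)
    (2.0, 300.0),           # first 2 kW
    (5.0, 250.0),           # next 3 kW, up to the 5 kW standard
    (float("inf"), 200.0),  # everything above 5 kW
]

def monthly_charge(contracted_kw: float) -> float:
    """Price a customer's contracted kilowatts tier by tier."""
    charge, prev_cap = 0.0, 0.0
    for cap, rate in TIERS:
        if contracted_kw <= prev_cap:
            break
        charge += (min(contracted_kw, cap) - prev_cap) * rate
        prev_cap = cap
    return charge

print(monthly_charge(1.5))   # a legacy 1.5 kW rack
print(monthly_charge(5.0))   # a rack at the new 5 kW standard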

The cooling system. Berbee is working with Chatsworth Products, which designs racks, to get the cooling where it needs to go. Vander Hill says one requirement was to have a direct vented rack with a chimney plugged into the back. The general idea is to target specific aisles where hot spots could occur. "A lot of data centers have a free flow of air where cool air comes up from the floor," explains Vander Hill. "With higher power densities, that design doesn't work." Why? Hot air will still push its way down to thwart the cool air coming from below. The only way to alleviate this issue would be to build increasingly high ceilings--you run out of headroom at some point. Instead, you need a direct venting system. Team Technologies, a Cedar Falls, Iowa firm, is working with engineers to design the cooling systems.
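
A back-of-the-envelope calculation shows why the free-flow design breaks down at higher densities: the airflow needed to carry heat away scales directly with the power in the rack. The 12-degree air temperature rise below is an assumed value, not a figure from Berbee or its cooling designers.

# Back-of-the-envelope airflow needed to remove a rack's heat load, using the
# sensible heat relation for air: heat = density * specific_heat * flow * temperature_rise.
# The 12 C air temperature rise and the rack densities are illustrative assumptions.

RHO_AIR = 1.2     # kg/m^3, approximate air density
CP_AIR = 1005.0   # J/(kg*K), specific heat of air

def airflow_m3_per_s(heat_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow (m^3/s) needed to remove heat_kw with a given air temperature rise."""
    return (heat_kw * 1000.0) / (RHO_AIR * CP_AIR * delta_t_c)

for kw in (1.5, 5.0, 10.0):
    flow = airflow_m3_per_s(kw, delta_t_c=12.0)
    print(f"{kw:>4} kW rack needs roughly {flow:.2f} m^3/s ({flow * 2118.88:,.0f} CFM)")

Tripling the power in a rack triples the air that has to move through it, which is why ducting hot exhaust straight out through a chimney beats hoping cool floor air finds its way up.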

Rack density. Here's why there's such a cooling problem: racks hold more gear and are deeper than ever. Vander Hill says racks used to hold shallow, 19-inch "pizza box" servers; the racks that hold today's blade servers go 3 to 4 feet deep. Direct air cooling is one of the few ways to cool those densely packed servers.

A bevy of blades. I asked whether Berbee would accept any hardware that wasn't rackable. Vander Hill says Berbee does, but it's becoming rare. Most hardware fits in a rack. There are exceptions, such as storage arrays and mainframes, where Berbee has to take the dimensions and estimate a "power footprint" to come up with pricing. "We look at the physical dimensions to figure out its 5 kilowatt rack equivalence," says Vander Hill.
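
Here is a minimal sketch of that "power footprint" estimate, assuming the gear is charged by whichever is larger, its power draw or the floor space it occupies, measured in 5 kilowatt rack equivalents. The charging rule, per-rack footprint, and example device are assumptions for illustration; the article doesn't spell out Berbee's exact formula.

# Sketch of estimating a non-rackable device's "5 kW rack equivalence."
# The charging rule, per-rack footprint, and example device are illustrative assumptions.

import math

RACK_KW = 5.0              # the new facility's per-rack power standard
RACK_FOOTPRINT_SQFT = 8.0  # assumed floor space per rack, including clearance

def rack_equivalents(device_kw: float, device_sqft: float) -> int:
    """Bill by whichever is larger: power draw or floor space, in 5 kW rack units."""
    by_power = device_kw / RACK_KW
    by_space = device_sqft / RACK_FOOTPRINT_SQFT
    return math.ceil(max(by_power, by_space))

# Hypothetical storage array: 12 kW draw over 20 sq ft of floor space.
print(rack_equivalents(device_kw=12.0, device_sqft=20.0))  # 3 rack equivalents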

Befriend your utility. Given that power is the primary focus of any new data center construction, it pays to work closely with the companies providing the juice. Vander Hill says power rates in the Midwest are good, but can't compare with the hydroelectric power centers of the Pacific Northwest. Berbee is purchasing backup generation services for its new facility from its local utility. How does this work? The utility provides backup generators as long as it can use the 1.5 megawatt generators to alleviate peak loads when necessary. The benefit for Berbee: it doesn't have to build and manage backup generators and keep them fueled with diesel. The backup services are baked into Berbee's utility rates.

What's the lifespan of a data center? Vander Hill says the biggest challenge with aging data centers is all the moving parts needed to cool them: repair costs for fans and compressors add up. He estimates that the average data center lasts 15 to 20 years before it's no longer cost-effective to maintain. Power-saving software and virtualization technology could extend the life of data centers in the future, he notes.
