
The Right Kind of Room at the Data Center

Continued strong processor and storage growth, along with significant product integration and form-factor reduction, is forcing operations to rethink the notion of data center space. "Square feet" as a descriptor is losing relevance, unless weight, available power, and cooling capabilities are also factored in.
Written by Rich Evans, Contributor


META Trend: Through 2006, storage management automation and process immaturity will limit net annual enterprise storage capacity growth to 55%-65% (67%-84% gross procurement), with price/capacity improving 35% per year. To effectively leverage and manage enterprise media assets, users will require a data/media center of excellence. Through 2004/05, software value-added functions, manageability, integration, and interoperability will be the primary enterprise storage differentiators.

Rarely do technology advances exceed analyst projections, except for parameters that markedly affect operations groups. The mundane but critical data center infrastructure parameters of component weight per square foot (sq ft), power consumption per sq ft, and air conditioning required per sq ft are all growing faster than the facilities planning assumptions of only 12-18 months ago anticipated - especially for dense storage and emerging blade arrays. For example, it was expected that data center design points of 400 watts/sq ft would suffice until 2003/04 (typical design points were 100 watts in the late 1990s, with 200 watts considered a reasonable design point for 2000-02), with requirements doubling again by 2008/09. However, for aggressive vendors exploiting new, denser components (with their requisite high power densities), high-speed intracomponent communication through powerful non-blocking switches drives a basic need for high power and cooling. Long distances (several feet) and less dense cabinets are not an option for these high-speed storage and blade designers.

It is safe to say that the former 2008/09 goal of a data center designed for 800 watts per sq ft will have to be pulled in by two to three years, to 2005/07, and by the end of the decade we will see power requirements pushing 1,000 watts/sq ft. For example, contemporary dense storage designs, such as HDS’s 9980V, are already pushing “safe” 400 watts/sq ft designs. A fully configured solution requires 16.9 kilowatts with a physical floor space requirement of 26.9 sq ft. This translates into about 42 sq ft of data center real estate (at approximately 400 watts of power available per sq ft) being required just for power, or a 1.57 area ratio (42 sq ft needed to meet the component’s power requirements, divided by the component’s space requirement of 26.9 sq ft - in general, an area ratio of 2+ is considered reasonable because of component clearances).
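
To make the arithmetic concrete, the short sketch below reproduces the 9980V calculation. The figures are those cited above; the function and variable names are illustrative only, not vendor or META Group terminology.

```python
# Rough sketch of the power-footprint arithmetic cited above.
# Figures are the HDS 9980V numbers quoted in the text; names are illustrative.

def power_footprint(total_watts: float, watts_per_sqft: float) -> float:
    """Floor space needed just to supply the device's power draw."""
    return total_watts / watts_per_sqft

def area_ratio(power_sqft: float, physical_sqft: float) -> float:
    """Power-driven space divided by the device's physical footprint."""
    return power_sqft / physical_sqft

device_kw = 16.9          # fully configured power draw (kW)
device_sqft = 26.9        # physical floor space (sq ft)
design_watts_sqft = 400   # data center design point (watts/sq ft)

needed_sqft = power_footprint(device_kw * 1000, design_watts_sqft)
ratio = area_ratio(needed_sqft, device_sqft)

print(f"Floor space needed for power: {needed_sqft:.0f} sq ft")  # ~42 sq ft
print(f"Area ratio: {ratio:.2f}")                                # ~1.57
# An area ratio of 2+ is generally considered reasonable,
# given required component clearances.
```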

Moreover, floor loads for this system (and for systems to follow from EMC) average more than 240 pounds/sq ft, and heat loads are 22 kilowatts. A contemporary 400 watts/sq ft data center (built on a slab for load-carrying capability - a second story might be iffy with these loads) can accommodate this new generation of equipment, at least for 18-24 months. Factoring in 18- to 24-month technology and form-factor life cycles, a contemporary data center that cannot move to 400 watts/sq ft or more will be less than 30% filled by 2005, and only 20% filled by 2007.

Equally important, customers looking at already built-out space must do in-depth power, weight, and cooling due-diligence studies, calculating achievable floor space density against markedly increasing weight, power, and cooling densities. Customers with design limits of 100 or 200 watts per sq ft, where power cannot be upgraded, will have to live with very low floor utilization, or may even need to convert raised floor to office space.

More important, users facing 40%+ storage growth and three-year refresh cycles (plus another 20% of new, power-hungry components added each year as replacements) must make strong facilities planning a standard data center procedure that tightly couples the plan, build, and run organizations. “Hot” boxes can undermine planned utilization schemes, leaving large portions of an average data center unusable in only 24-36 months.

Needed: Data Center Floor-Charging Algorithms?
For many operations groups faced with ongoing data center floor space challenges, it is time to think through a charging algorithm. Although we would be the first to admit that it will have very little effect on purchases, it will provide a vehicle (i.e., definable facilities costs, technology tradeoffs) for a more open facilities discussion with clients, who can help argue the case for ongoing facilities enhancements.

For 80%+ of operations, data center floor space chargeback is based on a simple algorithm: percentage of floor space used by a device multiplied by data center facilities costs. Moreover, a very large portion of this group does not even “see” the facilities costs, which are passed along as part of general facilities charges - almost always as costs below the line. However, for the remaining 20% or less, three definite chargeback camps are emerging, all of which charge (to a degree) based on space used and power consumed. These more complex algorithms factor in the percentage of data center resources consumed and penalize space that is “poorly” used. The three camps are:

  • Data centers that have upgraded available power per sq ft to “good” (which is close to 400 watts/sq ft)
  • Data centers that have average power, or about 200 watts/sq ft
  • Data centers that have below-average power available (100 watts or less per sq ft)
For the first camp, floor space is charged as a simple percentage of the whole, with one exception intended to keep the power “available” in line with power “required.” For example, if a large piece of equipment is added that consumes only 50% of the average power (per sq ft) consumed by other tenants, that user’s floor space cost is upped by 20%-30%, because the large device is making poor use of a critical resource.

For the second and third camps noted, floor space charges are based on a simple percentage of use of the whole, but there are penalties for power-hungry devices. If a device consumes more power/sq ft than the data center design limit, there is an additional charge. For example, in the second camp, if a device occupies 4 sq ft and consumes 300 watts/sq ft, the square-foot charge for the 4 sq ft device is 1.5x the average floor costs (300 watts/sq ft needed divided by 200 watts/sq ft available).

For centers with poor power, the third camp provides a more draconian cost uplift, because the data center is significantly underpowered. For example, a device with a power ratio of 4 (200 watts/sq ft required divided by 50 watts/sq ft available) might experience more than “just” a 4x space multiplier. When a data center is so far below average in power capabilities, the 200 watts/sq ft device might carry a 10x greater floor charge (4 for power multiplied by a 2.5 economic factor to encourage data center upgrades). As data centers continue to age, we expect that, by 2005, 40% of operations will institute differential floor-costing algorithms to encourage user involvement in obtaining critical upgrades, and by 2007, instead of just three camps, there will be four, with the fourth camp doling out even more penalties for power-hungry devices.
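
As an illustration of how such differential charging might be expressed, the sketch below folds the baseline percentage-of-space charge and the three camps' adjustments into a single calculation. The multipliers, thresholds, and the 2.5 economic factor come from the examples above; the function structure, names, and the assumed cost per sq ft are our own simplifications, not a standard formula.

```python
# Minimal sketch of a differential floor-charging calculation based on
# the three camps described above. Multipliers mirror the examples in
# the text; structure and names are illustrative only.

def floor_charge(device_sqft: float,
                 device_watts_sqft: float,
                 design_watts_sqft: float,
                 avg_tenant_watts_sqft: float,
                 cost_per_sqft: float,
                 economic_factor: float = 1.0) -> float:
    """Return the floor charge for one device under the schemes above."""
    base = device_sqft * cost_per_sqft  # simple percentage-of-space charge

    if device_watts_sqft > design_watts_sqft:
        # Second and third camps: surcharge in proportion to the power
        # ratio, scaled by an economic factor on badly underpowered floors.
        power_ratio = device_watts_sqft / design_watts_sqft
        return base * power_ratio * economic_factor

    # Within the design limit: apply the first camp's uplift to large
    # devices drawing only half the average power per sq ft of other tenants.
    if device_watts_sqft <= 0.5 * avg_tenant_watts_sqft:
        return base * 1.25  # 20%-30% uplift; 25% chosen here
    return base

cost = 100.0  # assumed cost per sq ft, purely for illustration

# Second camp: 4 sq ft device at 300 watts/sq ft on a 200 watts/sq ft floor
print(floor_charge(4, 300, 200, 200, cost))                     # 600.0 -> 1.5x base

# Third camp: 200 watts/sq ft device on a 50 watts/sq ft floor,
# with the 2.5 economic factor -> 10x the base charge
print(floor_charge(4, 200, 50, 50, cost, economic_factor=2.5))  # 4000.0

# First camp: large device drawing half the tenant average on a 400 watts/sq ft floor
print(floor_charge(40, 150, 400, 320, cost))                    # 5000.0 -> 25% uplift
```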

Business Impact: Although technology continues to shrink form factors and increase speed, parameters such as power consumption and cooling require substantial facilities investment. Insufficient or poorly timed investment can affect revenue as well as customer satisfaction.

Bottom Line: Powerful but power-hungry storage and processor components require that operations and facilities groups better quantify current and future power, cooling, and weight requirements - or face running out of space.

META Group originally published this article on 13 March 2003.
