The 21st Century Data Center: An overview

Summary: Data centers range from on-premise facilities running traditional enterprise applications to massive outsourced installations offering a variety of cloud services. We examine the state of play in data center-land, and consider some of the trends that will shape its future.

The 21st century data center has its roots in the mainframe-housing computer room of the 1960s and 70s. Both are large, well-secured facilities built to house complex and expensive IT infrastructure that requires sophisticated power and cooling systems. The modern data center began life in the 1990s as banks of servers within companies' own premises (often replacing mainframes in former computer rooms). Next came off-site colocation facilities where companies could house their servers in a variety of spaces — private suites, lockable cages or racks, for example. Finally we have today's purpose-built palaces of processing, storage and connectivity, which typically deliver some combination of colocation, managed hosting and (public, private and/or hybrid) cloud services.

Some of these data centers are truly gargantuan. The world's largest, in terms of available power supply and cooling system capacity, is Switch Communications Group's SuperNAP-7 in Las Vegas, a colocation facility covering 407,000 square feet (37,811 square metres, or 3.78 hectares, or 9.34 acres). Such is the cloud-computing-driven demand for high-density colocation that Switch has announced expansion plans for up to 2.2 million square feet (204,387 square metres, or 20.4 hectares, or 50.5 acres) of data center space, beginning with SuperNAP-8 (expected to open in May) and then SuperNAP-9, which will break ground in 2014.

Switch's SuperNAP complex in Las Vegas has expansion plans for a total of 2.2 million square feet (204,387 square metres) of data center space. Images: Switch Communications Group

These days, the 'data center user' ranges from massive IT infrastructure and services providers like Amazon and Google to any business user or consumer who connects to the internet. For the purposes of this article, however, a typical user is an IT manager in a business with an on-premise data center running a mix of traditional enterprise applications and virtualised workloads. That business is already using some cloud services — most likely SaaS applications — and is weighing its next move: expand on-premise capacity, move more workloads to the public cloud, or adopt a hybrid strategy.

Data center usage is currently undergoing a transformation due to the increasing use of 'cloud' (i.e. outsourced, external) infrastructure, services and applications. According to Cisco, nearly two-thirds (64 percent) of all data center traffic will be processed in cloud facilities by 2016, rather than in 'traditional' (i.e. on-premise) data centers. In 2011, by contrast, the estimated split was 61 percent traditional and 39 percent cloud.

Data center design and construction
A typical data center is an industrial building that provides floor space for housing IT equipment, along with all of the necessary power distribution, cooling, cabling, fire suppression and physical security systems. Data centers are normally located in places where the cost of electricity and land is low, but where there's also a sufficient pool of labour to staff the facility (unless it's a so-called 'lights-out' data center that's administered remotely and has next to no on-site staff).

Data centers are classified on a four-level Tier system based on work by the Uptime Institute, a data center industry consortium, and ratified by the Telecommunications Industry Association as the TIA-942 standard. For each tier the standard specifies the architectural, security, mechanical and telecoms requirements needed to achieve a given level of availability: 99.671% (28.8 hours downtime per year) for Tier 1; 99.741% (22 hours downtime/year) for Tier 2; 99.982% (1.6 hours downtime/year) for Tier 3; and 99.995% (0.4 hours downtime/year) for Tier 4.
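The arithmetic behind these figures is straightforward: annual downtime is (1 − availability) × 8,760 hours. A quick sketch in Python (note that computing Tier 2 directly from 99.741% gives roughly 22.7 hours, slightly above the commonly quoted 22):

```python
# Annual downtime implied by each TIA-942 tier's availability target.
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

TIER_AVAILABILITY = {1: 0.99671, 2: 0.99741, 3: 0.99982, 4: 0.99995}

for tier, availability in sorted(TIER_AVAILABILITY.items()):
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"Tier {tier}: {availability:.3%} uptime "
          f"-> {downtime_hours:.1f} h downtime/year")
```

Running this prints 28.8, 22.7, 1.6 and 0.4 hours for Tiers 1 through 4 respectively.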

Another widely quoted figure relating to data centers is the PUE, or Power Usage Effectiveness. Developed by The Green Grid consortium, PUE is a simple ratio of the total power used by a data center to the power used by the IT equipment. The ideal PUE ratio is 1.0, with real values ranging from over 2.5 down to around 1.1 (see below). Given the size of its data center operation, it's no surprise to find Google at the forefront of power efficiency: the search giant reports a 'comprehensive' PUE of 1.12 across all of its data centers for Q4 2012 ('comprehensive' because Google includes all sources of power overhead and takes measurements all year round). Another way of expressing PUE is as its reciprocal, which is known as Data Center Infrastructure Efficiency or DCiE. Google's score on this metric is 0.893, or 89.3 percent.
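The PUE/DCiE relationship can be sketched in a few lines of Python. The kilowatt figures below are hypothetical, chosen so that the result matches Google's reported Q4 2012 PUE of 1.12:

```python
def pue(total_facility_power_kw: float, it_equipment_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_power_kw / it_equipment_power_kw

def dcie(pue_value: float) -> float:
    """Data Center Infrastructure Efficiency: the reciprocal of PUE."""
    return 1.0 / pue_value

# A hypothetical facility drawing 1,120 kW overall to support 1,000 kW
# of IT load has a PUE of 1.12 -- the 120 kW difference is overhead
# (cooling, power distribution losses, lighting and so on).
p = pue(1120, 1000)
print(f"PUE  = {p:.2f}")        # 1.12
print(f"DCiE = {dcie(p):.1%}")  # 89.3%
```

A PUE of 1.12 therefore means that for every watt delivered to IT equipment, only 0.12 watts go to facility overhead.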

To gauge the state of play in data-center-land, the Uptime Institute kicked off an annual survey in 2011. Its 2012 Data Center Industry Survey, conducted in March/April, is based on responses from 1,100 facilities managers, IT managers and C-level executives from North America (50%), Europe (23%), Asia (14%) and Central/South America (7%).

Among the survey's key findings was that 30 percent of respondents expected to run out of data center capacity in 2012, with the majority planning to keep current sites going by consolidating servers and upgrading facilities infrastructure. Backing up this point, compared to the 2011 survey, 10 percent fewer respondents planned to build a new data center, while 10 percent more planned to push more workloads to the cloud.

Key drivers towards cloud adoption identified by the survey were cost reduction (27%), scalability (23%), customer/user demand (13%) and speed of deployment (13%). The main impediments to cloud adoption were security concerns (64%), compliance/regulatory issues (27%), cost (24%) and lack of internal cloud computing management expertise (20%).

The Uptime Institute also found that organisations were making more precise PUE measurements in 2012 than the previous year, with the average reported PUE falling in the 1.8-1.89 range. Power-saving strategies revolved around hot aisle/cold aisle containment and raised server inlet air temperatures (finding the sweet spot between IT equipment fan power and cooling energy).

The Uptime Institute's 2012 Data Center Industry Survey reports a wide range of PUE (Power Usage Effectiveness) figures from its 1,100 respondents. Only six percent of respondents claim a PUE of less than 1.3. Image: Uptime Institute

Other trends noted in the survey include increased interest in prefabricated modular data centers or components (9% deployed, 8% planning to deploy, 41% considering) and the beginnings of a move towards implementing Data Center Infrastructure Management (DCIM) tools (see the following pages for more on these trends).



