Ride ‘em hot and fast

Written by Simon Bisson, Contributor, and Mary Branscombe, Contributor

The high deserts of Oregon and Washington may have inspired many a cowboy movie, but they’re finding a new role in the fast-growing world of high-capacity data centres. The fourth and fifth generations of data centres are following cheap power, and the hydroelectric schemes of the northwest US (along with a climate of long cold winters and short hot summers) are making those high plains the place to go…

At this year’s Microsoft TechEd, in the decidedly data-centre-unfriendly heat and humidity of Atlanta, the company’s data centre evangelist, Rick Bakken, walked an audience of IT professionals through the design and philosophy of Microsoft’s massive data centres. He talked about more than the architecture and the technologies: he also covered the way Microsoft runs and manages its ever-growing number of online services.

When you mention Microsoft and data centres, it’s easiest to think of its pioneering third-generation centres, like the massive 700,000-square-foot site it built two years ago in Chicago. That building cost over a billion dollars, just for the basic infrastructure – before any servers were installed. That’s a huge sunk capital cost, and the company realised that it needed to change the way it designed its data centres – focusing on one of the largest costs, the need for cooling.

The result was a new way of building data centres, with everything needed for a module housed in two containers – one holding support tools and infrastructure, the other the servers. Part of the shift was a move to cloud approaches, with management moving up the stack: instead of relying on hardware for resilience, the new design used lots of copies. The reliability of any individual server was lower, but the system as a whole was a lot cheaper to run.
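To see why lots of cheap copies can beat a single highly reliable server, here’s a rough back-of-the-envelope sketch – the per-server availability figures are illustrative assumptions, not Microsoft’s numbers:

```python
# Rough availability arithmetic: many cheap replicas vs one premium server.
# Per-server availability figures are illustrative assumptions only.

def replicated_availability(per_server: float, copies: int) -> float:
    """Chance that at least one copy is up, assuming independent failures."""
    return 1 - (1 - per_server) ** copies

premium = 0.9999                                # one expensive, reliable box
commodity = replicated_availability(0.99, 3)    # three cheap commodity boxes

print(f"Premium server:        {premium}")          # 0.9999
print(f"3x commodity replicas: {commodity:.6f}")    # 0.999999
# The replicated commodity setup wins on availability while costing far less
# per server -- the trade-off the container-based design exploits.
```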

Microsoft’s fourth-generation data centres build on this philosophy, eschewing large buildings for simple concrete pads and basic weather shelters. Bakken said: “There won’t be enormous buildings – we’re not doing that again!” He also noted that Microsoft keeps all its data centre staff in a single organisation, from architects to administrators to technicians: it’s “the only way to run as a utility model”.

Data centres are going to be increasingly important to Microsoft, if the global connected-device forecasts are to be trusted. There are going to be a lot of devices out there consuming the cloud, and Microsoft is already running over 200 cloud services. There is still a legacy of older properties running on specific servers, in specific configurations, but the company is well on the way to moving everything to commodity hardware in a federated infrastructure. Some services, like Bing, are completely open and public; others, like the HealthVault medical records service, are locked down. Microsoft’s data centres need to support, and offer, a number of different types of service.

You might think that Microsoft is fairly new to the data centre game, but it’s been around a long time – it opened its first data centre in 1989 and has been running online services for over two decades. Some services, like Hotmail and Messenger, are free to use, but for the data centres free means expensive, with a lot to be done to support millions of demanding users.

The switch to virtualisation has helped. The management team can move VMs between servers when hardware fails, improving service reliability. It also means that the role of the IT staff has changed, with self-service portals and pools of servers removing much of the day-to-day drudgery of server configuration and deployment.
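In practice, the decision the management layer makes when a host fails is simple to sketch. The pool, hosts and migrate() helper below are purely hypothetical – a minimal illustration of the idea, not Microsoft’s tooling:

```python
# Illustrative sketch: when a host fails, restart each of its VMs on the
# least-loaded healthy host in the pool. Hosts, VMs and migrate() are
# hypothetical, not a real management API.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    healthy: bool = True
    vms: list = field(default_factory=list)

def migrate(vm: str, target: Host) -> None:
    target.vms.append(vm)
    print(f"restarted {vm} on {target.name}")

def evacuate(failed: Host, pool: list[Host]) -> None:
    """Move every VM off a failed host onto the emptiest healthy host."""
    for vm in list(failed.vms):
        target = min((h for h in pool if h.healthy and h is not failed),
                     key=lambda h: len(h.vms))
        failed.vms.remove(vm)
        migrate(vm, target)

pool = [Host("rack1-h1", vms=["mail-01", "web-03"]),
        Host("rack1-h2"),
        Host("rack2-h1", vms=["db-02"])]
pool[0].healthy = False
evacuate(pool[0], pool)
```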

Now the company is moving at full speed into the cloud era, where building data centres fast enough becomes the speed bump: you’re limited by when and where you’ll need that capacity. Microsoft is addressing some of these issues with its cloud architecture, which mixes primary and secondary data centres, using them as anchor and edge nodes for its applications. The aim is to build both redundancy and resiliency into the application, to keep things running if there’s a failure. It’s also starting to roll out its own content delivery network as part of Azure, meaning it owns the hardware – and the resulting SLA. The future will be smaller nodes in more locations.

Bakken describes Microsoft’s data centres as “the IaaS provider to the PaaS businesses in Microsoft”. That means there’s a focus on getting the most compute performance per watt of power delivered, with efficiency tracked using the Power Usage Effectiveness (PUE) metric. Currently PUE is calculated as total facility power divided by IT equipment power, though there’s work going on to develop an application performance-per-watt metric.
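As a quick worked example of that ratio (the load figures here are invented purely to show the arithmetic):

```python
# PUE = total facility power / IT equipment power.
# The load figures below are invented purely to illustrate the arithmetic.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

# A facility drawing 12 MW in total to feed 10 MW of servers scores 1.2:
print(pue(12_000, 10_000))  # 1.2 -- the extra 2 MW goes on cooling,
                            # power distribution losses and lighting
```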

New data centres, like Microsoft’s Dublin facility, have a much lower PUE than traditional facilities, with Dublin scoring 1.2 by using outside air for cooling. Not needing massive air conditioning makes a big difference, and it’s an important part of the design philosophy behind Microsoft’s fourth-generation data centres. These use a modular, premanufactured build, with roofs over the ITPAC containers, and can reach a PUE of around 1.05. Costs are around $10M per megawatt at this point, though future designs should cut capital costs by a further factor of five and operational expenditure roughly six-fold.
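To put those PUE scores in context, here’s what they mean for every megawatt of server load (illustrative arithmetic only):

```python
# What the quoted PUE scores mean per megawatt of IT load
# (1 MW of servers assumed purely for illustration).

it_load_kw = 1_000
for label, pue in [("Dublin, free-air cooling", 1.20),
                   ("Fourth-generation ITPAC design", 1.05)]:
    overhead_kw = it_load_kw * (pue - 1)
    print(f"{label}: {overhead_kw:.0f} kW of overhead per MW of servers")

# 200 kW vs 50 kW: moving from a PUE of 1.2 to 1.05 cuts the non-IT power
# draw by a factor of four for the same compute.
```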

Bakken says that “Microsoft leverages vendors, working with OEMs”. The original ITPAC design came from Microsoft as a proof of concept; the deployed systems were designed by OEMs. They use very generic servers, with no local fans needed, as massive fans drive the airflow through the container. Cooling is simple, with adiabatic cooling only needed for eight weeks of the year in Washington. That brings the cost of operation down dramatically. New equipment can be added quickly, with four ITPAC pods as a deployment unit. Each ITPAC’s cost includes set-up, maintenance and upgrades, with three complete replacements over a 12-year period as part of the contract – and the whole unit is recycled at the end of it.

So what of the future? Bakken points out that Microsoft is already buying between 2 and 5% of global server production every quarter, and that number is likely to rise. The company is looking to the future, thinking about its 5th generation data centres. There are likely to be big changes here, with no spinning devices at all (no cooling fans and a switch to SSD storage – even though it’s initially likely to increase the amount of maintenance needed). Microsoft is actively hiring staff for its data centres, focusing on folk who’ve built cloud data centres before – the people who’ve been designing and building infrastructure for companies like Google and Facebook.
