Photos: A tour inside one of Microsoft's cloud datacenters

Taking a look at 270 acres of servers in the high desert plains.
Written by Simon Bisson, Contributor
Microsoft's datacenters dwarf the small Eastern Washington town of Quincy.

Image: Microsoft

The numbers for Microsoft's cloud are impressive: over a million servers and more than a hundred sites around the world. So if we're going inside one, it might be an idea to start with some statistics.

  • 1989: Microsoft opens its first datacenter on its Redmond, Washington campus.
  • 200-plus: The number of online services delivered by Microsoft's datacenters 24x7x365.
  • $15 billion-plus: Microsoft's investment in building its cloud infrastructure.
  • 30 trillion-plus: The number of data objects stored in Microsoft datacenters.
  • 1.5 million-plus: The average number of requests its networks process per second.
  • 3: The number of times Microsoft's fiber optic network, one of North America's largest, could stretch to the moon and back.
  • 3.8 billion kWh: The amount of green power purchased by Microsoft as part of its carbon-neutral goal.

Having built and run a national ISP in the UK and having been responsible for setting up more than one datacenter over the years, I was intrigued to see how Microsoft built and ran something so very much larger than my tiny patch of cloud. So when the invite came to take part in its first datacenter tour for more than a decade, I jumped at the chance.

Even with an invitation, getting into a Microsoft datacenter isn't easy. I'd had to provide ID and sign several forms before I got on a coach with a small group of enterprise and datacenter journalists early one September morning. We were about to drive three hours across Washington state to a small town near the Columbia River, where one of the company's massive datacenters takes advantage of low-cost hydroelectric power and the predictable climate of the high desert plains.

(I should note that there were some restrictions on what we could do, one being that we weren't allowed to take any pictures. However, the images that Microsoft provided, which I've featured in this story, matched what we saw.)

Quincy is an unassuming little town, one that's been at the heart of Eastern Washington's agricultural industry for decades; the area describes itself as the home of American potato-growing. But the rise of the cloud has changed the shape of the town, with its cheap power bringing datacenters to the desert. Hot in the summer, cold in the winter, Quincy has a predictable climate that makes it easy to condition datacenters, bringing in air through adiabatic coolers on hot days and using it without any conditioning at night and when it's cool.
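
To put that cooling strategy in concrete terms, here's a minimal sketch of the kind of decision the cooling plant is making. It's purely illustrative -- the thresholds, mode names, and function are my assumptions, not Microsoft's control logic.

```python
# Illustrative only: a toy cooling-mode selector for a dry-climate datacenter.
# The thresholds and mode names are assumptions, not Microsoft's figures.

def cooling_mode(outside_temp_c: float, humidity_pct: float) -> str:
    """Pick a cooling strategy based on outside air conditions."""
    if outside_temp_c <= 24:
        # Cool nights and winters: outside air is used as-is.
        return "free-air"
    if humidity_pct <= 40:
        # Hot, dry days: evaporating water into the intake air
        # (adiabatic cooling) drops its temperature cheaply.
        return "adiabatic"
    # Rare hot-and-humid conditions fall back to mechanical chillers.
    return "mechanical"

print(cooling_mode(outside_temp_c=35, humidity_pct=20))  # adiabatic
print(cooling_mode(outside_temp_c=10, humidity_pct=60))  # free-air
```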

Microsoft's Quincy facility isn't so much one datacenter as several, showcasing successive generations of the company's datacenter designs. While the earliest datacenters were traditional racks of off-the-shelf servers, much like you'd see at any co-location site, the Quincy site contains examples of three different approaches to datacenter design. Together the facilities add up to more than 270 acres of land, spread across the small desert town.

A line of ITPACs at Microsoft's Quincy site.

Image: Richard Duval/Microsoft

The oldest is the familiar modern datacenter, with hot and cold aisles between the servers, although different rooms house different releases of the server hardware. What's perhaps most interesting is that as hardware has got more powerful and denser, there are fewer racks, keeping the halls of servers within their design power budget. The jump between the original tier-one vendor hardware and the new Open Compute-based systems is a big one, and while they can use the same halls, the newer systems need far fewer racks.

The amount of hardware is also reduced by Microsoft's shift to a virtualized infrastructure, using software-defined networks and virtual appliances to handle switching and other network functions. It's a little odd seeing a datacenter without much of the old familiar networking hardware, but it's the way more and more facilities are being designed, with virtual appliances handling networking at the rack level.
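
The idea is easier to see as a sketch. This is purely illustrative -- the names and numbers are my assumptions, not Microsoft's design -- but it captures the shift: network functions that used to be dedicated boxes now run as software in each rack.

```python
# Purely illustrative: in a software-defined design, functions that used to be
# dedicated appliances (load balancers, firewalls, routers) run as virtual
# appliances on the same commodity servers as the workloads.

RACK_VIRTUAL_APPLIANCES = ["virtual switch", "virtual load balancer", "virtual firewall"]

def describe_rack(rack_id: str, workload_vms: int) -> dict:
    """Summarise what a rack hosts when networking is delivered in software."""
    return {
        "rack": rack_id,
        "dedicated_network_hardware": ["top-of-rack switch"],  # little else remains
        "virtual_appliances": RACK_VIRTUAL_APPLIANCES,
        "workload_vms": workload_vms,
    }

print(describe_rack("hall-2-rack-17", workload_vms=400))
```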

Step outside the original halls, and you're presented with examples of Microsoft's current build of datacenters. These newer facilities are containerized, both inside huge ventilated sheds and outside on concrete pads. Built by familiar hardware vendors, the containers are the heart of Microsoft's current cloud services, running services like Azure. Designed to need just power, networking, and a trickle of water, containers like these are the foundation of the modern cloud, handling all the compute and storage we expect.

Snow baffles make it easier to get to ITPACs in the depths of winter.

Image: Richard Duval/Microsoft

The containerized datacenter modules at Quincy show off different generations (and vendors) of what Microsoft calls an ITPAC. The initial installations sit in a large roofed building, stacked two containers high. The building itself didn't originally have walls, but when staff found themselves wading through snowdrifts shortly after the ITPACs were commissioned, the gaps were quickly filled with snow baffles. Later ITPACs are deployed outside, sitting on concrete pads.

We didn't see the initial tranche of containerized hardware, the original ITPAC designs, which run in the company's Chicago site. However, what's most important about the ITPAC isn't what it contains, but how it was designed. All Microsoft was concerned about was a set of specifications: power, compute, and storage, as well as the connection points for networking, power supplies, and cooling. Outside of that, everything -- even the container design -- was left to the vendors. That showed in the differences between ITPACs, some based on traditional shipping containers, others more like mobile homes.
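
A rough way to picture that contract is as a data structure the vendors build against. The fields and figures below are hypothetical, sketched from the description above rather than taken from Microsoft's actual specification.

```python
# Hypothetical sketch of the kind of interface spec an ITPAC vendor might build to.
# Field names and values are illustrative assumptions, not Microsoft's figures.
from dataclasses import dataclass

@dataclass
class ITPACSpec:
    power_kw: int             # total power budget for the module
    compute_servers: int      # number of servers to deliver
    storage_tb: int           # raw storage capacity
    network_uplinks: int      # fibre connection points to the site network
    power_feeds: int          # utility power connection points
    water_supply_lpm: float   # the "trickle of water" for adiabatic cooling

# Microsoft fixes the contract; the vendor chooses the enclosure --
# shipping container, mobile-home-style module, or something else.
quincy_module = ITPACSpec(
    power_kw=600,
    compute_servers=2000,
    storage_tb=5000,
    network_uplinks=8,
    power_feeds=2,
    water_supply_lpm=10.0,
)
```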

Walking through an ITPAC is like walking through the hot or cold aisle of any modern datacenter. You won't see anything surprising -- after all, the key to delivering cloud at scale is to reduce the risks associated with running a datacenter, so much of the hardware is familiar servers with the barest hint of networking equipment.

ITPACs in the open, no roof, just cloud.

Image: Richard Duval/Microsoft

Microsoft is in the middle of a major expansion of its Quincy site, building a new hyper-scale datacenter. Composed of four buildings, each an independent set of equipment, it's one of the biggest sites I've seen. Sadly, as it's still under construction, with only one building being fitted out at present, we weren't allowed to go inside for safety reasons. Still, trundling round the buildings in a coach gives you a feel for their size: each is bigger than a hangar built for a 747.

Under those four roofs Microsoft's datacenter strategy is taking a whole new direction. No longer is it buying in ITPACs and servers from third-party vendors. Instead, it's using OEMs to build its own Open Compute Project hardware. That's allowing it to build high-density racks, which will support the next generation of its cloud services on a software-defined platform that can be reconfigured as required using the software-defined datacenter tooling built into both Windows Server and Azure.

While we didn't go in, Microsoft gave us a photograph of racks of Open Compute hardware in its newest datacenters.

Image: Microsoft

The new datacenter is designed to last considerably longer than its predecessors. When the new Quincy site finally comes to the end of its life, it'll have been through many different generations of hardware, adding compute and storage while staying within the site's fixed power budget. That's one reason why Microsoft has gone back to using racks of servers, rather than its current containerized designs, making it easier to swap out servers and install newer hardware.

This isn't the end of Microsoft's datacenter evolution: it's already experimenting with new ways of delivering the cloud, like its recent tests of underwater datacenters. And while that work goes on, the company will have built and rolled out even more generations of datacenters, taking more land on the high desert plains for more massive facilities full of compute, networking, and storage.

There's a strong focus on environmental impact at Microsoft, with all its datacenters carbon neutral (Quincy runs off nearby hydroelectric systems in the Columbia River). Similarly, Microsoft recycles all its old servers, though disks never leave the site -- at least not in one piece. We were shown one of the disk shredders the company uses, a hefty piece of equipment that will quickly turn a hard drive into a pile of unrecognizable bits of metal: smashed, crushed, twisted, and mangled. It seemed to be a very thorough approach to data security!

A lot of what Microsoft does at its datacenters is commercially sensitive, and it's good to see it sharing as much as it does through the Open Compute Project. Three of the four biggest cloud vendors are part of the OCP, sharing designs and lessons about cloud servers, storage, and networking, and giving us clues about the future of the cloud.

One piece of work Microsoft has shared with the Open Compute Project is a new approach to UPS, putting batteries in-line with the PSU, inside the server itself. That allows it to use more of the datacenter space for servers, without giving up floor space to battery rooms that keep systems running in the event of a power failure. In-line batteries can be relatively small, as they're only needed to keep a server up while the site's massive generators start.
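
The arithmetic behind that is straightforward. Here's a back-of-the-envelope sketch with assumed numbers -- they're illustrative, not Microsoft's figures.

```python
# Back-of-the-envelope sizing for an in-line (in-server) battery: it only has to
# carry the server from grid loss until the site generators pick up the load.
# All numbers here are illustrative assumptions, not Microsoft's specifications.

server_power_w = 300          # assumed per-server draw
generator_start_s = 30        # assumed time for generators to start and stabilise
safety_margin = 2.0           # design headroom

required_wh = server_power_w * generator_start_s / 3600 * safety_margin
print(f"Battery energy needed per server: {required_wh:.1f} Wh")
# ~5 Wh -- a few laptop-class cells per server, versus a dedicated battery room
# sized to carry the whole facility.
```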

So what were my final impressions of the site? If I were to say that I was impressed by its mundanity, that wouldn't be a critique of Microsoft. It's actually high praise.

A hyper-scale cloud needs to be mundane; it needs to be simple, even boring. It has to run 24 hours a day, 7 days a week, 365 days a year, year after year, decade after decade. That means it needs to be reliable: huge rooms where nothing in particular happens, just the roar of server fans and the blinking of LEDs. If it were anything else, it wouldn't be a cloud that thousands of businesses and millions of people rely upon. If it were anything else, Microsoft would have failed.

"Impressive in its mundanity." Yes, that pretty much sums it up.
