Putting the cloud in a box: How Microsoft built Windows Server 2012

Summary: Microsoft wanted Windows Server 2012 to be the 'definitive cloud operating system'. So how did its engineers go about this? For a start, by not writing any production code for a year, according to project lead Jeffrey Snover.

Part of Microsoft's series of big bets on its future, Windows Server 2012 is a major upgrade to the company's server operating system — one designed to change the way businesses build and manage datacentres.

The idea was to build the "definitive cloud operating system", different to anything Microsoft had built before and anything the industry had seen, according to Jeffrey Snover, the lead architect for Windows Server 2012.

[Image: Windows Server 2012 is designed to change the way businesses build and manage datacentres.]

Microsoft launched the server OS update at the beginning of September, after three years of work. There was a lot the software maker needed to get right in the new OS: from handling virtualisation, to helping IT departments cope with BYOD, to delivering tools for managing many servers at the same time.

That meant the development process was very different from projects Snover — best known as the inventor of Microsoft's PowerShell scripting language — had worked on before, he told ZDNet.

"The first thing we did was stop. We said to everyone, 'Put your pens down, let's be thoughtful about this'," he said. "For a whole year, all the engineers, not a single line of production code was written."

Testing and talking

Instead, that first year was spent on planning and testing, and retooling the development system for the server OS. The planning part meant talking to hardware vendors and buyers, to understand just where the server and datacentre market was going — getting what Snover called "the voice of the technology team".

"One team spent a lot of their time talking to people running cloud datacentres with Windows, asking what's working, what's not working, what were their priorities" — Jeffrey Snover

"We got out of our cubicles and we talked to customers," he said, explaining that Microsoft wanted to know what businesses were looking for in an operating system. "One team spent a lot of their time talking to people running cloud datacentres with Windows, asking what's working, what's not working, what were their priorities."

The second part — retooling the development platform — meant Microsoft's team focused on creating new code management and development tools. This called for "great code check-in, great quality metrics, building the unit test frameworks that would be needed. Really beefing up our engineering experience", Snover said.

While no one on the team was working on production code, that didn't mean no one was writing code. Engineers used the year to try out new ideas and new technologies, familiarising themselves with the techniques and some of the tools they'd need to use when Windows Server 2012 development began — including spending time with new hardware.

Handling storage

Drawing on what customers had told them, the Windows Server development team identified the main things they had to take into account in their next release. Perhaps the most important was to try to improve the way the server OS worked with storage, to help IT departments manage it more effectively and at lower cost, according to Snover.
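
One concrete example of that storage work is Storage Spaces, which pools commodity disks and then carves resilient volumes out of the pool, all driven from PowerShell. A minimal sketch (the pool, disk and volume names here are invented for illustration):

    # List the local disks that are eligible for pooling
    Get-PhysicalDisk -CanPool $true

    # Group them into a pool, then create a mirrored virtual disk using all available space
    New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName "Storage Spaces*" `
        -PhysicalDisks (Get-PhysicalDisk -CanPool $true)
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" `
        -ResiliencySettingName Mirror -UseMaximumSize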

[Image: Windows Server 2012 is meant to be the "definitive cloud operating system".]

Other key areas were automation, speed and virtualisation. Automation features had to be simplified and standardised, clients said, while better virtualisation support was needed for datacentre flexibility and business agility. As for speed, the focus was on raw performance and price/performance.
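
In Windows Server 2012 that standardised automation rests on PowerShell 3.0, where one script can be fanned out to many machines over remoting. A minimal sketch, using hypothetical server names:

    # Query the installed roles and features on several servers in a single remoting call
    Invoke-Command -ComputerName web01, web02, web03 -ScriptBlock {
        Get-WindowsFeature | Where-Object { $_.Installed }
    }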

Next, the team put together a set of features for the OS and came up with a list of high-level issues to tackle. The main insight they had was to treat Windows Server as a datacentre abstraction layer — they took the familiar concept of the hardware abstraction layer that had been part of Windows Server since the NT days and extended it to the entire datacentre.

This meant Windows Server 2012 needed to be able to manage and control not just compute and storage, but also networks with support for software-defined networking in a virtual switch and with tools for dynamically managing large numbers of IP addresses.
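
The Hyper-V extensible switch and the new IPAM role are the features behind those two requirements, and both are scriptable. For illustration only (the adapter, switch and VM names are placeholders):

    # Create an external virtual switch bound to a physical NIC,
    # keeping the management OS connected through it
    New-VMSwitch -Name "TenantSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

    # Apply a per-VM policy on the switch port, such as a bandwidth cap
    Set-VMNetworkAdapter -VMName "Tenant-VM01" -MaximumBandwidth 100MB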

Microsoft "needs a standards-based approach to manage the whole datacentre — everything in it — with no lock-in", Snover said.

Re-engineering the OS

As with Windows 8 on the desktop, the software maker saw Windows Server 2012 as an opportunity to re-engineer the OS for the latest hardware, he added.

Processors are now uniformly multicore, so applications need to take advantage of the CPU and memory architectures in modern servers, he argued. That meant the development team had to focus on improving support for NUMA (Non-Uniform Memory Access) — seen as essential for improving virtualisation performance, as it will allow Windows Server 2012 and Hyper-V to treat servers as a compute fabric, automating memory usage.

"Getting NUMA right is really hard. So we did a ton of analysis, test, measurements and tweaking, which gave us phenomenal NUMA scaling as a result," Snover said.

One thing the team held in mind was the idea of continuous availability — basically, bringing cloud design to the datacentre. Continuous availability uses compute, storage and network fabrics to keep business systems running, even when applications, storage, and infrastructure fail. This changes the way servers and datacentres are designed, according to the Microsoft distinguished engineer.

To do this, Microsoft took what Snover called "a very engineered approach to resilience — looking at how it can be delivered for single nodes, multi-node clusters and even across multiple sites".
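
On Windows Server, failover clustering is the main vehicle for that resilience, and in the 2012 release it can be built entirely from PowerShell. A bare-bones sketch, with invented node names and addresses:

    # Form a two-node cluster, then add a continuously available file server role
    New-Cluster -Name "FS-Cluster" -Node "node1", "node2" -StaticAddress "192.168.1.50"
    Add-ClusterScaleOutFileServerRole -Name "SOFS01"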

Snover described the approach the team took as "walking up the stack". That meant making changes in the file system and kernel, including developing a whole new resilient file system, called ReFS.
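
Creating a ReFS volume uses the same tooling as NTFS; a one-line sketch, with a placeholder drive letter and label:

    # Format a data volume with the Resilient File System instead of NTFS
    Format-Volume -DriveLetter E -FileSystem ReFS -NewFileSystemLabel "Archive"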

At the kernel level, Microsoft changed the way data is flushed to disk, because the enterprise shift to commodity hardware means businesses are increasingly running on cheaper, consumer-grade storage. The result is the ability to scan for NTFS problems while a volume stays online and repair them without requiring a reboot, taking the disk offline for only fractions of a second.
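
That online-repair behaviour surfaces in the new storage cmdlets as separate scan and spot-fix steps; a sketch, assuming a data volume mounted as D:

    # Scan for corruption while the volume stays online
    Repair-Volume -DriveLetter D -Scan

    # Fix only the corruptions logged by the scan; the volume is offline only briefly
    Repair-Volume -DriveLetter D -SpotFix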

Tackling BYOD

As well as factoring in private clouds, the engineers tackled BYOD (Bring Your Own Device) policies. Unmanaged devices are now part of most corporate networks, so there needed to be a shift from application and device management to user and information management in Windows Server, Snover said. That meant building new features into the OS to make sure it could scale and cope with the explosion in data.

The resulting Dynamic Access Control added rules that can be applied automatically, locking down access based on roles, groups and user IDs.
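
Under the hood those rules are Active Directory objects: claim types, central access rules and central access policies. A skeletal sketch (the names are invented, and the conditions and ACLs that give a rule real effect would be supplied through additional parameters such as -ResourceCondition):

    # A claim sourced from the AD 'department' attribute
    New-ADClaimType -DisplayName "Department" -SourceAttribute "department"

    # An empty central access rule, wrapped in a policy that file servers can be pointed at
    New-ADCentralAccessRule -Name "Finance Data Rule"
    New-ADCentralAccessPolicy -Name "Finance Access Policy"
    Add-ADCentralAccessPolicyMember -Identity "Finance Access Policy" -Members "Finance Data Rule"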

Workers also now expect their business tools to be as usable as their consumer devices. Microsoft worked on Windows Server's VDI support to try to meet these expectations, Snover said.
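
Standing up the underlying Remote Desktop deployment is itself a scripted job in Windows Server 2012, via the RemoteDesktop module. A minimal virtual-desktop deployment sketch, with hypothetical host names:

    # Connection Broker, Web Access and a Hyper-V virtualisation host for pooled desktops
    New-RDVirtualDesktopDeployment -ConnectionBroker "rdcb01.contoso.test" `
        -WebAccessServer "rdweb01.contoso.test" `
        -VirtualizationHost "hv01.contoso.test"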

Windows Server 2008 R2 introduced RemoteFX, which brought hardware-accelerated graphics and video effects to virtual desktops over the Remote Desktop Protocol. However, it required additional hardware: servers had to be fitted with desktop-class graphics cards.

New codecs

That changed in Windows Server 2012, which now has a software GPU. There were also improvements to the RDP protocol, according to Snover.

"We're using a lot of technology from Microsoft Research," he said. "We're using different codecs for different parts of the screen — for text, for video".

Those new codecs are meant to make it easier to deliver virtual desktops and remote applications to employees working at home or on the road.

"Efficiency is a lot better with the new codecs. You can get a lot of efficiencies across a WAN as well as a LAN," Snover said.

With Windows Server 2012 now available for download, it's the end of the journey for Microsoft's development teams — but the start of the journey for IT departments around the world as they plan server and datacentre upgrades.

With support from server, storage, and networking vendors — and the ability to buy preconfigured reference architectures — Microsoft is describing this Windows Server release as "delivering the cloud, in a box". It's going to be interesting to watch how it gets deployed, and how IT teams use it to approach key issues like BYOD and private cloud.

Topics: Windows, Cloud, Microsoft, Operating Systems, Servers

About Simon Bisson

Simon Bisson is a freelance technology journalist. He specialises in architecture and enterprise IT. He ran one of the UK's first national ISPs and moved to writing around the time of the collapse of the first dotcom boom. He still writes code.

Talkback

  • Suspicious me.

    OK call me a Luddite but I can't get away from the uneasy feeling that 'The Cloud' is little more than a clever device for getting unsuspecting people to put all their sensitive data somewhere safe where anybody with the right tools can access it at their leisure.
    I like my data under lock and key on my own premises - plus offsite backup OK? - and available to nobody without a Warrant or Court Order. There are way too many 'gotcha's and back doors' around for me to feel secure with anything left on-line.
    Radio Wales
    • I think you're right to be concerned

      This is a new approach that needs rethinking how we do things. I'd rather let others work out the bugs first.
      happyharry_z
    • mehhh....

      tin foil hat much?
      microsoft_ecarsella
    • it does not seem

      you understand the topic of the article. 'Cloud' is word to describe the level of abstruction. You do not need to know on what server/IP your data is. The servers may as well be 'on premises'; you do not need to know in order to have access.
      To that extent MS built tools to make it work. This has nothing to do with a 'lock and key'.
      Besides, what makes you think that your lock is any better than the cloud's?
      ForeverSPb
      • Abstruction

        "'Cloud' is word to describe the level of abstruction."
        Perfectly put!
        "Abstruction" being the intersection of "abstraction" and "obstruction".
        That's what cloud computing is all about!
        Droog
      • Kids, kids, kids . . .

        Come on, let's stop getting bogged down on semantics. Cloud computing is nothing but another type of virtualization. And virtualization was first introduced back on the mainframe.

        IMHO, I think we all became a bit too enamoured with our workstations and PCs, the 'I'm king of this domain' thought process, if you will. We've had complete control over these little personal fiefdoms, and now view the cloud as something that may threaten to take that away.
        dinomutt
        • Old guys fear the cloud

          Typical. The old guys were paranoid of PCs; I remember them when I got out of college. I remember telling them all to sell their IBM stock and buy Microsoft stock. lol. The cloud is inevitable. It's insanely inefficient for companies to host data centers all over the world. The only people afraid of the cloud are techies that do not understand it, don't like change and are trying to put their head in the sand. I like the tin foil hat reference above. It's perfect.
          butter44
  • Cloud is more marketing than technology

    The data center technology doesn't need much to be "cloud". It's quite simply a way for organizations to get past the dirty word "outsourcing" for their data centers. The new thing is that businesses no longer have to make a "capital investment" to have the service. Instead they have a contract for services and the feel-good of thinking they are more secure because it's off-site. Hopefully that off-site isn't also outside the USA - where a whole different set of rules apply.
    hull@...
    • Not even close...

      You better stick to the conspiracy theories about the moon. You know nothing about the cloud. You've obviously never tried to serve data or sites across multiple continents. The cloud does this seamlessly without in-house replication, maintenance or disaster recovery. All of which are very, very expensive.
      butter44