The desktop operating system as we know it isn't dead yet, but the writing is clearly on the wall. The rise of the tablet and of the cloud, both public and private, is changing the way IT departments think about desktop PCs and how they're managed and used. You'd think that would mean panic in the halls of Microsoft's Redmond campus – after all, the numbers show Windows is worth billions of dollars a quarter.
But Microsoft doesn't seem to be running round in circles, screaming or shouting, or even emulating a panicking Tim Brooke-Taylor singing "I'm a teapot". Instead it's been working to deliver an infrastructure that will support a future of virtual desktops with the features we expect from a dedicated desktop OS. The first piece of that infrastructure comes in less than two weeks on February 22nd, when Windows Server 2008 R2 SP1 (and Windows 7 SP1) can be downloaded.
We've written about SP1 before, when Microsoft first unveiled its Dynamic Memory and RemoteFX features at TechEd North America in New Orleans back in May 2010. They're the key to Microsoft's desktop virtualisation strategy, improving performance, user experience and server density alike.
RemoteFX is probably the most obvious to end users, as it lets virtual desktops use the features of a server GPU to provide the effects users have come to expect from a desktop. Using either software or hardware decoding, it lets you push high-quality desktops from central servers without worrying about users complaining about a degraded experience. Not only that, but you can access local USB devices from the remote desktop – whether you're using old PCs, thin clients or even zero-client devices. GPUs are exposed using a virtual driver and are shared over the Hyper-V VMBus, with the GPU, the CPU or a dedicated hardware encoder used to deliver compressed screens to client devices. This approach means there's no need to add driver support to the client images – it only needs to be handled by the base OS.
However, it's Dynamic Memory that's the most interesting, as it lets the Hyper-V virtual machine manager change the amount of memory allocated to virtual machines on the fly, based on what they're doing. There's support for both Vista and Windows 7 (and for recent server OSes as well). Microsoft is claiming a 40% improvement in image density with Dynamic Memory, meaning you'll be able to service 40% more client desktops from the same virtual machine host. Instead of allocating 1GB of RAM per Windows 7 desktop you'll be able to start with only 512MB, and then add memory as users start running applications.
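That density claim is easy to sanity-check with some back-of-the-envelope arithmetic. The sketch below uses the 1GB, 512MB and 40% figures from the paragraph above; the 32GB host size is a hypothetical example, not anything Microsoft has specified:

```python
host_ram_gb = 32          # hypothetical virtualisation host size

static_per_vm_gb = 1.0    # fixed 1GB allocation per Windows 7 desktop
dynamic_start_gb = 0.5    # Dynamic Memory starting allocation (512MB)

# With fixed allocations the host is simply carved into 1GB slices.
static_density = int(host_ram_gb / static_per_vm_gb)

# Desktops grow beyond the 512MB floor as applications launch, so the
# realistic gain is Microsoft's claimed ~40%, not the 2x the floor implies.
dynamic_density = int(static_density * 1.4)

print(static_density, dynamic_density)   # 32 desktops vs ~44 desktops
```

In other words, on the same hardware a 32-desktop host becomes a roughly 44-desktop host – which is where the extra server density comes from.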
There's no lag, either, as the system doesn't try to predict demand in advance: it starts small and allocates on top of that base as demand appears. We asked for more detail on how it cleans up memory when it stops being needed and got this reply:
"Dynamic Memory reclaim works on a global and local view of memory pressure. To that end there are two primary routines that kick in to hot-remove memory.
1. There is a lazy cleanup from a thread that removes memory from the VM based on a given timeframe.
2. Also taken into account is the memory pressure of the hypervisor and the priority that the VM has been given. This can override the time period of the lazy cleanup to remove memory quicker per VM according to its priority if the hypervisor is under memory pressure.
This Algorithm was fine-tuned over the development process for optimal performance."
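Microsoft hasn't published the algorithm itself, but the two routines described in that reply can be sketched in code. This is purely an illustration of the described behaviour, assuming plausible details – the class, the chunk size and the priority ordering are our inventions, not Hyper-V internals:

```python
class VM:
    """Toy stand-in for a guest's memory state (illustrative only)."""
    def __init__(self, name, priority, allocated_mb, demand_mb):
        self.name = name
        self.priority = priority          # higher = memory removed later
        self.allocated_mb = allocated_mb
        self.demand_mb = demand_mb        # what the guest currently needs

    def surplus_mb(self):
        return max(0, self.allocated_mb - self.demand_mb)

def lazy_cleanup_tick(vms, chunk_mb=64):
    """Routine 1: called from a background thread on a timer, trimming a
    small chunk of surplus memory from every VM."""
    for vm in vms:
        vm.allocated_mb -= min(chunk_mb, vm.surplus_mb())

def pressure_reclaim(vms, needed_mb):
    """Routine 2: when the host itself is under memory pressure, override
    the lazy timer and reclaim surplus at once, lowest priority first."""
    for vm in sorted(vms, key=lambda v: v.priority):
        take = min(vm.surplus_mb(), needed_mb)
        vm.allocated_mb -= take
        needed_mb -= take
        if needed_mb == 0:
            break
    return needed_mb   # shortfall left once all surplus is gone
```

The key point the quote makes is that routine 2 can pre-empt routine 1: a host-wide shortage removes memory immediately, in priority order, rather than waiting for the next lazy pass.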
Windows 7 SP1 isn't quite as important as Server 2008 R2 SP1, but it does provide features to help with both RemoteFX and Dynamic Memory. You don't need it to run a VDI infrastructure, but it certainly helps.
It's clear that with this release Microsoft has started thinking about a post-desktop future – and tools like System Center Virtual Machine Manager 2010 will help you deploy and manage this new virtual infrastructure, mixing Hyper-V for OS virtualisation with App-V for virtualised, sandboxed applications. A brave new world indeed.