Windows Server 2012 R2: A First Look

Summary: The latest iteration of Microsoft's server OS is now in beta. Is this the Cloud OS on cloud time?


Microsoft's 'Blue' wave of tools and technologies is more than just a user interface refresh. It's the next step on Microsoft's journey to becoming a devices and services company, and the first of what the company intends to be a regular series of updates to its core platform. At the heart of that core platform is Windows Server, the foundation for Azure and for what Microsoft calls its Cloud OS. All of which means that Windows Server 2012 R2 is much more than just another service pack: it adds new features that make it easier to build cloud applications and services in your datacentre, and to move them to and from Azure.

Microsoft recently released a preview build of Windows Server 2012 R2, and we installed it as a Hyper-V virtual machine running on a Windows Server 2012 system. Although that meant we were unable to look at some of the new Hyper-V features in R2, it gave us a good picture of what you'll need to know when setting up Microsoft's latest server.

Getting started
Installation is easy. Like Windows Server 2012, R2 has two installation options: a full GUI and the command-line-only Server Core. We were able to get up and running in just a few minutes, only needing to choose keyboard and language options. Server 2012 R2 boots to the Start Screen, although there's the option of booting straight to the desktop. You can also turn off hot corner support for Windows 8-style navigation, a feature that comes in handy if you're using a non-touch-enabled monitor or a remote desktop. Although we've found that a Surface RT's 1366-by-768 screen is just the right size for working with a remote Windows Server 2012 R2 session, not everyone has the option of using the touch features Microsoft has put into its server.
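
As with Server 2012, the choice between the full GUI and Server Core isn't final: the graphical shell is just a pair of removable features. A rough sketch of the conversion (the feature names below are as shipped in Server 2012; run this from an elevated PowerShell session):

    # Remove the graphical shell and management tools to drop down to Server Core
    Uninstall-WindowsFeature Server-Gui-Shell, Server-Gui-Mgmt-Infra -Restart

    # Reinstall them later if you change your mind
    Install-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart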

Windows Server 2012 R2 looks very like its predecessor. It's only when you start to delve into the details that you find just how many changes there are. (Screenshot: Simon Bisson/ZDNet)

When it comes to the user interface, there's little difference between Windows Server 2012 R2 and its predecessor. The real changes are under the surface, with significant enhancements to Hyper-V, Storage Spaces and to Active Directory. That shouldn't be surprising; Microsoft has been talking about Windows Server as a key component in its Cloud OS for some time, and those are the key features needed to build and run a cloud service on Windows Server.

Windows Server 2012 R2 is configured, like Server 2012, via Server Manager. It's a modern-style desktop application that gives you an overview of running services from its dashboard, as well as launching the familiar Windows Server management tools and handling role and feature installation. It's a useful one-stop shop for managing one or many servers, although for more complex tasks you'll want to use PowerShell (especially its new Desired State Configuration tools) or System Center 2012 R2. Desired State Configuration (DSC) is an extremely powerful tool that can help prevent configurations from drifting over time — something that's increasingly important in automatically managed service deployments, where users define the servers they want through self-service portals. With DSC you can define the managed elements of a server or a service, and ensure they always have the correct configuration.

The real heart of Windows Server 2012 R2 is PowerShell, and PowerShell 4.0's new Desired State Configuration tools make it easier to deploy and manage servers — and to keep them running just the way you want. (Screenshot: Simon Bisson/ZDNet)
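
To give a flavour of how DSC works, here's a minimal sketch (the node name, file path and page content are placeholders): a configuration is just a PowerShell script that declares the state you want, which you compile to a MOF file and then push to the target server.

    # Declare the desired state: IIS installed, with a known default page
    Configuration WebServerConfig {
        Node "SERVER01" {
            WindowsFeature IIS {
                Ensure = "Present"
                Name   = "Web-Server"
            }
            File DefaultPage {
                Ensure          = "Present"
                DestinationPath = "C:\inetpub\wwwroot\index.htm"
                Contents        = "<h1>Configured by DSC</h1>"
            }
        }
    }

    # Compile the configuration to a MOF file, then apply it
    WebServerConfig -OutputPath C:\DSC
    Start-DscConfiguration -Path C:\DSC -Wait -Verbose

Run the same configuration again and DSC corrects only what has drifted, which is what makes it suited to keeping fleets of self-service servers in line.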

Virtual all the way down
Even though we weren't able to set up Hyper-V on our test install, there are plenty of improvements to Microsoft's virtualisation platform. Perhaps the most obvious is an improved virtual disk format, with support for up to 64TB dynamic disks that can be resized on the fly. However, the most useful changes are to Hyper-V Replica, which lets you quickly set up a disaster recovery site and keeps it up to date. It's asynchronous, and replicas can be tested without forcing a failover to the recovery site — and while the replica keeps on being updated (in R2 you can set replication intervals from 30 seconds to 15 minutes, depending on server utilisation). The related Hyper-V Recovery Manager handles failover, monitoring primary servers and automatically switching load to a disaster recovery site, ensuring business continuity.
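
Replication is configured per virtual machine. A minimal sketch (the VM and server names are placeholders, and this assumes the replica host has already been authorised to receive replication):

    # Replicate a VM to a disaster recovery host every five minutes
    Enable-VMReplication -VMName "web01" -ReplicaServerName "dr01.contoso.com" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos -ReplicationFrequencySec 300

    # Send the initial copy, then keep an eye on replication health
    Start-VMInitialReplication -VMName "web01"
    Measure-VMReplication -VMName "web01"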


Microsoft has done a lot to improve how Hyper-V works in a private cloud, with features like shared VHDX files that make it easier to separate storage and compute, and to quickly migrate a virtual machine from one server to another. Live migration now works between hosts running different versions of Windows Server (so you can move virtual machines from Server 2012 to 2012 R2), and can use compression to significantly speed up transfers. There's also support for deduplication in virtual disks, which in conjunction with improved caching speeds up booting virtual machines — something that's key to delivering improved VDI performance to your end users.
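
Compression is exposed as a host-level performance option for live migration. A sketch, assuming the R2 Hyper-V cmdlets (run on each host taking part in migrations):

    # Allow live migrations on this host
    Enable-VMMigration

    # Compress migration traffic, trading CPU for network bandwidth
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

The other options are TCPIP for plain transfers and SMB, which can take advantage of SMB Direct (RDMA) network hardware.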

There's also improved support for virtual networking, with the Hyper-V Extensible Switch providing a framework for software-defined networking. Third parties, like Cisco, can add extensions to the base switch, linking it to control frameworks and adding additional features (like firewalls, or data-loss prevention filters), easing the connection between virtual and physical networks. If you're using Windows Server 2012 R2 to host multi-tenant applications, there's now also a multi-tenant VPN gateway to manage secure access to separate virtual networks in your datacentre. Managing those virtual IP addresses is also simplified with the addition of virtual address management to Windows Server's IP Address Management (IPAM) tooling.
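
Switch extensions are managed from PowerShell like any other Hyper-V object. A quick sketch, assuming a virtual switch named "External" (third-party extensions appear in the same list as the Microsoft ones that ship in the box):

    # List the extensions bound to a virtual switch
    Get-VMSwitchExtension -VMSwitchName "External"

    # Enable one of them, for example the built-in packet capture extension
    Enable-VMSwitchExtension -VMSwitchName "External" -Name "Microsoft NDIS Capture"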

Storage and BYOD
Storage Spaces, Microsoft's storage virtualisation technology, also gets an overhaul in Windows Server 2012 R2. Microsoft has added support for storage tiering, letting you mix traditional hard drives and solid-state disks. With storage tiers, you can identify slow and fast disks in a Storage Space, and Windows will move data between them automatically to give you the best performance — putting data that's accessed regularly on SSD, and data that's not needed so often on slower, cheaper hard drives.
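
Tiers are defined per storage pool, then combined when you create a virtual disk. A minimal sketch, assuming an existing pool named "Pool1" that contains both SSDs and hard drives:

    # Define an SSD tier and an HDD tier in the pool
    $ssd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "SSDTier" -MediaType SSD
    $hdd = New-StorageTier -StoragePoolFriendlyName "Pool1" -FriendlyName "HDDTier" -MediaType HDD

    # Create a tiered disk: 100GB of flash backed by 900GB of spinning disk
    New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk" `
        -StorageTiers $ssd, $hdd -StorageTierSizes 100GB, 900GB -ResiliencySettingName Simple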

The CPU, storage and networking used by a service composed of several virtual machines can now be monitored as a whole, by gathering all the resources it uses into a single resource pool. You can then get data on just how they're being used by a service (or by a tenant on a multi-tenant system).
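
Resource metering itself arrived in Server 2012, and the same cmdlets apply here. A sketch, with a placeholder VM name:

    # Start collecting CPU, memory, disk and network figures for a VM
    Enable-VMResourceMetering -VMName "web01"

    # Later, pull the aggregated usage report
    Measure-VM -VMName "web01"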

One of the most significant new features in Server 2012 R2 is Workplace Join. Best thought of as a lighter-weight, more granular alternative to full Active Directory membership, joining a workplace lets lightly managed devices (like a Windows RT tablet, or a user's own PC) access files and directories. Workplace Join creates an Active Directory entry for the device, and delivers an authentication certificate that can be used to give access to files on corporate servers — without having to join a domain. There's also an option for users to add a Workplace Joined device to Windows Intune or System Center 2012 R2 Configuration Manager, to provide additional management capabilities.

You'll find new server roles among the many familiar options. Want to install Work Folders to give your BYOD users access to their files? Click to expand the File and Storage Services options. (Screenshot: Simon Bisson/ZDNet)

Another closely related new feature, Work Folders, allows you to synchronise files and folders with users' devices. It's not as granular as the old offline files model, but Work Folders lets BYOD users with Workplace-joined devices get managed copies of their files on their PCs and devices. There's one flaw with Work Folders at present, though: there's no support for selective synchronisation. That means it'll try to copy the same files to a 32GB Surface RT as to a 256GB laptop — with no option of choosing which files you want on which device.
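
On the server side, Work Folders is a file services role with its own small set of cmdlets. A minimal sketch, with hypothetical share path and group names:

    # Install the Work Folders role service
    Install-WindowsFeature FS-SyncShareService

    # Publish a sync share for members of a security group
    New-SyncShare -Name "MarketingDocs" -Path "D:\SyncShares\Marketing" -User "CONTOSO\Marketing"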

With cloud at the heart of Server 2012 R2, it's interesting to see it coming with a range of service provider-friendly (and BYOD-friendly) features. One key new role is support for Server 2012 Essentials' features, giving system administrators an approach to shared storage and backup that will work on consumer devices, without requiring membership of a domain or a workplace.

AD for Identity
Active Directory remains at the heart of Windows Server 2012 R2, with an increased focus on managing user identities. That makes sense: with a shift to services running on public, private or hybrid clouds, single sign-on is increasingly important, and a consistent source of user identity is needed to manage those sign-ons. If you're working with Azure and Windows Server 2012 R2 (an increasingly likely scenario) you can use Active Directory Federation Services to link your on-premises AD to the cloud-hosted Azure Active Directory, or to virtualised AD servers running on your own private clouds.
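
AD FS is now installed like any other role (a sketch; actually configuring the federation farm and the trust to Azure Active Directory is a rather bigger job than one line):

    # Add the Active Directory Federation Services role
    Install-WindowsFeature ADFS-Federation -IncludeManagementTools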

The Cloud OS: Windows Server and System Center
Although Windows Server 2012 R2 can stand alone, it's now best considered in tandem with System Center 2012 R2. The two products were developed alongside each other, and System Center now serves as a control layer for the tools and services that run on Windows Server — especially around managing networks, virtual servers and applications. The two together are the basis of a software-defined datacentre that reaches from your server to the cloud (whether it's Azure or a hosting provider's Windows Servers).

That's probably the most important part of this release: an explicit relationship between management tools and server roles. If you want to use Windows Server 2012 R2 as a virtual machine host, you're going to need to run System Center 2012 R2 Virtual Machine Manager to get the most from your system — including automating live migrations and giving your users a portal to install and configure virtual machines. System Center is the automation layer on top of Windows Server, and it's essential if you're planning on building a private — or even a public — cloud on your Windows Server systems. You can get away without it if you're a small business, or running development servers, but if you don't want to spend your life configuring functions and features and tidying up after users, you're going to need to deploy the two products in tandem (and add in Windows Intune for managing Workplace-joined devices). Microsoft's joint development programme for the two tools makes sense, especially as what it calls the Cloud OS is really the delivery of the company's long-term Dynamic IT vision.

The initial verdict
So should you upgrade from Windows Server 2012 to the R2 release? Certainly it's a compelling release, with new Hyper-V and storage features making it a significant upgrade over last year's server. It's also surprisingly stable for a beta release of an OS that's been under development for less than a year, showing just how effective Microsoft's new sustained engineering and continuous development processes have become. But with pricing and a release date still unclear, any initial deployments should be purely to test out the new features. R2's new features are likely to be essential if you're moving from a traditional, application-centric datacentre to a service-centric private cloud, and if you want to automate as many of your server operations as possible.

If you don't install R2 this year, then there'll be another new server along this time next year (and possibly even sooner). That new cadence is a big change to how we do IT, and one that's going to take some getting used to — especially in more traditionally run IT organisations. There's a hierarchy to how Microsoft is shipping new server features: Azure gets them every three weeks or so, Office 365 and the rest of the company's cloud services get upgraded every three months, and the on-premises tools get an annual boost. This, then, is Microsoft's new approach to server development: shipping its Cloud OS on cloud time.


About Simon Bisson

Simon Bisson is a freelance technology journalist. He specialises in architecture and enterprise IT. He ran one of the UK's first national ISPs and moved to writing around the time of the collapse of the first dotcom boom. He still writes code.


Talkback

  • When you refer to price/cost

    do you mean for updates, for new installations or both?

    Surely MS won't be charging for an update for those who already have WS 2012!
    Wakemewhentrollsgone
    • not even sure you'll be able to pay for an update

If you had a server running Windows Server 2008 you had to buy a new license to get 2008 R2. I haven't seen anything that makes me think 2012 will be different.
      Jean-Pierre-
      • Companies with any weight attached to them...

        would buy server software with Software Assurance. You can even get standalone SA as an add-on to OEM software.
        Joe_Raby
        • off point.

It's not free if you need Software Assurance to get it.
          Jean-Pierre-
          • off point ^2

            ... posing as any company that expects zero costs in regard to SA over a three year period, for any software or OS. Not even the most low-cost Linux house would dare pretend to offer that. Surely you're speaking in jest. (Please don't look a gift-horse allowing you to 'save face', in the mouth).
            TechNickle
  • Except 2008 R2

Came out more than a year after 2008, whereas 2012 R2 will be about a year. But on that, I did read somewhere that there will be a charge.
    schultzycom
  • No mention of this?:

- UEFI support for VMs (and Secure Boot)

This gives you better support for migrating VMs between virtual and physical boot-from-VHD setups that are running on logo-certified hardware. The new support also gives you the ability to boot from virtual SCSI and non-legacy virtual NICs. If you're doing VDI configurations, you can also use BCD commands on the host UEFI-enabled OS to propagate boot entries into a VHD for use in a UEFI-supported VM. This is a major headache with MultiPoint Server 2012, because the wizards it uses for VDI don't work properly for Hyper-V on a UEFI system: the old Hyper-V doesn't include UEFI emulation, nor the UEFI BCD firmware updates in the VM.
    Joe_Raby
  • Can It Handle More Than 26 Drive Letters Yet?

    Or is that considered enough for anybody?
    ldo17
    • Quite sufficient

Considering that most other operating systems make do with single root mount points, having 26 can be seen as quite sufficient.

      You know, you *can* mount any volume anywhere you so desire.

      Server 2012 R2 sports storage spaces. As a single volume. You can mount anywhere.

      How does that buggy and not-quite-sufficient ZFS work for you on Linux? Or BtrFS - is that ready for production yet? No - I didn't think so, they are still work-in-progress. Server 2012 is production ready, robust, resilient and secure.

      Do you run a Linux server? Has it been compromised by Darkleech yet? It soon will be, since nobody can figure out how the servers are being compromised, but they *are* being infected at a steady rate. By someone with root privileges, mind you. You know what they call a server where somebody else has root? Total *pwnage*!
      honeymonster
      • re quote

        "Considering that most other operating systems make due with single root mount points, having 26 can be seen as quite sufficient."

        Not since about 1969. UNIX systems have had as many as you want. Even VMS allowed as many as you wanted.

        "You know, you *can* mount any volume anywhere you so desire."

        Finally you get into the 70s.

        "Server 2012 R2 sports storage spaces. As a single volume. You can mount anywhere."

        Linux has done that for about 15 years. Ever since LVM was added.

        "How does that buggy and not-quite-sufficient ZFS work for you on Linux? Or BtrFS - is that ready for production yet? No - I didn't think so, they are still work-in-progress. Server 2012 is production ready, robust, resilient and secure."

        Ask sun.. oops Oracle... After all, they own it. Btrfs is larger than NTFS, even ext4 is larger. And Linux has been ready for use for many years. That is why most supercomputer centers use it to handle the largest data storage requirements in the world.

        "Do you run a Linux server? Has it been compromised by Darkleech yet? It soon will be, since nobody can figure out how the servers are being compromised, but they *are* being infected at a steady rate. By someone with root privileges, mind you. You know what they call a server where somebody else has root? Total *pwnage*!"

        Depends entirely on the server. Many servers don't run Apache - and thus currently appear secured. Others do, but use SELinux to compartmentalize Apache - thus any other root vulnerability is limited to the capabilities of Apache... and not the root login, thus not "total *pwnage*". Those that don't should use VMs (even those that do use SELinux can do that, giving 4 levels of isolation). Doesn't mean there isn't a lot of idiots out there.

        How do you like having to defrag? Even if it is online, that slows your system down a LOT, and makes it more fragile to system crashes.... Speaking of crashes, do you still have to reboot the system monthly to prevent counter overflows from crashing the system? Or was that problem only limited to SQL servers?
        jessepollard
        • Stuck in the 70s

          >"You know, you *can* mount any volume anywhere you so desire."

          >Finally you get into the 70s.

Good thing Windows is not stuck in the 70s. Like the stupid, limited file permissions. How many groups was it you can grant file access to again? One? Modern operating systems are not restricted to access control that will fit in a 16-bit word. That's 70s thinking.

          >Ask sun.. oops Oracle... After all, they own it. Btrfs is larger than NTFS, even ext4 is
          >larger. And Linux has been ready for use for many years. That is why most
          >supercomputer centers use it to handle the largest data storage requirements in
          >the world.

Linux does not have anything like Storage Spaces. ZFS would almost get there. But it's not quite ready, is it? Linux has been ready for many years, I'll give you that. It's just that the file systems suck and are maturing too slowly.

          >Depends entirely on the server. Many servers don't run Apache - and thus currently
          >appear secured.

Operative word here: *appear*. You see, nobody has yet figured out how the attackers gain access. Linux servers with Apache have been compromised en masse. Linux servers with NGinx have been compromised en masse. Linux servers with different control panels have been compromised. Linux servers with a single role, a single non-root user have been compromised.

          And all of those servers are compromised *at root level*. Files owned by root are being changed, meaning that the attacker has root. On Linux that means game over.

          And the only common factor so far: Linux. Go figure.

          >Others do, but use SELinux to compartmentalize Apache - thus any
          >other root vulnerability is limited to the capabilities of Apache...

If the breach is in kernel space, no, SELinux is not going to save you. SELinux is good, and properly used it will amp up security a lot. But it is incredibly complex to set up and most administrators simply switch it off. Hence the mass exploitations of Linux servers.

          >"and not the root login, thus not "total *pwnage*".
          >Those that don't should use VMs (even those that do use SELinux can do that, giving 4 levels of isolation).

          Oh, Linux doesn't feel secure enough for you? Now you need to box it in a VM so that you can jettison it *when* it gets compromised?

          >Doesn't mean there isn't a lot of idiots out there.

          Indeed
          honeymonster
          • UNIX has had access control lists for quite some time.

            Look up "setfacl" and "getfacl".
            ye
          • Linux has ACLs as addons

Still a kludge, and many servers, especially web servers, are set up without ACLs. Compare that to Windows Server, which has had ACLs from the start of Windows NT. Proper *network*-aware ACLs, where users and groups are not coded in a limited 16 bits, which leads to all kinds of trouble when you try to network the disk and files.
            honeymonster
          • Re: Linux does not have anything like Storage Spaces

            Given Linux has LVM and per-process namespaces, not to mention pluggable filesystems and chroot, why would it need anything as limited as "Storage Spaces"?
            ldo17
          • Oh I dont know

but how about if I have a bunch of SSDs, really fast spinning disks and a lot of regular disks, and want to set them up so that 1. I make the most performant use of them, 2. I make the most efficient use of my storage, and 3. I'm able to add and remove disks online if they fail or I purchase new disks.

As I said, ZFS (when it becomes ready for production on Linux) will fit the bill. Anything else on Linux right now? Hmm, not so much.
            honeymonster
          • Re: Oh I dont know

            Yup, it appears you don't.
            ldo17
          • Your choice of setup is up to you.

            You can go for reliability... or performance.

            Go raid 10 if you want. Use them for cache...

Use Lustre or Gluster if you want and make storage pools.

            Both are available, right now. And both are faster than anything Windows has.
            jessepollard
          • part 1.

            "How many groups was it you can grant file access to again? one? Modern operating systems are not restricted to access control that will fit in a 16bit word. That's 70s thinking."

Several tens of thousands - as long as you keep adding to the ACL. And ACLs have been there since around 1998/1999, though not very heavily used.
            jessepollard
          • Oh yes, Linux ACLs

the kludge that was added on as an afterthought because Linux file permissions are officially not sufficient for government deployments.

Linux ACLs that still don't support proper inheritance. Linux ACLs that only apply to *files* and directories - not to other object types.

            Yes - they are easy to forget.
            honeymonster
Meets standards that Windows doesn't.

            And inheritance is available.

            What other objects of filesystems are there?

            They are not forgotten, just not as useful as you think.
            jessepollard