Geek Sheet: Virtualizing Free Linux Distributions in Windows Server 2008 R2


Summary: It's been a while since we've had a hardcore Geek Sheet installment, and I promise that this one will be a real winner. Some of you may be aware that the updated Hyper-V bare-metal hypervisor virtualization layer in Microsoft's upcoming Windows Server 2008 R2 (which is due to be released August 14th to MSDN and TechNet customers) now has support for SUSE Linux Enterprise Server 11 (SLES) and Red Hat Enterprise Linux 5.3.


It's been a while since we've had a hardcore Geek Sheet installment, and I promise that this one will be a real winner.

Some of you may be aware that the updated Hyper-V bare-metal hypervisor virtualization layer in Microsoft's upcoming Windows Server 2008 R2 (which is due to be released August 14th to MSDN and TechNet customers) now has support for SUSE Linux Enterprise Server 11 (SLES) and Red Hat Enterprise Linux 5.3 (RHEL). Additionally, Linux support and performance have greatly improved over the initial Hyper-V release. Microsoft also recently released its Hyper-V Linux Integration Components (LinuxICs) under the GPLv2 open source license.

The LinuxICs for Hyper-V, which are in Release Candidate status and are available for download from Microsoft's Connect site, provide synthetic device drivers that enhance I/O and networking performance when Linux OSes are virtualized under Hyper-V. The source code for the LinuxICs was accepted into the Linux Driver Project and should become part of the Linux kernel within the next two releases and code merges -- kernel 2.6.32 is expected to be the version in which they are integrated, and all Linux distributions using that kernel code base going forward should be Hyper-V enabled out of the box. Yes, you heard that correctly: Microsoft is now an official Linux kernel contributor.

Download: Windows Server 2008 R2 with Hyper-V Release Candidate

Download: Hyper-V Server 2008 R2 Release Candidate

Podcast: Frugal Friday with Jeff Woolsey, Product Manager for Microsoft Virtualization

However, until that code merge occurs, only Red Hat Enterprise Linux and SUSE Linux Enterprise Server are officially supported by Microsoft as Hyper-V ready for Server 2008 R2. Both RHEL and SLES are commercial Linux distributions and cost money for updates and maintenance. However, that does not mean that free Linux distributions cannot run fully optimized, with synthetic driver support, in Hyper-V today. They most certainly can, and you can take advantage of VMs running free Linux distributions on Hyper-V right away.

Over the last week or so, I've been putting the release code of Windows Server 2008 R2 as well as the free Hyper-V Server 2008 R2 release candidate through their paces, running a combination of both Windows and Linux virtual machines on them. The free Linux distributions I have had the most success running the Hyper-V LinuxICs on are CentOS 5.3, Scientific Linux 5.3, and openSUSE 11.1. In my limited testing, I only used 64-bit versions, because Windows Server 2008 R2 is 64-bit only and I wanted to take full advantage of the processing capability and native 64-bit virtualization of Hyper-V. However, the LinuxICs should also install fine on the 32-bit versions of these systems.

CentOS 5.3 (foreground) and Scientific Linux (background) running fully paravirtualized in Hyper-V in Windows Server 2008 R2.

CentOS 5.3 and Scientific Linux 5.3 are both source code clones of Red Hat Enterprise Linux, so there are only minor differences in how the LinuxICs are installed on them compared to how it is done in RHEL. openSUSE 11.1's installation procedure is also very similar to SLES's, but, as with the other two, there are some minor changes. These small differences were learned through a good number of hours of troubleshooting, so as long as you follow these steps, you won't run into the pitfalls I ran into.

Building your Linux VM in Hyper-V

First you'll want to build your VM. Using the Hyper-V Manager, select New > Virtual Machine from the "Actions" menu on the right. You'll be presented with this initial dialog wizard:

Here you'll specify the name of your Virtual Machine and where you'd like it stored on your server. I created a new volume specifically for storing my virtual machines and my ISO files, the V: drive.

After clicking "Next" you'll be asked for the amount of memory to assign. For a basic Linux server VM, 512-768MB of RAM is certainly plenty, but if you're going to use the GUI features, 1GB or more is recommended. On the following screen, you'll be asked to configure networking and to pick a network interface to bridge to. During the setup of the Hyper-V role, at least one network adapter should be bridged to the LAN for the virtual network switch.

Following the Network screen, you'll be asked how large your virtual hard disk (VHD file) should be. For CentOS, Scientific Linux and openSUSE server use, I'd recommend 8GB-20GB of space for the VHD depending on the usage role (Apache/MySQL/PHP, Java, Ruby on Rails applications, etc.).

On the final configuration screen you'll be asked to point to your install media. You can either install from a physical CD or DVD, or from an ISO file. Once you've chosen your desired install media, click on Finish to create the VM.


Before powering up the VM, we need to do a bit of tweaking. During the install process we're going to want actual IP network connectivity, and to get that, we need to add a second network adapter for "Legacy" connectivity. By default, Hyper-V installs a synthetic network card that Linux cannot see until the LinuxICs are installed. The "Legacy" network adapter lets Linux connect to the LAN at less-than-optimal performance, but it still functions. If you have an older version of Linux or another Linux OS you'd like to use in Hyper-V without the Integration Components, this is how networking can be made to work.

In the Hyper-V manager, right click on the name of your newly created VM and choose "Settings".

In the Add Hardware window choose Legacy Network Adapter and click OK.

Select a virtual network connection and click OK. We're now ready to boot and install our Linux OS.

You'll notice in these last two screen shots I only have a single virtual CPU (vCPU) selected. That is because the current version of the LinuxICs only officially supports uniprocessor guests. While I have run SMP Linux guests with the LinuxICs and have had the additional vCPUs detected and running, I have had some stability issues during the LinuxIC install process, so at the very least, don't enable those additional vCPUs until you've verified the LinuxICs are running correctly.

CentOS 5.3 and Scientific Linux 5.3

Both CentOS and Scientific Linux have similar install processes to that of RHEL, but they differ very slightly.

For RHEL clones in Hyper-V, I'm partial to Scientific Linux because it's easier to lay down the base package support needed to install the dependencies required for the LinuxICs. But once installed, both OSes are more or less the same in terms of capability and software support.

With both of these OSes I prefer to use the text-based installer rather than the GUI, because until we install a special mouse driver later on, we won't have use of the mouse during the installation. Otherwise, you'll need to use the TAB key to bounce around the GUI and it gets somewhat cumbersome.

After booting your CentOS or Scientific Linux VM for the first time, type "linux text" at the boot prompt and press ENTER.

If you are familiar with a standard RHEL installation at this point, there is very little difference until you get to the software selection screen.

In CentOS, you want to select, at minimum, "Desktop - Gnome" and "Server - GUI", check the "Customize software selection" box, then choose OK using the TAB key and press ENTER.

In Scientific Linux you'll also see a choice for "Software Development", which you will need to select as well. That's all you need to make the LinuxICs work.

In CentOS, on the next screen, you will need to select "Development tools" and "Development Libraries". Add any additional packages you want, select OK with the TAB key and then press ENTER.

On both CentOS and Scientific Linux, you'll also see a choice for "Virtualization". Do NOT choose this. Unlike with the previous release of Hyper-V, virtualized Linux guests in Server 2008 R2 use a regular Linux kernel and not a Xen kernel.

After Package Selection the install proceeds normally, and when the install completes, you'll be prompted to reboot the system and eject the media from the drives.

After first boot the Anaconda installer program will ask you which items you want to modify, such as firewall and network configuration. After those items are complete, you should be able to log in as root with a text prompt.

Here is another key step that differentiates a regular RHEL install on Hyper-V from a CentOS or Scientific Linux install:

Verify that you have network connectivity by issuing an "ifconfig eth0". If you get an IP address from your router, you're good to go.

From the root command prompt, enter "yum update" and press ENTER. The yum utility will connect to the Internet repository for CentOS/Scientific Linux and look for the latest versions of the packages that have been installed on your system. It will then ask you if you want to update them. Answer "y" to confirm the update, and also answer "y" to confirm the GPG keyring import when prompted.
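Put together, the check-and-update sequence from the root prompt looks roughly like this (the exact package list and prompts will vary with your mirror and install selections):

    # confirm the Legacy adapter picked up an address from your router
    ifconfig eth0

    # pull down all available updates, including the newer kernel
    yum update
    # answer "y" to the update confirmation and to the GPG keyring import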


Depending on the speed of your Internet connection, the package update could take several minutes or as much as a half hour. This step is critical because the LinuxICs are built against the most current Red Hat kernel code, and by default the CentOS 5.3 and Scientific Linux 5.3 install media use RTM Red Hat Enterprise Linux source code, which has not been updated.

This process also installs an updated Linux kernel, so we will need to reboot the system in order to get access to it. It should be noted that every time you update your kernel (which happens periodically with major fix releases) you will need to re-run the LinuxIC installer script.

Once you get the "Complete!" message and are returned to the root bash prompt, issue a "reboot" command and hit ENTER. Log back into your system as root.

We're now ready to install the Linux Integration Components. You'll want to refer to the README file accompanying them and follow the instructions for RHEL. This entails mounting the LinuxIC.iso and copying its contents over to /opt/linuxic. External media can be mounted using the Media menu in the Virtual Machine Connection console.
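As a rough sketch (assuming the ISO is attached through the Media menu and appears as /dev/cdrom inside the VM; your device name may differ), the mount-and-copy step looks something like this:

    # mount the Integration Components ISO attached to the virtual DVD drive
    mkdir -p /mnt/cdrom /opt/linuxic
    mount /dev/cdrom /mnt/cdrom

    # copy its contents to a working directory on the VM, then unmount
    cp -R /mnt/cdrom/* /opt/linuxic
    umount /mnt/cdrom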

Download: Linux Integration Components for Microsoft Hyper-V

Before installing the LinuxICs, verify that you have the prerequisites installed by issuing a "yum install kernel-devel" and a "yum install gcc". If you already have them installed, you're good to go. If not, yum will grab them over the Internet.

To install the Linux Integration Components, issue a "./setup.pl drivers" from within the /opt/linuxic directory or from whatever directory you have copied them to. Once the install program has confirmed the drivers have installed, I like to shut down the system, remove the Legacy Ethernet adapter from the VM settings, and then reboot to verify that the OS can boot cleanly with the new modules.
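Putting the prerequisites and the driver install together, the whole step boils down to something like this (yum will pull in any additional dependencies on its own):

    # prerequisites for building the synthetic drivers
    yum install kernel-devel gcc

    # build and install the Integration Components
    cd /opt/linuxic
    ./setup.pl drivers

    # then shut down, remove the Legacy adapter in the VM settings, and boot again
    shutdown -h now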

To verify that the Integration Components are working, issue a "/sbin/lsmod | grep vsc" to display the status of the Hyper-V kernel modules. To verify that the Synthetic Ethernet adapter is working, issue an "ifconfig seth0".
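In other words, after the reboot you can confirm both with:

    # the Hyper-V synthetic driver modules should be listed
    /sbin/lsmod | grep vsc

    # the synthetic NIC appears as seth0 rather than eth0
    ifconfig seth0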

At this point, if you're content using CentOS or Scientific Linux as a server with command-line support, you're all done; you can now clone your Hyper-V-integrated VMs to your heart's content and customize the configuration as needed. However, if you want to be able to use the GNOME GUI, you will want to download an additional driver for Mouse Integration, which is currently provided by Citrix.

Download: Citrix Project Satori (Mouse Support for Linux under Hyper-V)

As with the Linux Integration Components you will want to mount the ISO, and copy its contents over to a directory on the VM. I like to use /opt/mousedrv.

Prior to running the ./setup.pl script in this directory, you will need the "xorg-x11-server-sdk" package installed. To do this, from the root bash prompt issue a "yum install xorg-x11-server-sdk" and hit the ENTER key. This will install that package as well as several other dependent packages.

The ./setup.pl script installs the mouse integration support. Once it is finished, you should be able to see the mouse cursor when you move it within the Linux console window. To bring up the GNOME GUI, issue a "startx" at the bash prompt.
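Assuming you copied the Satori ISO contents to /opt/mousedrv as described above, the whole sequence is roughly:

    # prerequisite for building the mouse driver against the X.org server
    yum install xorg-x11-server-sdk

    # build and install mouse integration support
    cd /opt/mousedrv
    ./setup.pl

    # bring up the GNOME desktop
    startx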

CentOS Linux mouse integration with Hyper-V.

Scientific Linux 5.3 with Mouse Integration.


openSUSE 11.1

As with the two RHEL clones, openSUSE 11.1 works much like its enterprise sibling, SLES, when integrated with Hyper-V. You'll need to build your VM as specified in the first step, with 1 vCPU and with a Legacy Network Adapter. To navigate around the GUI installer you will need to use the TAB and arrow keys and the spacebar to make selections.

During the openSUSE 11.1 installation you will need to select additional packages for "Base Development" and "Kernel Development". Again, do NOT install any virtualization components.

Prior to final installation confirmation, you should see the Base Development and Linux Kernel Development packages listed.

After openSUSE installs, you will need to enter a text mode prompt. You can do this by pressing the key combination of CTRL-ALT-F1.

Once you've logged in as root, issue a "zypper update" from the prompt. Like a yum update during the CentOS/Scientific Linux install, this will download a large number of updates and fixes, and it is required to make the Integration Components function. After the zypper update, you'll want to reboot the system.
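From the text console, that step is simply the following (expect a large download, and a reboot afterward):

    # apply all available updates, including the kernel, then restart
    zypper update
    reboot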

Once you have rebooted into the system, follow the instructions in the README for the Linux Integration Components for SLES. Per the instructions, you will need to modify the /etc/fstab file for the file system mountpoints prior to rebooting the system.

To verify that the components are functioning, do the same thing as with the CentOS/Scientific installation:
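That is, run the same two checks as before from a root prompt:

    /sbin/lsmod | grep vsc
    ifconfig seth0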

LinuxICs functioning in openSUSE 11.1

Currently, the Citrix mouse driver does not function in openSUSE, so if you want to use the graphical console, I recommend installing a VNC or RDP server such as xrdp. Microsoft is looking into this issue, and hopefully I will have an update for you in a few weeks.

One thing that I find particularly nice about openSUSE is that you can build a JEOS (Just Enough OS) distribution using Novell's SUSE Studio. This is great for creating simple virtual appliances for web serving, security, or whatever simple functions you need Linux to do. I'm currently in the process of creating a simple ISO automated installer that installs openSUSE 11.1 with the Hyper-V components working, all the development tools and many common server components out of the box. I didn't have it ready in time for this article but I'll let you know about it as soon as I am done.

Have you had any experience getting other Linux distributions working with the Hyper-V R2 Linux Integration Components? Talk Back and Let Me Know.

Topics: Operating Systems, Hardware, Linux, Networking, Open Source, Servers, Software, Windows

About

Jason Perlow, Sr. Technology Editor at ZDNet, is a technologist with over two decades of experience integrating large heterogeneous multi-vendor computing environments in Fortune 500 companies. Jason is currently a Partner Technology Strategist with Microsoft Corp. His expressed views do not necessarily represent those of his employer.


Talkback

46 comments
  • cool.......:)

    nt
    straycat5678
  • For the same reasons...

    You'd do it in VMware. The difference is that this solution is free. If you want live migration or anything else VMware does, you'll have to pay for it. Hyper-V is an excellent hypervisor.
    jperlow
    • Well having Windows as a guest on a Linux host maybe...

      but the other way around...no thank you!
      ­ 
      • Would you run Linux on a Xen host? Or VMWare ESX?

        Because this isn't any different. You aren't "running" under Windows. You're running on a bare-metal hypervisor. Windows is just the "privileged guest" under the hypervisor; it runs in Domain 0, paravirtualized. Research the architecture before making statements like that.
        jperlow
        • Not better technology

          Hyper-V was designed to share similar features with Xen's architecture.

          Compared with Xen's management tools, Hyper-V's are much more robust. Xen may be open source, but to hand it to an IT staff to manage you'll need additional tools -- such as the commercial offerings for XenServer -- so you'll still need to spend money, open source or not.
          jperlow
          • Haven't you learned...

            not to argue with people who have the mindset that proprietary software is lacking because it is proprietary, or that FOSS is lacking because it is FOSS. Anyone with this mindset is truly half the IT person they could be and can't possibly carry on an intelligent discussion, because they have ruled out part of the possible scenarios without even considering them. That is making a decision based on nothing factual. Both flavors can be quality software, and both have their place in a top-notch IT environment.

            I personally would choose to run SuSE and Xen and Windows in the virtualized environment. But in no way because I think MS's proprietary offering is lacking, and especially not simply because it is proprietary.
            bjbrock
          • Is it necessary to spend money?

            I would think most IT people could get off to a good start on Xen without buying tools. Provided they are prepared to read some basics.
            And, could you add some ideas as to how or why Hyper-V tools are "more robust"?
            peter_erskine@...
        • Are you suggesting....

          That you can install Xen with no O/S installed and THEN install guests?
          handydan918
          • Xen's architecture

            and Hyper-V's architecture are the same. You
            are using a bare metal hypervisor with a
            priveleged "parent" guest OS to act as the
            driver passthru for storage and networking. But
            you are still not running guests on that OS.

            In the case of XenServer from Citrix, which is
            also a free product, you are using a heavily
            stripped-down Linux as the parent domain. So
            you are in effect installing it with almost no
            OS. Hyper-V server can be installed the same
            way, with no GUI. The Windows Server kernel is
            being used as a driver passthru privileged
            guest.

            All of this is well documented on both the Xen
            and Hyper-V sides.
            jperlow
          • @jperlow

            If I understand you correctly, then MS is using the Windows Server kernel as the backbone of Hyper-V. If so, then has MS solved the problem with the heavy resource penalty when spawning processes in their latest kernel? Or is it still better to spawn threads with the Windows Kernel?
            If it's still better to spawn threads, then MS's current method of spawning threads within shared processes should make anybody pause and rethink using Hyper-V, because it will never be as stable as Linux.
            Axsimulate
          • @axsimulate: You have misunderstood both Windows kernel and hypervisor

            1) An operating system running under a type 1 hypervisor does not use the host operating system kernel at all. If you run Linux under Hyper-V, creating a Linux process does not create a process or thread in the host Windows system.

            2) Windows makes a distinction between processes and threads. A Windows process is not a unit of execution and is not subject to scheduling as in Linux. A Windows process provides boundaries such as memory, handles, security etc. But it *never* executes *anything*. Instead all processes own at least one thread. The responsibility is simply more "modularized" if you will: Boundaries are the responsibility of processes (merely a data structure), execution is the responsibility of threads. In Linux these are mixed in with each other.

            You are correct that *processes* are considered "expensive" in Windows. However, this is not a *problem* as you claim; it is merely a design decision. Threads are considered lightweight and comparatively cheap to create.

            Windows system libraries are all guaranteed thread-safe, and in general also 3rd party DLLs can be assumed to be thread-safe, unless otherwise stated.

            This is not the story on Linux where the "thread-safeness" of even many system libraries are unknown. This is the reason why e.g. PHP can execute threaded under Windows but you are (strongly) advised to execute PHP under the Apache MPM dispatcher if you are using LAMP.

            Either way, the Windows kernel threading/process model has absolutely no bearing on the "guest" operating system. That is a misunderstanding. The guest's processes are not mapped to processes/threads in the host OS.
            honeymonster
          • @honeymonster

            "1) An operating system running under a type 1 hypervisor does not use the host operating system kernel at all. If you run Linux under Hyper-V, creating a Linux process does not create a process or thread in the host Windows system."

            I guess I'm not understanding how Hyper-V works. Doesn't it need some sort of kernel to provide a foundation for the virtual machines?

            Doesn't VMware use a trimmed-down Linux kernel as a foundation?

            "2) Windows makes a distinction between processes and threads. A Windows process is not a unit of execution and is not subject to scheduling as in Linux. A Windows process provides boundaries such as memory, handles, security etc. But it *never* executes *anything*. Instead all processes own at least one thread. The responsibility is simply more "modularized" if you will: Boundaries are the responsibility of processes (merely a data structure), execution is the responsibility of threads. In Linux these are mixed in with eachother."

            Yes, I know that processes in Windows are different from those in Linux/Unix. What I'm saying is, take svchost.exe for example: many different apps piggyback on this process, such as networking. When they do this, it negates the whole purpose of memory protection, as each thread shares the same memory space as the host process and can encroach on another thread's. If one thread goes down, it can bring other threads down until the entire system goes down. Not only can this happen, but the way Windows handles threads, there is no easy way to identify an errant thread and restart it. And if the errant thread has a memory leak, restarting it doesn't free up the memory it has consumed, which ultimately requires a system restart to free the memory. Hence less stability.
            Axsimulate
          • @axsimulate: You are incorrect

            once the process hosting the thread is ended, all memory that may have been leaked is released back to the os. No os restart is needed.
            Also, one thread terminating unexpectedly generally has no effect on the system as a whole.
            svchost is a special process, designed to be a surrogate container for services. It's supposed to host services. No "Apps" piggyback onto this, only services.
            ITLeader
          • @ITLeader

            "once the process hosting the thread is ended, all memory that may have been leaked is released back to the os. No os restart is needed."

            Yes, that is what is *supposed* to happen. What happens when an errant thread gobbles up memory? Killing the offending thread doesn't always free up the memory it took.

            "Also, one thread terminating unexpectedly generally has no effect on the system as a whole."

            Yes, that is the way it is *supposed* to work. However, what happens when an errant thread starts to stomp all over the other threads? Each process may have its own protected memory space, but threads don't. Piggybacking defeats the whole purpose of protected memory.


            "svchost is a special process, designed to be a surrogate container for services. It's supposed to host services. No "Apps" piggyback onto this, only services."

            Yes, I understand that. However MS's own programming guidelines encourage the use of piggybacking on processes. svchost was nothing more than an example. And yes, svchost can, will and has brought the entire system down.
            Axsimulate
        • Bare Metal? I think not.

          Jason-

          Calling Hyper-V a "bare-metal" hypervisor is a stretch, at best. By
          definition, a bare-metal hypervisor does not contain a general
          purpose operating system. All drivers are fully contained and
          optimized in the kernel itself. This has important implications for
          performance and security, among other things.

          Note that the Wikipedia article that you referenced does not call
          Hyper-V a "bare-metal" hypervisor once. This is because it is not.
          http://en.wikipedia.org/wiki/Hyper-V

          Jason, I think you need to "Research the Architecture" a bit more
          yourself. You were rather rude in response to a simple question, "Why
          would anybody want to trust their virtualized infrastructure to
          Windows?" While Hyper-V is Microsoft's first true hypervisor, it is a
          first generation product and still highly dependent on Windows. If the
          "privileged guest" dies, so does your ability to access your VMs.

          Hyper-V is comparable to VMware ESX v1 which was released in 2001.
          This ESX release used RedHat as a privileged guest in a fashion similar
          to Hyper-V. When/if Microsoft manages to untangle Hyper-V from
          the Windows driver stack, but still leaves Windows to manage the
          kernel then it will have progressed to match VMware ESX v2 which was
          released in 2003.

          If Microsoft ever removes Windows altogether and simply ships a true
          "bare-metal" hypervisor then they will match what VMware completed
          in 2007 with ESXi v3. This is highly unlikely considering that
          Microsoft's stated virtualization strategy is to sell virtualization as a
          feature of the OS. This stands in stark opposition to the concept of an
          OS-free bare-metal hypervisor.

          The point is that Microsoft's hypervisor still lags 8 years behind
          VMware and is additionally dragged down by its reliance on Windows.
          I will agree that Microsoft's management tools are doing better than
          their hypervisor as the "live migration" feature that will ship this fall
          finally is in the same ballpark as VMware's VMotion. However, VMware
          shipped VMotion way back in 2003, so Microsoft is still way behind in
          this area as well.

          So, back to the original question, with Windows' long history of
          unreliability and with thousands of companies virtualizing their
          systems with non-Microsoft products to protect themselves from the
          risks that Windows poses, "Why would anybody want to trust their
          virtualized infrastructure to Windows?"
          A/UX
          • Hyper-V is bare metal, period.

            What you are alluding to are the differences between a monolithic kernel Type 1 hypervisor (ESX, z/VM) and a microkernel Type 1 hypervisor. Both approaches are equally valid. Xen and Hyper-V are microkernel hypervisors and are bare metal.
            jperlow
          • Type 1, yes. Bare-metal, up for debate

            Okay, I'll bite. :)

            I agree that Xen, ESX and Hyper-V are all Type 1 hypervisors, they all
            install directly onto the hardware. ESX used to be a microkernel-
            based solution in version 1, just as Hyper-V and Xen are today. ESX is
            now a monolithic solution. Customers demand this level of reliability,
            they do not want to trust their virtualized infrastructures to the
            Windows driver model. You have made my point quite well. :)

            Microsoft had to use a microkernel-based solution to get Hyper-V out
            the door, but they will have to follow the same evolutionary path from
            microkernel to monolithic to compete. The microkernel-based driver
            model does not make Hyper-V or Xen any less a Type 1 hypervisor,
            but it does make them less "bare-metal" as they have dependencies
            on non-kernel resources to accomplish essential tasks.

            We can keep splitting semantic hairs over the term "bare-metal" but
            that won't do anybody any good. The point is that there is an
            evolutionary curve and Hyper-V and Xen are at the bottom of it right
            now. Suggesting anything less ignores the progress that Microsoft in
            particular has made in the last year in even getting themselves into
            the game. It also misses the many milestones and hurdles that they
            will face going forward.

            Good discussion. Thanks for engaging. :)
            A/UX
    • You mean Windows Server 2008 is free ....

      Where do I get my copy :)
      mrlinux
      • "Hyper-V Server 2008 R2" is free

        It's a dedicated hypervisor product that uses only a bare-bones Windows kernel as a privileged guest. You can download it for free on Wednesday.
        jperlow
  • A robust and informative posting

    Thanks, Jason!

    This is a robust and informative posting. I will definitely try this out. Until now I've just run with the built-in drivers.

    Please ignore the Linux trolls - some people don't realize that the real world is heterogeneous. I haven't worked with a single client who didn't have a mix of Windows, 'Nix and Linux.
    honeymonster