It's been a while since we've had a hardcore Geek Sheet installment, and I promise that this one will be a real winner.
Some of you may be aware that the updated Hyper-V bare-metal hypervisor in Microsoft's upcoming Windows Server 2008 R2 (which is due to be released August 14th to MSDN and TechNet customers) now has support for SUSE Linux Enterprise Server 11 (SLES) and Red Hat Enterprise Linux 5.3 (RHEL). Additionally, Linux support and performance have greatly improved over the initial Hyper-V release. Microsoft also recently released its Hyper-V Linux Integration Components (LinuxICs) under the GPLv2 open source license.
The LinuxICs for Hyper-V, which are in Release Candidate status and are available for download from Microsoft's Connect site, provide synthetic device drivers that enhance I/O and networking performance when Linux OSes are virtualized under Hyper-V. The source code for the LinuxICs was accepted into the Linux Driver Project and should become part of the Linux kernel within two subsequent releases and code merges -- 2.6.32 is expected to be the kernel they are integrated into, and all Linux distributions using that kernel code base going forward should be Hyper-V enabled out of the box. Yes, you heard that correctly: Microsoft is now an official Linux kernel contributor.
Download: Hyper-V Server 2008 R2 Release Candidate
Over the last week or so, I've been putting the release code of Windows Server 2008 R2 as well as the free Hyper-V Server 2008 R2 release candidate through their paces, running a combination of both Windows and Linux virtual machines on them. The free Linux distributions I have had the most success running the Hyper-V LinuxICs on are CentOS 5.3, Scientific Linux 5.3, and openSUSE 11.1. In my limited testing, I used only 64-bit versions, because Windows Server 2008 R2 is 64-bit only and I wanted to take full advantage of the processing capability and native 64-bit virtualization of Hyper-V. However, the LinuxICs should also install fine on the 32-bit versions of these systems.
CentOS 5.3 (foreground) and Scientific Linux (background) running fully paravirtualized in Hyper-V in Windows Server 2008 R2.
CentOS 5.3 and Scientific Linux 5.3 are both source code clones of Red Hat Enterprise Linux, so there are only minor differences in how the LinuxICs are installed on them compared to how it is done on RHEL. openSUSE 11.1's installation procedure is also very similar to SLES's, but like the other two, there are some minor changes. These small differences cost me a good number of hours of troubleshooting, so as long as you follow these steps, you won't run into the pitfalls I ran into.
Building your Linux VM in Hyper-V
First you'll want to build your VM. Using the Hyper-V Manager, select New > Virtual Machine from the "Actions" menu on the right. You'll be presented with this initial dialog wizard:
Here you'll specify the name of your Virtual Machine and where you'd like it stored on your server. I created a new volume specifically for storing my virtual machines and my ISO files, the V: drive.
After clicking "Next" you'll be asked for the amount of memory to assign. For a basic Linux server VM, 512-768MB of RAM is certainly plenty, but if you're going to use the GUI features, 1GB or more is recommended. On the following screen, you'll be asked to configure networking and to pick a network interface to bridge to. During the setup of the Hyper-V role, at least one network adapter should be bridged to the LAN for the virtual network switch.
Following the Network screen, you'll be asked how large your virtual hard disk (VHD file) should be. For CentOS, Scientific Linux and openSUSE server use, I'd recommend 8GB-20GB of space for the VHD, depending on the usage role (Apache/MySQL/PHP, Java, Ruby on Rails applications, etc.)
On the final configuration screen you'll be asked to point to your install media. You can either install from a physical CD or DVD, or from an ISO file. Once you've chosen your desired install media, click on Finish to create the VM.
Before powering up the VM, we need to do a bit of tweaking. During the install process we're going to want to have actual IP network connectivity, and to do that, we're going to need to add a second network adapter for "Legacy" connectivity. By default Hyper-V installs a synthetic network card that Linux cannot see until the LinuxICs are installed. The "Legacy" network adapter allows Linux to connect to the LAN at less than optimal performance but it still functions. If you have an older version of Linux or another Linux OS you'd like to use in Hyper-V without the Integration Components, this is how networking can be made to work.
In the Hyper-V manager, right click on the name of your newly created VM and choose "Settings".
In the Add Hardware window choose Legacy Network Adapter and click OK.
Select a virtual network connection and click OK. We're now ready to boot and install our Linux OS.
You'll notice in these last two screenshots I have only a single virtual CPU (vCPU) selected. That is because the current version of the LinuxICs only officially supports uniprocessor guests. While I have run SMP Linux guests with the LinuxICs and have had the additional vCPUs detected and running, I have had some stability issues with them during the LinuxIC install process, so at the very least, don't turn those additional vCPUs on until you've verified the LinuxICs are running correctly.
CentOS 5.3 and Scientific Linux 5.3
Both CentOS and Scientific Linux have install processes similar to that of RHEL, but they differ very slightly.
For RHEL clones in Hyper-V, I'm partial to Scientific Linux because it's easier to lay down the base package support needed to install the dependencies required for the LinuxICs. But once installed, both OSes are more or less the same in terms of capability and software support.
With both of these OSes I prefer to use the text-based installer rather than the GUI, because until we install a special mouse driver later on, we won't have use of the mouse during the installation. Otherwise, you'll need to use the TAB key to bounce around the GUI and it gets somewhat cumbersome.
After booting your CentOS or Scientific Linux VM for the first time, type "linux text" at the boot prompt and press ENTER.
If you are familiar with a standard RHEL installation, at this point there is very little difference until you get to the software selection screen.
In CentOS, you want to select, at a minimum, "Desktop - Gnome" and "Server - GUI" as well as the "Customize software selection" checkbox, then choose OK using the TAB key and press ENTER.
In Scientific Linux you'll also see the choice for "Software Development", which you will also need to select. That's all you need to make the LinuxICs work.
In CentOS, on the next screen you will need to select "Development tools" and "Development Libraries". Add any additional packages you want, select OK with the TAB key and then press ENTER.
On both CentOS and Scientific Linux, you'll also see the choice for "Virtualization". Do NOT choose this. Unlike the previous release of Hyper-V, virtualized Linux guests in Server 2008 R2 use a regular Linux kernel and not a Xen kernel.
After Package Selection the install proceeds normally, and when the install completes, you'll be prompted to reboot the system and eject the media from the drives.
After first boot the Anaconda installer program will ask you which items you want to modify, such as firewall and network configuration. After those items are complete, you should be able to log in as root with a text prompt.
Here is another key step that differentiates a regular RHEL install on Hyper-V from a CentOS or Scientific Linux install:
Verify that you have network connectivity by issuing an "ifconfig eth0". If you get an IP address from your router, you're good to go.
From the root command prompt, enter "yum update" and press ENTER. The yum utility will connect to the Internet repository for CentOS/Scientific Linux and look for the latest versions of the packages that have been installed on your system. It will then ask you if you want to update them. Answer "y" to confirm the update, and also answer "y" to confirm the GPG keyring import when prompted.
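Assuming the Legacy adapter shows up as eth0 (the usual case with a single adapter -- adjust if yours differs), the whole check-and-update sequence looks like this:

```shell
# Confirm the Legacy adapter picked up an IP address from the router
ifconfig eth0

# Fetch and apply all available package updates, including the newer
# kernel the LinuxICs are built against; -y answers the confirmation
# and GPG keyring prompts automatically
yum -y update
```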
Depending on the speed of your Internet connection, the package update could take several minutes or as much as a half hour. This step is critical because the LinuxICs are built against the most current Red Hat kernel code, and by default the CentOS 5.3 and Scientific Linux 5.3 install media use RTM Red Hat Enterprise Linux source code, which has not been updated.
This process also installs an updated Linux kernel, so we will need to reboot the system in order to get access to it. It should be noted that every time you update your kernel (which happens periodically with major fix releases) you will need to re-run the LinuxIC installer script.
Once you get the "Complete!" message and are returned to the root bash prompt issue a "reboot" command and hit ENTER. Log back into your system again as root.
We're now ready to install the Linux Integration Components. You'll want to refer to the README file accompanying them and follow the instructions for RHEL. This entails mounting the LinuxIC.iso and copying its contents over to /opt/linuxic. External media can be mounted using the Media menu in the Virtual Machine Connection console.
Before installing the LinuxICs, verify that you have the prerequisites installed by issuing a "yum install kernel-devel" and a "yum install gcc". If you already have them installed, you're good to go. If not, yum should grab them over the Internet.
To install the Linux Integration Components, issue a "./setup.pl drivers" from within the /opt/linuxic directory or from whatever directory you have copied them to. Once the install program has confirmed the drivers have installed, I like to shut down the system, remove the Legacy Ethernet adapter from the VM settings, and then reboot to verify that the OS can boot cleanly with the new modules.
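Put together, the install sequence looks roughly like the transcript below. The /dev/cdrom device node and /mnt/cdrom mountpoint are assumptions that match most default setups; /opt/linuxic is simply the directory the README suggests:

```shell
# Mount the LinuxIC ISO attached through the Media menu of the
# Virtual Machine Connection console
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom

# Copy the contents to writable storage -- the installer builds
# kernel modules and can't run from read-only media
mkdir -p /opt/linuxic
cp -R /mnt/cdrom/* /opt/linuxic

# Make sure the build prerequisites are in place
yum -y install kernel-devel gcc

# Build and install the synthetic drivers
cd /opt/linuxic
./setup.pl drivers
```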
To verify that the Integration Components are working, issue a "/sbin/lsmod | grep vsc" to display the status of the Hyper-V kernel modules. To verify that the synthetic Ethernet adapter is working, issue an "ifconfig seth0".
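A quick sanity check from the root prompt, assuming the driver install completed cleanly:

```shell
# The "vsc" (virtualization service client) modules should be listed
# if the Integration Components loaded correctly
/sbin/lsmod | grep vsc

# The synthetic NIC appears as seth0 rather than eth0
ifconfig seth0
```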
At this point, if you're content to use CentOS or Scientific Linux as a server with command-line support, you're all done; you can now clone your Hyper-V integrated VMs to your heart's content and customize the configuration as needed. However, if you want to be able to use the GNOME GUI, you will want to download an additional driver for mouse integration, which is currently provided by Citrix.
Download: Citrix Project Satori (Mouse Support for Linux under Hyper-V)
As with the Linux Integration Components you will want to mount the ISO, and copy its contents over to a directory on the VM. I like to use /opt/mousedrv.
Prior to running the ./setup.pl script in this directory, you will need the "xorg-x11-server-sdk" package installed. To do this, from the root bash prompt issue a "yum install xorg-x11-server-sdk" and hit the ENTER key. This will install that package as well as several other dependent packages.
The ./setup.pl script installs the mouse integration support. Once it is finished you should be able to see the mouse cursor when you move it within the Linux console window. To bring up the GNOME GUI, issue a "startx" at the bash prompt.
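The mouse driver steps, condensed into one transcript. /opt/mousedrv is just the directory I like to use, and /mnt/cdrom assumes the Satori ISO is already mounted there:

```shell
# Install the X server SDK the Satori installer needs to build against
yum -y install xorg-x11-server-sdk

# Copy the mounted ISO contents to a working directory and install
mkdir -p /opt/mousedrv
cp -R /mnt/cdrom/* /opt/mousedrv
cd /opt/mousedrv
./setup.pl

# Start GNOME and test pointer integration
startx
```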
CentOS Linux mouse integration with Hyper-V.
Scientific Linux 5.3 with Mouse Integration.
openSUSE 11.1
As with the two RHEL clones, openSUSE 11.1 works much like its enterprise sibling, SLES, when integrated with Hyper-V. You'll need to build your VM as specified in the first step, with 1 vCPU and with a Legacy Network Adapter. To navigate around the GUI installer you will need to use the TAB key, arrow keys and spacebar to make selections.
During the openSUSE 11.1 installation you will need to select additional packages for "Base Development" and "Kernel Development". Again, do NOT install any virtualization components.
Prior to final installation confirmation, you should see the Base Development and Linux Kernel Development packages listed.
After openSUSE installs, you will need to enter a text mode prompt. You can do this by pressing the key combination of CTRL-ALT-F1.
Once you've logged in as root, issue a "zypper update" from the prompt. Like a yum update during the CentOS/Scientific Linux install, this will download a large amount of updates and fixes and it is required to make the Integration Components function. After the zypper update, you'll want to reboot the system.
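The openSUSE equivalent of the CentOS update step, run as root from the text console:

```shell
# Refresh repositories and apply all available updates; this pulls
# in the newer kernel the Integration Components require
zypper update

# Boot into the updated kernel before installing the LinuxICs
reboot
```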
Once you have rebooted into the system, follow the instructions in the README for the Linux Integration Components for SLES. Per the instructions, you will need to modify the file system mountpoints in /etc/fstab prior to rebooting the system.
To verify that the components are functioning, do the same thing as with the CentOS/Scientific installation:
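The same verification commands from the CentOS/Scientific Linux section apply here:

```shell
# The Hyper-V "vsc" kernel modules should be listed
/sbin/lsmod | grep vsc

# The synthetic network adapter should be up as seth0
ifconfig seth0
```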
LinuxICs functioning in openSUSE 11.1
Currently, the Citrix mouse driver does not function in openSUSE, so if you want to use the graphical console, I recommend that you install a vnc or RDP server such as xrdp. Microsoft is looking into this issue and hopefully I will have an update for you in a few weeks.
One thing that I find particularly nice about openSUSE is that you can build a JEOS (Just Enough OS) distribution using Novell's SUSE Studio. This is great for creating simple virtual appliances for web serving, security, or whatever simple functions you need Linux to do. I'm currently in the process of creating a simple ISO automated installer that installs openSUSE 11.1 with the Hyper-V components working, all the development tools and many common server components out of the box. I didn't have it ready in time for this article but I'll let you know about it as soon as I am done.
Have you had any experience getting other Linux distributions working with the Hyper-V R2 Linux Integration Components? Talk Back and Let Me Know.