Last night I wrote about thin clients, PC-over-IP, and desktop virtualization. These all have the potential to make system admins' lives easier, especially in education, where there often aren't many of us. They also have the potential to save money, although the upfront costs of servers, network infrastructure, software, and thin/zero clients can be considerable. There are a few ways to do this cheaply, though: one is easy (but less cheap), one is robust and interesting (and somewhat less cheap), and one is hard (but probably very cheap).
First, we can't ignore NComputing. NComputing has all sorts of slick products designed to take single PCs and share them among multiple users by leveraging simple virtualization technologies. They recently sent me both their X350 and their L130 devices to test. I'm working up a full review, but suffice it to say that for quite a low cost, the X series in particular gets you quite a bit of bang for your buck. Given a relatively powerful PC, you really can achieve desktop-level performance shared among four seats. An $800 PC (probably generous, but we're looking for a solid dual-core with 4GB of RAM, Windows 7 Professional, and a decent warranty), 4 monitors ($110 apiece), keyboards/mice ($30 a set), 3 additional Windows 7 licenses ($70 apiece academic), and the NComputing X350 kit ($250) add up to $1,820, or $455 per seat, with a single point of management and negligible power consumption at three of the four stations.
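Spelled out, the arithmetic looks like this (the prices are my ballpark assumptions from above, not vendor quotes):

```python
# Rough cost model for one NComputing X350 "kit" of four seats,
# using the (assumed, ballpark) prices from the article.
def kit_cost(pc=800, monitors=4 * 110, kb_mice=4 * 30,
             extra_win7_licenses=3 * 70, x350_kit=250):
    """Total hardware + license cost for one shared-PC kit of 4 seats."""
    return pc + monitors + kb_mice + extra_win7_licenses + x350_kit

total = kit_cost()        # one kit of four seats
per_seat = total / 4      # cost per seat
lab = 7 * total           # a 28-seat lab is seven kits
print(total, per_seat, lab)  # 1820 455.0 12740
```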
A 28-seat lab would then be under $13,000. While cheap desktops could be had for this price, you'd be hard-pressed to match the power consumption, Windows 7 Pro functionality, and management advantages. Further cost savings can be realized using NComputing's more sophisticated technologies that leverage server operating systems and hardware. This is one case where virtualization technology is basically transparent to the admin, making it well-suited to schools where virtualization expertise may be in short supply.

VirtualBox and KVM are free. We here in education like free stuff. I've used VirtualBox many times before to run test operating systems or to embed Windows XP on Linux machines for persnickety Windows programs, but it supports larger-scale desktop virtualization, too. In fact, my vision for many of my computer labs going forward is a single server running 30 identical virtual machines, each accessed by a thin client.
KVM can achieve the same results (some say much better ones), and the beauty of such a setup is that you merely need to configure a single machine and then clone it to provision all of your desktops. The desktops are virtual but look and feel like standalone PCs to the users at the thin clients. Creating the desktops is incredibly easy, and a Linux server can host Windows virtual machines, Linux machines, and so on. You could even have, for example, students in a web design class running Ubuntu desktop virtual machines and publishing their work to an Ubuntu server VM. These VMs can also move easily between servers, making the setup highly fault-tolerant.
A few problems crop up, though, in this rosy world of virtualization. If you need to run Windows (I know I do in some of my labs), then you need a volume license for every VM. This adds up quickly, even at academic pricing. The second issue is the cost of the server. If Windows XP, for example, runs well with 500MB of RAM, then 30 VMs will require 15GB of RAM, and the host OS takes at least another gigabyte. A server with 16GB of RAM and a couple of quad-core processors (most have some form of hyperthreading, so 16 effective cores are enough to run 30 VMs in a production environment) will cost real money.
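To make the sizing concrete, here's the back-of-the-envelope math. The 500MB-per-XP-VM and one-gigabyte host overhead figures are from my estimates above; the two-light-VMs-per-thread rule is my own rough assumption, not a benchmark:

```python
# Back-of-the-envelope sizing for a single VM host.
def host_ram_gb(vms, gb_per_vm=0.5, host_overhead_gb=1):
    """RAM needed to run `vms` guests plus the host OS."""
    return vms * gb_per_vm + host_overhead_gb

def vm_headroom(hardware_threads, vms_per_thread=2):
    """Rough count of light desktop VMs the host's CPUs can carry
    (assumption: ~2 light VMs per hardware thread)."""
    return hardware_threads * vms_per_thread

print(host_ram_gb(30))   # 16.0 -> matches the 16GB server above
print(vm_headroom(16))   # 32   -> comfortable headroom for 30 VMs
```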
The thin clients aren't free either, although this is one place where all those old PCs come in very handy. They may not be pretty or energy-efficient in your lab, but even ancient PCs can run Thinstation (or any OS with an RDP or VNC client) and access the server. Even after adding a second server for fault tolerance, or pushing hard on a single quad-core server, you'll probably come out ahead of (or comparable to) the NComputing solutions.

The third option is LTSP-Cluster with recycled, inexpensive, or low-end hardware. This is something I'm speaking about in theory only, as I haven't had a chance to explore it yet. However, my basement is looking like a pretty good place to try it. So is my office, for that matter, since they both seem to be homes for wayward, outdated computers.
LTSP-Cluster runs in various Linux environments, and Ubuntu supports both LTSP-Cluster and a separate piece of cloud software called Eucalyptus, the point of which is to combine the computing power of multiple machines. These machines can, in theory, be older, cheap, or low-end, since the clustering software should only assign them work they can handle: Eucalyptus might start a single virtual machine on an old P3 or P4 compute node, while starting as many as 8 on an i7-based node.
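To illustrate the scheduling idea, here's a toy sketch of capacity-based VM placement. The node specs and the one-core/one-GB-per-VM rule are hypothetical illustrations of the concept, not how Eucalyptus actually schedules:

```python
# Toy capacity-based placement: each node hosts only as many VMs
# as its cores and RAM allow. All numbers here are hypothetical.
def vm_capacity(cores, ram_gb, cores_per_vm=1, gb_per_vm=1):
    """How many VMs a node can carry, limited by CPU or RAM."""
    return min(cores // cores_per_vm, int(ram_gb // gb_per_vm))

# (cores, ram_gb) for three made-up compute nodes
nodes = {"old-p4": (1, 1), "core2": (2, 4), "i7": (8, 12)}
plan = {name: vm_capacity(c, r) for name, (c, r) in nodes.items()}
print(plan)  # the old P4 gets one VM, the i7 node as many as 8
```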
As I discussed earlier, the actual thin clients can also be older PCs (much older, in fact, than those that can run as a compute node). All of this means that, if it works, stacks of PCs that would otherwise be headed for donation can quickly be repurposed. Like I said, a lot of this is in theory. My boss is just going to love the humming of even more computers in my office as I test this out.
Anyone who has had experience with LTSP-Cluster or Eucalyptus, talk back below. Let us know the hardware you used, how well virtualization worked, and what environments you were able to create.
As I said, I'm convinced that emerging VDI technologies can be leveraged cheaply. There has to be a way to make computer labs not merely cost-effective, but downright cheap, while keeping them robust and easy to manage. Care to help me figure out the best approach?