What makes a server-centric thin-client environment attractive is, first, the low cost of the client hardware and, second, the promise of delivering a single desktop image from the server.
It's hard to argue the first point until you realize that the cycles consumed doing productive work are now on the server. This means that as you add thin clients, you need to add server resources. Still, this isn't a bad trade-off if you need to provide access to a large number of locations but only a small number of thin clients will be in service at any given time.
But what happens if you need to support hundreds or thousands of clients, most of which could be in use at one time?
The most optimistic figures for either terminal services or virtualization technologies suggest that a single physical server can support up to twenty clients running concurrently. Most systems administrators find that number is closer to 10 concurrent clients. And we're not talking about your average server, either: a server capable of supporting 10-20 concurrent clients typically costs over $15,000. Typical thin clients run about $300 (sans monitor/keyboard), so, optimistically, an environment capable of running 100 concurrent clients could easily exceed $100,000 -- enough money to buy 100 quite robust workstations, or twice that many entry-level workstations, which would still offer a user experience far exceeding the capabilities of the typical thin client.
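Using the article's own figures, a quick back-of-the-envelope model shows where the money goes. The ~$1,000 per-workstation price below is an assumption inferred from "100 quite robust workstations" fitting in roughly the same budget; it is illustrative, not a quote.

```python
import math

def thin_client_cost(concurrent_clients, clients_per_server=10,
                     server_cost=15_000, client_cost=300):
    """Estimate hardware cost of a thin-client deployment.

    Figures are the article's own: a ~$15,000 server supporting
    10-20 concurrent sessions, plus ~$300 per thin client.
    """
    servers = math.ceil(concurrent_clients / clients_per_server)
    return servers * server_cost + concurrent_clients * client_cost

# Pessimistic (10 sessions/server): 10 * $15,000 + 100 * $300 = $180,000
# Optimistic (20 sessions/server):   5 * $15,000 + 100 * $300 = $105,000
pessimistic = thin_client_cost(100)
optimistic = thin_client_cost(100, clients_per_server=20)

# Versus ~$1,000 apiece for robust standalone workstations (assumed price):
workstations = 100 * 1_000  # $100,000
```

Even the optimistic case lands above the cost of outfitting the lab with full workstations.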
Still, the big plus of this model is the ability to deliver a single desktop image instead of maintaining perhaps hundreds of individual workstation images, each requiring hours of work. RIGHT? Well, maybe ...
This assumes that the only alternative to the thin-client model is lots of hours spent configuring individual workstations, and that just isn't the case. There are a number of tools on the market that will distribute a single image out to any number of workstations. Sure, that consumes valuable network resources too -- but the full push can be done once per academic year, when classes are not in session, with incremental updates performed periodically when few students are computing. The beauty of this approach is that the economies of scale play out in your favor: as you add workstations, the number of images you have to support remains fixed.
Systems administrators don't have to worry about workstations becoming corrupted, either: all modern operating systems can be configured to keep individual user logins from getting into mischief.
In the end, the trade-offs between the thin-client model and individual workstations in a student-lab setting come out pretty much even. And if your workstation model is implemented so that idle workstation cycles are used to do research for your faculty, then the economies of scale claimed for large centralized computing resources vanish entirely.
Erik's assumption that, because a large number of students own their own computers, student labs are becoming unnecessary is just plain wrong! As the number of students owning their own computers has gone up, so has the number of student workstations we provide (now over 3,500), and demand for our facilities still leaves people waiting in line during most of the semester.
Don't get me wrong: there are lots of opportunities for virtualization (and terminal services) in Education IT, but those opportunities are limited as a solution for student labs.
As Erik alludes to, perhaps the best application for virtualization (other than its primary purpose of consolidating under-utilized physical servers into fully-utilized virtual server farms) is the delivery of applications to student-owned computers. This allows students to access discipline-specific applications which they might not otherwise be able to afford -- or which might not run on their hardware.
This solution works well for some applications, but care must be taken because of the high bandwidth requirements of many graphics-intensive applications and the growing demand for streaming audio and video. Remote-desktop delivery of virtual desktops is really most appropriate for low-bandwidth applications.
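To see why, a rough per-session estimate is useful. The resolution, frame rate, and compression ratio below are illustrative assumptions, not measurements of any particular protocol:

```python
def session_bandwidth_mbps(width=1024, height=768, bits_per_pixel=24,
                           fps=30, compression_ratio=50):
    """Rough per-session bandwidth for full-screen updates, in Mbit/s.

    All parameters are illustrative assumptions: a 1024x768 desktop
    repainting at 30 fps with a generous 50:1 codec ratio. Real
    remote-desktop protocols send only changed screen regions, so a
    static desktop costs far less -- but full-motion video playback
    approaches this bound.
    """
    raw_bps = width * height * bits_per_pixel * fps
    return raw_bps / compression_ratio / 1_000_000
```

At these settings a session playing video needs on the order of 11 Mbit/s; multiply that by a lab full of students streaming lecture capture and the network, not the server, becomes the bottleneck.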