That's a no, then, on utilization?

Summary: I don't want to imply that there isn't an argument to be made for getting server utilization up; there is, but it's not always appropriate and, where it is, neither virtualization nor partitioning are likely to be the right way to do it.

Every day, or almost every day, somebody talks to me about server consolidation and the need to get utilization rates up. That's a big deal in the general press too. Sun's president, for example, keeps blogging about how Solaris containers can help drive utilization. In real life containers are a marketing morph on trusted communities, and those are very useful, but his specific arguments on utilization are an appeal to ignorance, aimed far more at making sales than sense.

I don't want to imply that there isn't an argument to be made for getting server utilization up; there is, but it's not always appropriate and, where it is, neither virtualization nor partitioning are likely to be the right way to do it.

First, consider that the overwhelming majority of the computers whose function makes us call them "servers" are used to provide a direct service to users - and whether that's email, SQL data access, or file and print doesn't matter. What the user wants from the machine is instant response - and the faster the better. No user cares much about another user's schedule: they want their result, and right now. To deliver that we need lots of capacity on standby, not busy doing some other user's job: idle, and ready to go to work on demand. That's why most of our servers are idle upwards of 90% of the time, and too slow the other 10% or less of the time.

When partitioning started, in the 1960s, a machine with 128K of memory and two 5MB disks cost over two million bucks - and took over 200 people earning around $4K/year to babysit. Memory management was critically important, but weak and poorly understood for large applications. As a result, hardware partitioning made sense as a way of keeping developers from bringing down production without having to buy and operate a second machine.

For the same reasons, a different solution to the problem, systems virtualization, made sense too. Both solutions reduced the risk of program collision, and both fit the context of the time. Of course, in those days, an electronic data processing machine cost the equivalent of 4,000 man-years of labor, and there weren't any users in the modern sense outside the academic and then-emerging mini-computer markets. Today the typical server costs less than a person-month, users depend on interactive services, and schedulable batch processing retains its primacy only among those too tied to the mainframe to change.

So let's push a little reality into this consolidation stuff, shall we? What matters now is user satisfaction and service. So the next time someone complains to you that your servers sit idle a lot, ask them why it matters - and point out that making productive users wait, even a second a shot, also has a cost - one that overwhelms the server's cost in a matter of months if not weeks.
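Just to put rough numbers behind that claim - and every number here is an assumption chosen purely for illustration, not a measurement - here's a quick back-of-envelope sketch in Python:

    # Back-of-envelope: cost of user waiting vs. the cost of the server.
    # All figures are illustrative assumptions, not measured values.
    users = 200              # people served by the box
    waits_per_day = 50       # interactive delays each user hits per day
    seconds_lost = 1.0       # extra wait per delay, in seconds
    loaded_rate = 30.0       # fully loaded cost of an hour of user time ($)
    server_cost = 5000.0     # replacement cost of a typical departmental server ($)

    hours_lost_per_day = users * waits_per_day * seconds_lost / 3600.0
    dollars_lost_per_day = hours_lost_per_day * loaded_rate
    days_to_match_server = server_cost / dollars_lost_per_day

    print(f"Lost user time: {hours_lost_per_day:.1f} hours/day (${dollars_lost_per_day:.0f}/day)")
    print(f"Waiting costs as much as the server after {days_to_match_server:.0f} days")

With those assumptions the waiting bill matches the price of the box in about two months; raise the headcount or the delay and it drops to weeks.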

None of this means, however, that you shouldn't consolidate. As I'll discuss next week, there are reasons to do it and methods that make sense - but not everywhere, and not through partitioning or virtualization.



Talkback

  • You have forgotten

    the 80/20 rule! There ARE reasons to use virtualization. Number one is that M$ Windoze has limitations that no other competitor has, so you need to buy a LOT of servers to do something simple - like mail exchange. Each server takes up space and power in a specially-built, climate-controlled building/room. If you want to expand - and there's no more room PHYSICALLY - you are faced with the prospect of building another "room", and they are VERY expensive. If you can consolidate 8 Windoze servers into one server running VMWARE, then you save on space and power without it costing you too much more (one 8-CPU box vs 8x1-CPU boxes). Even if each of those 8 virtual servers sits at 10% utilization, you have STILL saved money.

    Consolidation and Virtualization also kill Grid technology. Although it's hard to find applications that run on Grid, the idea of Grid is elegant and the future should be bright for that technology. New tech like IBM P5 - where you can shut down or share CPUs when you need them (after spending a TON of time/money on configuration planning, which IBM consultants will "help" you with) - is anathema to Grid, and you will see that IBM is pretty cool on the whole Grid idea.
    Roger Ramjet
    • You're right, roger, but only for MS users

      Yes, virtualization can make sense for people using racks of MS servers - and for the same reason it did for bosses with mixed COBOL/Assembler programming teams on the IBM 360/370 architecture: memory management (protection) across multiple apps is/was weak.

      But you know what would work better in your example? Throw those eight sets of MS licenses away and run everything on one Linux server - no virtualization, less money, and less hassle.

      By the way, I generally appreciate your comments; please keep 'em coming.
      murph_z
      • Sorry, Linux crashes too!

        In the real world of Linux apps it is very easy for one poorly-written php or perl script to push a server to 100% utilization or bring it down.

        The need to isolate poorly-written or poorly performing apps (or system software) is universal and not a religious Windows versus Linux versus Solaris thing.
        spiv