A followup on server consolidation

Summary: There's nothing wrong with loading three different database engines on the same machine if you've got the I/O and processor bandwidth to handle them -you can trust Unix to do its job pretty much no matter what you throw at it.

About a week ago in this blog I discussed the history of both virtualization and partitioning as solutions to problems we no longer have -and which we therefore no longer need. I was thinking only in terms of how these technology ideas are being misapplied in the Unix world but, as reader Roger Ramjet pointed out, these solutions still have value in the Windows world -and, of course, the mainframers are still out there too.

The Windows 2000 kernel is quite capable of handling multiple concurrent applications, but the Windows registry really isn't -at least not without the kind of hand editing that Microsoft puts into consolidated products like Small Business Server- and neither is Microsoft's network stack unless you use one NIC per application. As a result, Windows virtualization provides a perfectly reasonable way to ensure that multiple low-use applications can be run and maintained on the same box without interfering with one another.

On the Unix side -and whether that means BSD, Linux, or Solaris to you doesn't matter- neither Microsoft's problems now, nor IBM's problems then, exist. The workarounds do, but that's not because they're needed; it's because people from the mainframe and Windows environments insist they have to have them.

With Unix, you can safely run multiple applications on the same machine -the technical issues you run into have little or nothing to do with minimizing system resource interactions, and a lot to do with externals like fail-over management and network connectivity. The most important difference, however, isn't in the technology but in what you try to do with it: when the resource is cheaper than user time, utilization becomes unimportant because the value lies in improved user service.

Consolidation generally does lead to both better utilization and better service, but it's the better service that counts, not the utilization. That was nicely illustrated in a press release issued by Sun and Manugistics yesterday. In it, they report using a Sun 20K machine with 36 USIV CPUs to set new world records on the Manugistics Fulfillment v7.1 benchmark.

It's a positive result for Sun, but it's the way they got it that counts here. To get both a 23% speed advantage and a 45% price/performance advantage over the previous record holder (an IBM P5-590), they put both the database and application set on the same machine. Remember, Unix isn't client-server, so why use a relatively slow network when you've got SMP and a fast backplane?

Machine considerations in Unix consolidation usually involve appropriate scaling, not operating system limitations in memory, network, or processor management. There's nothing wrong with loading three different database engines on the same machine if you've got the I/O and processor bandwidth to handle them -you can trust Unix to do its job pretty much no matter what you throw at it.
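As a back-of-the-envelope illustration, here's a minimal Python sketch of that sizing check. Every figure below is invented for the example -in practice you'd pull each workload's measured peaks from your own monitoring data rather than guessing:

```python
# Hypothetical sketch: the host and workload numbers are invented for
# illustration; real peak figures come from your own monitoring.

def fits(host_capacity, workloads):
    """Return True if the summed peak demands of all workloads stay
    within the host's capacity on every resource axis."""
    for resource, capacity in host_capacity.items():
        demand = sum(w[resource] for w in workloads)
        if demand > capacity:
            return False
    return True

# One midrange SMP box: CPU as aggregate GHz, disk I/O in MB/s, RAM in GB.
host = {"cpu_ghz": 48.0, "io_mb_s": 800.0, "ram_gb": 96.0}

# Measured peaks for three database engines (invented figures).
engines = [
    {"cpu_ghz": 10.0, "io_mb_s": 250.0, "ram_gb": 24.0},  # OLTP database
    {"cpu_ghz": 14.0, "io_mb_s": 300.0, "ram_gb": 32.0},  # reporting warehouse
    {"cpu_ghz": 6.0,  "io_mb_s": 120.0, "ram_gb": 16.0},  # departmental app DB
]

print(fits(host, engines))  # True: headroom remains on every axis
```

The point of the exercise is that the gating question is arithmetic about peaks and headroom, not anything about the operating system's ability to keep the workloads apart.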

On the other hand, you can't trust the typical corporate PC network to the same degree, so one of the critical pieces in making consolidation work is to measure response on the user desktop, not at your server. What you'll often find is that the server's lightly loaded, but the network is forcing the user to wait -and in that situation you don't consolidate the servers, you put them electrically adjacent to the users they serve and go back to the budget committee for money to clean up the network.
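That diagnosis can be reduced to one division: compare the time the server spent on each request against the time the user actually waited. A small Python sketch, with invented timing samples standing in for real desktop measurements:

```python
def network_share(samples):
    """Given (server_ms, end_to_end_ms) pairs measured per request,
    return the fraction of total user wait spent outside the server."""
    server = sum(s for s, _ in samples)
    total = sum(t for _, t in samples)
    return (total - server) / total

# Invented measurements: server-side processing time from the server's
# logs vs. stopwatch time at the user's desktop, in milliseconds.
samples = [(40, 900), (55, 1200), (35, 700), (50, 1100)]
share = network_share(samples)
print(f"{share:.0%} of user wait happens outside the server")  # 95%
```

When that fraction is high, the server is the wrong thing to fix -which is exactly the "lightly loaded server, waiting user" case described above.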

The reality is that since it's not the Unix technology that limits your ability to consolidate, you don't need work-around tools like partitioning and virtualization to help you. What you do need is a good understanding of usage demand patterns (and the willingness to change once you discover where you were wrong) because your success depends on meeting user needs, not on saving a few thousand bucks on hardware at the cost of making hundreds of users wait minutes every day. That's the real bottom line on consolidation: if minimizing user time means leaving capacity idle, then do that and smile because capacity's cheap, but user time isn't.
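That trade-off is easy to put numbers on. A hedged worked example -user count, wait time, and loaded labor rate are all invented here; substitute your own:

```python
def annual_wait_cost(users, minutes_per_day, rate_per_hour, workdays=250):
    """Dollar value of the time users spend waiting over a year."""
    hours = users * (minutes_per_day / 60.0) * workdays
    return hours * rate_per_hour

# 300 users each losing 3 minutes a day, valued at a $40/hour loaded rate.
cost = annual_wait_cost(300, 3, 40)
print(f"${cost:,.0f} a year spent waiting")  # $150,000 a year spent waiting

hardware_saved = 5000  # the "few thousand bucks" saved on hardware
print(cost > hardware_saved)  # True: user time dwarfs the hardware saving
```

Even with conservative inputs, the waiting cost swamps the hardware saving by an order of magnitude or two -which is the arithmetic behind "capacity's cheap, but user time isn't."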

Topic: Operating Systems


  • Ah, if only that were true!

    You said:
    "capacity's cheap, but user time isn't."

    Try telling that to a previous employer of mine. User time was seen as costless - in fact, using the term "soft costs" was a quick way to get your funding denied! I wonder how prevalent that view is in the big wide world.

    In the view of that CFO, the only real costs were ones that required writing a check, and cost reduction meant writing a smaller check (or best, not writing one at all). Reducing user wait times wasn't even on the list.
    • Odd.

      "In the view of that CFO, the only real costs were ones that required writing a check..."

      How does one get to be a CFO without having taken a basic class in economics?
      • I know.

        It was a question that was asked frequently.

        Nobody ever got a good answer.
        • Simple: CFOs get paid to manage money, not make it

          Making money isn't in the CFO job description; they're bean counters -meaning someone else has bean there and made it for them to count.

          I have a nasty trick I like to use on the worst cases. CFOs like leases -I have no idea why; the tax reasons are long passé, and investors know enough to look past balance sheets and see the leases. What I do is suggest getting their office PC gear on the same lease -with a 30 month expiry- as the company's key servers. The result? They'll fight to get you upgrade money instead of fighting to keep you from getting it.
  • Linux systems crash too!

    The need for isolation from poorly written code is not limited to mainframes or Windows.

    Just ask any Linux-based web hosting company about the problems they have with a client's php or perl script "going wild" and taking down a server.

    Suddenly, Linux admins don't seem all that different from Windows admins or others - the need to sandbox apps that are written by others to avoid system-wide failure is just as important.

    In a corporate environment, the equivalent is a centralized server ops team that needs to protect the CRM app group from the financial team when each team's code is running on one box that has enough CPU and I/O power, as you stated.
  • you got it wrong - virtualization is blade servers done right!

    Virtualization is really just an architecturally elegant evolution of the horribly kludgy hardware solutions called blade servers.

    Virtualization gives all the management, isolation, scalability, and flexibility benefits that have everyone flocking to blade servers, but without the negatives.

    Virtualization can be implemented on any host; it doesn't require buying into the proprietary blade chassis platform of a single vendor such as IBM or HP and then losing all the flexibility in hardware pricing and configuration that we currently enjoy with standardized servers.

    Virtualization also lets you "slice and dice" the VMs any way you want. Rather than being limited by the blade vendor's marketing manager, who has decided whether single-CPU or dual-CPU blades are available or how much disk space he thinks you want on a blade, you can configure your own "virtual blade" any way you see fit, since it is based solely on the standardized resources of the underlying server.