How a chill-pill for your server room improves your bottom line

Summary: Heatwave-wise, yesterday was probably the most uncomfortable day I can recall since moving to Massachusetts in 1991.  The temperature inside my house reached 99 degrees.


Heatwave-wise, yesterday was probably the most uncomfortable day I can recall since moving to Massachusetts in 1991. The temperature inside my house reached 99 degrees. Ironically, yesterday was also the day that the HVAC guys showed up at my home office to start a central air conditioning project that my wife and I had had in the plans for over a year. Doubly ironically, it was also the day that Intel's enterprise marketing director Shannon Poulin showed up to discuss, among other things, a new server chip (Sossaman) from Intel that's designed to keep cramped server rooms from overheating (see Intel to borrow from laptops for server chips, reported by News.com's Michael Kanellos). Poulin and I escaped to a local eatery with air conditioning where we not only beat the heat, but spent a significant amount of time talking about beating the heat.

If you look back on any of my coverage of the most aggressive of heat-beating servers (blade servers), you'll notice that when I discuss heat and wattage, it's mostly lip service. It's not that I'm not interested in the issue. Rather, I have found other features of the various blade offerings from companies like Dell, IBM, and HP to be more important as differentiators. So, if there was ever a day that was finally apropos for a discussion of heat dissipation, yesterday was it, and Poulin obliged by going deep (I'll write up another blog entry for the announcements we went over).

The more power it takes to run a chip, the hotter that chip can get. And when you've got a bunch of chips packed into close quarters, as you very likely would in a high-density blade situation, it doesn't take too many fully populated blade enclosures before things start to heat up. "So what!?" I've always asked myself. I've been in server rooms before. The ones I've been in have had their air conditioning cranked up so much that I wondered whether frost might develop on my eyelashes. You wouldn't dare touch your tongue to any metal objects, nor would you be surprised to find a side of beef hanging from the ceiling. There was no question in my mind that the air conditioning was set to "overkill" and that the people who had access to the thermostat had little reverence for the money it took to run a cafeteria-sized meat locker. Apparently, the CFOs weren't bothered by the server room air conditioning line item either. Well, maybe they were, but something tells me that this is one of those line items that the CFOs never argued with because they didn't know enough to dispute what temperature the server room really needed to be in order to avoid an IT meltdown.

But now, with seriously rising energy costs, cooling a server room is no longer a line item that can be swept under the rug, far away from the scrutiny of the CFO. Poulin cited financial services outfits as the poster children for this sort of problem -- outfits for which cramped quarters are very typical and not much space is allocated to server rooms. In situations like these, claims Poulin, the blade form factor comes in extremely handy because of how many servers can be packed into a tight space. Density varies from one vendor's blade offering to another's. But, regardless of vendor, blades are always better than the next most space-efficient form factor: the 1U rack-mountable server (see Hey, how many blades fit in that rack?). For this reason, says Poulin, blades appeal to Wall St. companies -- particularly ones that do a lot of trading and that, as a result of all that trading, have an insatiable thirst for horsepower.

As a justification (from blade server vendors) for why blades are good, I've heard about these financial firms with walk-in-closet-sized server rooms -- ones that need every server per square foot they can manage. But it wasn't until yesterday that I finally asked a representative (Poulin) from a market stakeholder (Intel) if he'd actually ever been in one of these server rooms. Poulin said yes, and I asked him to describe the experience.

Said Poulin, "You see these huge vents and can literally feel the air flowing. The air is moving very quickly. This is why we're always looking to improve the thermal envelope for a given amount of processing power. With today's astronomical energy costs, things can escalate pretty quickly. First, you have the energy required to run the servers themselves. Then, you have these cooling systems that have the typical A/C componentry plus the equipment to move the air through and around the servers." Not to mention the acquisition cost of the cooling gear and the contracts to service it annually. Once you start piling blades into the proprietary enclosures that house them, and then start piling enclosures into a standard 42U-sized (75-80 inches tall), 19-inch wide telecommunication rack, the resulting number of microprocessors, chipsets, memory chips, hard drives, power supplies, fans, etc. per square foot can turn that rack into a space heater for a small shopping center.
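To put rough numbers on that space-heater claim, here's a quick back-of-envelope sketch. Every figure below (blades per enclosure, enclosures per rack, watts per blade) is an illustrative assumption, not a vendor spec:

```python
# Back-of-envelope heat load for one fully populated rack.
# All figures are illustrative assumptions, not vendor specifications.

BLADES_PER_ENCLOSURE = 14   # assumed blade count for one enclosure
ENCLOSURES_PER_RACK = 6     # assumed 7U enclosures filling a 42U rack
WATTS_PER_BLADE = 250       # assumed draw: CPUs, memory, drives, fans, losses

rack_watts = BLADES_PER_ENCLOSURE * ENCLOSURES_PER_RACK * WATTS_PER_BLADE
print(f"One rack dissipates roughly {rack_watts / 1000:.1f} kW as heat")

# Virtually all of the electrical power a server draws ends up as heat
# that the room's air conditioning must then remove.
```

With those assumed numbers, a single rack is in the tens-of-kilowatts range -- more than ten household space heaters running flat out, which is the scale the air conditioning has to fight.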

Poulin was actually speaking a language I now understood. It was very much the same conversation I had had one day earlier with Brian -- the owner of the outfit that was installing the central air conditioning system in my house. A 3.5-ton compressor here, a 2-ton compressor there, a pair of air handlers (one in the basement, one in the attic) to get the air moving into every nook and cranny of my three-story house. The acquisition and installation cost caused my heart to skip a beat, and I can't wait to see next July's electric bill (July is always the killer month). If I was feeling the pinch, and I definitely was, one can only imagine the pinch that companies must feel to run their meat lockers -- meat lockers where the meat is constantly working against you. At least with my house, the only things I could find that routinely dish out heat in their battle against the central air conditioning were the digital video recorder/cable box (boy, that thing gets hot), my son's Alienware computer, and the refrigerator (very much a contributor to the 99 degree temperature that was recorded in my kitchen during yesterday's sweltering heat).

So, for microprocessor companies like Intel and AMD that are looking to lower the TCO for their densely packed servers, the challenge is to deliver the most amount of processing power with the least amount of heat. The "most amount of processing power" is what outfits like the financial services firms need to handle their trading loads without delay. The least amount of heat means chips with better thermal envelopes that result in a cascade of savings. On one level, you have the chip consuming less power, which, by itself, results in a savings on the electric bill. But if that chip, through the sheer effect of Moore's Law and/or other performance-enhancing improvements such as bigger memory caches, multiple cores, thread-servicing technologies like Intel's Hyper-Threading, and higher-performance interactions with main memory, can do the same amount of work that two or more chips did before, then, theoretically speaking, there would come a point at which the XYZ Financial Services firm needs fewer chips (and therefore fewer servers).

Over the long run, fewer chips means less electricity. When those chips run at lower wattages -- for example, the 31 watts it takes to run Intel's new Sossaman purely 32-bit server chip -- that yields even more of a savings. Want icing on the cake? Fewer chips, running at less wattage, all generating less heat, could significantly cut back the air conditioning bill. Not just to run the sheer compressor tonnage that's needed to condition the huge volume of air that must be whooshed around and through the reduced number of blades, but also to cut back on the whooshing. In other words, less air whooshing around means a lower bill for running the air handlers and also means a lower total volume of air that needs to be cooled. It's the sort of situation that could be a win-win-win-win (fewer chips, less electricity to run each of those fewer chips, less overall cooling tonnage, less air handling) for the right company -- the kind that previously needed to pack a gazillion servers into a closet.
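The win-win-win-win cascade can be sketched with some rough arithmetic. Everything here is an assumption for illustration (electricity price, cooling overhead, chip counts, the 90-watt legacy figure) except the 31-watt Sossaman number from above:

```python
# Sketch of the savings cascade: fewer chips, lower wattage per chip,
# and proportionally less cooling. All inputs are assumptions for
# illustration; only the 31 W Sossaman figure comes from the article.

KWH_PRICE = 0.12          # assumed $/kWh
HOURS_PER_YEAR = 24 * 365
COOLING_OVERHEAD = 0.7    # assumed: 0.7 W of cooling load per 1 W of IT load

def annual_cost(chip_count, watts_per_chip):
    """Yearly electric bill for the chips plus the A/C that cools them."""
    it_watts = chip_count * watts_per_chip
    total_watts = it_watts * (1 + COOLING_OVERHEAD)  # servers + cooling
    return total_watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

before = annual_cost(chip_count=400, watts_per_chip=90)  # assumed legacy parts
after = annual_cost(chip_count=200, watts_per_chip=31)   # assumed consolidation
print(f"before: ${before:,.0f}/yr  after: ${after:,.0f}/yr  "
      f"saved: ${before - after:,.0f}/yr")
```

Note that the cooling savings falls out automatically: because cooling cost is modeled as proportional to the heat the chips produce, every watt shaved off the chips shaves another fraction of a watt off the A/C.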

To produce that win-win-win-win situation, Intel has turned to its Pentium M (mobile Pentium) technology -- a technology with a proven track record of delivering great performance for the least amount of power, since that's exactly what today's notebooks need (something that's mindful of battery life with minimal sacrifices on other fronts). The idea of using Pentium M technology for blade servers is nothing new, though. More than 18 months ago, Fujitsu announced that it would be putting Intel's Pentium M into its Primergy BX300 blade servers. But whereas server vendors like Fujitsu were forced into a proprietary design (proprietary above and beyond the "proprietariness" of Intel's chips and chipsets), now, with Intel endorsing the idea, server vendors don't have to come up with their own approach to the problem. They can just take the Sossaman package as-is from Intel and put it on their blades.

Whether or not we'll see a similar approach in the AMD world remains to be seen. AMD is making waves with a 35-watt mobile processor called the Turion. There's also a 25-watt part, but it hasn't gotten the attention of the 35-watt part. Like the Sossaman, it will run in the 2.0GHz territory (both vendors now downplay the importance of GHz). But for 4 watts more, AMD customers would get the 64-bit capability found in its Opteron servers. The Sossaman chip is strictly an IA-32 (32-bit Intel Architecture) part that targets 32-bit apps out of which end users are trying to extract the most amount of horsepower at the least amount of power. That said, whereas the first Sossaman chip is a single-core design that ships in the first half of 2006, there's a dual-core design coming in the second half of 2006 that's sure to help Intel deliver on that win-win-win-win by taking the performance to a level not previously achieved by such a low-power x86 part.
However, for a 32-bit application to take full advantage of a dual-core design, Poulin admits that developers may have to revisit their code. The question is, when and if they do, should they take it one step further by going after the 64-bit capabilities supported in the 32/64-bit hybrid chips from Intel and AMD? If they do, then Sossaman is definitely not an option.


Talkback

  • 3rd time is a charm

    The StrongARM processor was the first chip to be engineered to use less power. Although Mr. Ellison's network computer never flew, the last of the Apple Newtons were fantastic devices! It's a testament to ARM's design that InHell decided to keep it, rather than replace it with some crappy x86 technology. The XScale (as StrongARM is now called) is an excellent low-power chip - why not put THAT into blades? (Answer: no floating point unit.)

    The best low-power chip to date was the Transmeta Crusoe. This technological marvel was the pinnacle of Linus Torvalds' accomplishments. It's too bad that Transmeta took so long, and had few financial backers, for this chip to be a success. Maybe the InHell monopoly rolled these guys up - we'll probably never know . . .

    The Pentium-M was low power AND low performance! Take a stock 3GHz chip and run it at 1.4GHz - and guess what, it uses less power! WOW! What a concept! Yes, that's an oversimplified explanation, but essentially true - if you run a Pentium-M at 3.6GHz (which it should be capable of), it will drain just about as many watts as your "typical" home PC (maybe a tad less). The Pentium-M is more about finding the happy medium where the power vs. speed graph is optimal. Much of the power-consumption savings with the P-M is from idling the chip when it's not needed - something that constantly-pounded servers would never use. So if you think you're going to save big by using these new processors, maybe you should think again.
    Roger Ramjet
    • Transmeta was never really about low power

      What happened with Transmeta is that they took a huge gamble on a VLIW processor that was optimized for emulation. They thought that they could get a high speed solution that way that would knock the socks off of the competition.

      When the silicon came in, they were disappointed in the speed, so they tried to repackage it as a low-power solution instead. But a lot of this was smoke and mirrors. Van's Hardware did a rather convincing trick with a laptop powered by a Transmeta CPU -- he put it in his freezer and discovered that it ran a lot faster without any other mods. Which proved his point that the CPU was not really all that efficient; it was just thermally throttled so that it would never get very warm.

      I have to agree with the other post that Intel is going to have a difficult time selling people Pentium Ms as server chips when you can have 64-bitness in the Opteron, low power if you wish as well, and the full AMD64 x86 instruction set instead of Intel's somewhat lame copy on the Pentium 4. If there is any extra power required to run 64 bits, it is also doing more work more efficiently, especially for the huge applications and databases that the target market is running. I also suspect that you could put huge amounts of memory per server in Opteron CPUs and thus speed up data throughput by getting much more work out of each CPU, simply due to the efficiencies of having so much memory quickly accessible to a dual/quad-core CPU on the much more efficient Athlon memory architecture.

      AMD has done their homework, and on a shoestring budget has out-engineered big Intel. It is going to take some more time to turn this big ship, and coming out of the turn, AMD is probably going to have a solid 20-30% of the whole high end market by then and enough cash to continue competing.

      Which is good for everybody because the high end stuff will trickle down to low and middle end where most of us are, and we will be getting some great hardware out of this competition.
      geewhizbang
  • Sossaman 32 bit technology is obsolete

    I found INTEL's Sossaman 32-bit Pentium M-derived solution totally obsolete even today. Who is going to spend top dollars for two dual-core Sossaman chips which can only take one piece of 4GB DIMMs? That's 4 cores sharing 4GB of memory on a narrow FSB! Even VIA has managed to clone AMD64 instructions on their mobile chip, yet INTEL is unable to copy AMD64 to the Pentium III. INTEL engineering is seriously lagging. Look at Opteron: the Iwill 2P Opteron board has 16 memory slots that can take a total of 64GB of memory; that's what you call a server. On power consumption, AMD has been selling the EE model (30 watts) and HE model (55 watts) for two years already. I would rather buy some EE Opterons plus 64GB of memory running 64-bit code, instead of some Pentium M-based 32-bit solution. 32-bit belongs to the past.
    sharikou