
How a chill-pill for your server room improves your bottom line

Heatwave-wise, yesterday was probably the most uncomfortable day I can recall since moving to Massachusetts in 1991.  The temperature inside my house reached 99 degrees.
Written by David Berlind

Heatwave-wise, yesterday was probably the most uncomfortable day I can recall since moving to Massachusetts in 1991. The temperature inside my house reached 99 degrees. Ironically, yesterday was also the day that the HVAC guys showed up at my home office to start a central air conditioning project that my wife and I had been planning for over a year. Doubly ironically, it was also the day that Intel's enterprise marketing director Shannon Poulin showed up to discuss, among other things, a new server chip (Sossaman) from Intel that's designed to keep cramped server rooms from overheating (see Intel to borrow from laptops for server chips, reported by News.com's Michael Kanellos). Poulin and I escaped to a local eatery with air conditioning, where we not only beat the heat but spent a significant amount of time talking about beating the heat.

If you look back on any of my coverage regarding the most aggressive of heat-beating servers (blade servers), you'll notice that if I discuss heat and wattage, it's mostly lip service. It's not that I'm not interested in the issue. Rather, I have found other features of the various blade offerings from companies like Dell, IBM, and HP to be more important as differentiators. So, if there was ever a day that was finally apropos for a discussion of heat dissipation, yesterday was it, and Poulin obliged by going deep (I'll write up another blog entry for the announcements we went over).

The more power it takes to run a chip, the hotter that chip can get, and when you've got a bunch of chips packed into close quarters, as you very likely would in a high-density blade situation, it doesn't take too many fully populated blade enclosures before things start to heat up. "So what!?" I've always asked myself. I've been in server rooms before. The ones I've been in have had their air conditioning cranked up so much that I wondered whether frost might develop on my eyelashes. You wouldn't dare touch your tongue to any metal objects, nor would you be surprised to find a side of beef hanging from the ceiling. There was no question in my mind that the air conditioning was set to "overkill" and the people who had access to the thermostat had little regard for the money it took to run a cafeteria-sized meat locker. Apparently, the CFOs weren't bothered by the server room air conditioning line item either. Well, maybe they were, but something tells me that this is one of those line items that the CFOs never argued with because they didn't know enough to dispute what temperature the server room really needed to be in order to avoid an IT meltdown.

But now, with seriously rising energy costs, cooling a server room is no longer a line item that can be swept under the rug, far away from the scrutiny of the CFO. Poulin cited financial services outfits as the poster children for this sort of problem -- outfits for which cramped quarters are very typical and not much space is allocated to server rooms. In situations like these, claims Poulin, the blade form factor comes in extremely handy because of how many servers can be packed into a tight space. Density varies from one vendor's blade offering to another's. But, regardless of vendor, blades are always denser than the next most space-efficient form factor: the 1U rack-mountable server (see Hey, how many blades fit in that rack?). For this reason, says Poulin, blades appeal to Wall St. companies -- particularly ones that do a lot of trading and that, as a result of all that trading, have an insatiable thirst for horsepower.
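To put that density advantage in rough numbers, here's a back-of-the-envelope comparison. The chassis height and blades-per-chassis figures below are illustrative assumptions, not any particular vendor's specs:

```python
# Back-of-the-envelope rack density comparison (illustrative assumptions only;
# actual enclosure heights and blade counts vary by vendor and generation).
RACK_UNITS = 42                  # a standard 42U telecommunications rack

# 1U rack-mount servers: one server per rack unit
servers_1u = RACK_UNITS // 1     # 42 servers

# Hypothetical blade enclosure: assume a 7U chassis holding 14 blades
CHASSIS_HEIGHT_U = 7
BLADES_PER_CHASSIS = 14
servers_blade = (RACK_UNITS // CHASSIS_HEIGHT_U) * BLADES_PER_CHASSIS  # 6 chassis x 14 = 84

print(f"1U servers per rack:    {servers_1u}")
print(f"Blade servers per rack: {servers_blade}")
```

Under those assumptions, the same rack holds roughly twice as many blades as 1U servers -- which is exactly why the heat question gets so pressing.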

As a justification (from blade server vendors) for why blades are good, I've heard about these so-called financial firms with walk-in closet-sized server rooms  -- ones that need every server per square foot they can manage.  But it wasn't until yesterday that I finally asked a representative (Poulin) from a market stakeholder (Intel) if they'd actually ever been in one of these server rooms.  Poulin said yes and I asked him to describe the experience.

Said Poulin, "You see these huge vents and can literally feel the air flowing. The air is moving very quickly. This is why we're always looking to improve the thermal envelope for a given amount of processing power. With today's astronomical energy costs, things can escalate pretty quickly. First, you have the energy required to run the servers themselves. Then, you have these cooling systems that have the typical A/C componentry plus the equipment to move the air through and around the servers." Not to mention the acquisition cost of the cooling gear and the contracts to service it annually. Once you start piling blades into the proprietary enclosures that house them, and then start piling enclosures into a standard 42U (75-80 inches tall), 19-inch wide telecommunications rack, the resulting number of microprocessors, chipsets, memory chips, hard drives, power supplies, fans, etc. per square foot can turn that rack into a space heater for a small shopping center.
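To see why, it helps to translate rack wattage into cooling load. The per-rack wattage below is a hypothetical figure chosen for illustration; the watt-to-BTU and BTU-to-ton conversions are standard:

```python
# Rough rack heat-load estimate. The per-rack wattage is a hypothetical
# figure chosen for illustration; the unit conversions are standard.
BTU_PER_WATT_HR = 3.412          # 1 watt dissipated ~= 3.412 BTU/hr of heat
BTU_HR_PER_TON = 12_000          # 1 ton of cooling = 12,000 BTU/hr

rack_watts = 12_000              # assume ~12 kW for a densely packed blade rack
heat_btu_hr = rack_watts * BTU_PER_WATT_HR
cooling_tons = heat_btu_hr / BTU_HR_PER_TON

print(f"Heat output:    {heat_btu_hr:,.0f} BTU/hr")   # ~41,000 BTU/hr
print(f"Cooling needed: {cooling_tons:.1f} tons")     # ~3.4 tons
```

Under those assumed numbers, a single rack calls for roughly three and a half tons of cooling before you even account for the energy it takes to move the air around.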

Poulin was actually speaking a language I now understood. It was very much the same conversation I had one day earlier with Brian -- the owner of the outfit that was installing the central air conditioning system into my house. A 3.5 ton compressor here, a 2 ton compressor there, a pair of air handlers (one in the basement, one in the attic) to get the air moving into every nook and cranny of my three-story house. The acquisition and installation cost caused my heart to skip a beat, and I can't wait to see next July's electric bill (July is always the killer month). If I was feeling the pinch, and I definitely was, one can only imagine the pinch that companies must feel to run their meat lockers -- meat lockers where the meat is constantly working against you. At least with my house, the only things I could find that routinely dish out heat in their battle against the central air conditioning were the digital video recorder/cable box (boy, that thing gets hot), my son's Alienware computer, and the refrigerator (very much a contributor to the 99 degree temperature that was recorded in my kitchen during yesterday's sweltering heat).

So, for microprocessor companies like Intel and AMD that are looking to lower the TCO of the densely packed servers built around their chips, the challenge is to deliver the most amount of processing power with the least amount of heat. The "most amount of processing power" is what outfits like the financial services firms need to handle their trading loads without delay. The "least amount of heat" means chips with better thermal envelopes that result in a cascade of savings. On one level, you have the chip consuming less power, which, by itself, results in a savings on the electric bill. But if that chip -- through the sheer effect of Moore's Law and/or other performance-enhancing improvements such as bigger memory caches, multiple cores, thread-servicing technologies like Intel's Hyper-Threading, and higher-performance interactions with main memory -- can do the same amount of work that two or more chips did before, then, theoretically speaking, there would come a point at which the XYZ Financial Services firm needs fewer chips (and therefore fewer servers).

Over the long run, fewer chips means less electricity. When those chips run at lower wattages -- for example, the 31 watts it takes to run Intel's new Sossaman, a purely 32-bit server chip -- that yields even more of a savings. Want icing on the cake? Fewer chips, running at lower wattage, all generating less heat, could significantly cut back the air conditioning bill. Not just the sheer compressor tonnage that's needed to condition the huge volume of air that must be whooshed around and through the reduced number of blades, but also the whooshing itself. In other words, less air whooshing around means a lower bill for running the air handlers and a lower total volume of air that needs to be cooled. It's the sort of situation that could be a win-win-win-win (fewer chips, less electricity to run each of those chips, less overall cooling tonnage, less air handling) for the right company -- the kind that previously needed to pack a gazillion servers into a closet.
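Here's a rough sketch of how that cascade could add up over a year. Every input -- the fleet sizes, the legacy per-chip wattage, the electricity rate, and the cooling overhead -- is an assumption for illustration, not a measured figure; the only number taken from above is Sossaman's 31 watts:

```python
# Illustrative savings cascade (all inputs are assumptions for the sake of
# the arithmetic). Chip power only; whole-server power would be higher --
# this just shows the shape of the cascade.
HOURS_PER_YEAR = 24 * 365
DOLLARS_PER_KWH = 0.10           # assumed electricity rate
COOLING_OVERHEAD = 0.8           # assume 0.8 W of cooling/air handling per 1 W of IT load

def annual_cost(num_chips, watts_per_chip):
    it_watts = num_chips * watts_per_chip
    total_watts = it_watts * (1 + COOLING_OVERHEAD)   # chips + A/C + air handlers
    kwh = total_watts * HOURS_PER_YEAR / 1000
    return kwh * DOLLARS_PER_KWH

before = annual_cost(num_chips=200, watts_per_chip=90)   # hypothetical legacy fleet
after  = annual_cost(num_chips=100, watts_per_chip=31)   # consolidated low-power fleet

print(f"Before: ${before:,.0f}/yr   After: ${after:,.0f}/yr   Saved: ${before - after:,.0f}/yr")
```

The point isn't the specific dollar figure; it's that the savings compound, because every watt you don't burn in a chip is also a watt you don't have to pay to pump back out of the room.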

To produce that win-win-win-win situation, Intel has turned to its Pentium M (mobile Pentium) technology -- a technology with a proven track record of delivering great performance for the least amount of power, since that's exactly what today's notebooks need (something that's mindful of battery life with minimal sacrifices on other fronts). The idea of using Pentium M technology for blade servers is nothing new, though. More than 18 months ago, Fujitsu announced that it would be putting Intel's Pentium M into its Primergy BX300 blade servers. But whereas server vendors like Fujitsu were forced into a proprietary design (proprietary above and beyond the "proprietariness" of Intel's chips and chipsets), now, with Intel endorsing the idea, server vendors don't have to come up with their own approach to the problem. They can just take the Sossaman package as is from Intel and put it on their blades.

Whether or not we'll see a similar approach in the AMD world remains to be seen. AMD is making waves with a 35-watt mobile processor called the Turion. There's also a 25-watt part, but it hasn't gotten the attention that the 35-watt part has. Like the Sossaman, it will run in 2.0 GHz territory (both vendors now downplay the importance of gigahertz). But for 4 watts more, AMD customers would get the 64-bit capability found in its Opteron servers. The Sossaman chip is strictly an IA-32 (32-bit Intel Architecture) part that targets 32-bit apps out of which end users are trying to extract the most amount of horsepower with the least amount of power. That said, whereas the first Sossaman chip is a single-core design that ships in the first half of 2006, there's a dual-core design coming in the second half of 2006 that's sure to help Intel deliver on that win-win-win-win by taking performance to a level not previously achieved by such a low-power x86 part.
However, for a 32-bit application to take full advantage of a dual-core design, Poulin admits that developers may have to revisit their code. The question is, when and if they do, should they take it one step further by going after the 64-bit capabilities supported in the 32/64-bit hybrid chips from Intel and AMD? If they do, then Sossaman is definitely not an option.
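Poulin's point about revisiting code comes down to parallelism: a single-threaded program keeps one core busy and leaves the second core idle. Here's a minimal sketch of splitting a CPU-bound job across two worker processes so both cores get used; the workload is hypothetical and chosen purely for illustration:

```python
# Minimal sketch of splitting a CPU-bound job across two cores (hypothetical
# workload for illustration). A single-threaded version of this loop would
# keep one core busy and leave the second core of a dual-core chip idle.
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    halves = [(0, n // 2), (n // 2, n)]

    # One worker per core; the two halves run in parallel.
    with ProcessPoolExecutor(max_workers=2) as pool:
        total = sum(pool.map(partial_sum, halves))

    print(total)
```

Splitting the work is the easy part; deciding whether to also recompile for 64 bits while the code is open on the operating table is the harder strategic call.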

