
IBM: The computer isn't the network. It's the blade chassis.

Written by David Berlind

If you're buying servers and you're thinking it might be time to give the blade form factor a look (which I highly recommend you do), or, if you're already into blades but willing to consider switching vendors (very hard to do since there are no blade standards), then IBM's BladeCenter is definitely worth a look.  Surprisingly, ZDNet's readers still beat me up every time I write about blades, saying that by the time they reach the end of what I've written, they still don't know what a blade is.  Blades are basically full-blown servers that have been fit onto an expansion card that slides into a special chassis, or housing, alongside a bunch of other blade servers.  This design has several advantages over other server form factors (1Us, towers, etc.): it usually makes more efficient use of limited floor space (more servers per square foot), it allows servers to share resources like network and storage connectivity (thereby greatly reducing the rat's nest of cabling), and it offers management features such as hot-swapping (blades can be removed or inserted without powering anything else up or down) and on-the-fly provisioning that lend themselves to rapid fault recovery.

When it comes to blades, IBM has been making all the right moves recently -- moves that IBM hopes will keep it in the top spot for IDC-rated quarterly revenue market share, which it took from IDC's #2-rated HP in 2003.  Together, the two companies account for over 70 percent of the blade category, so there's clearly something both are doing right.  As a side note, measuring market share is tricky.  Most companies are rated based on total share of quarterly revenues (quarters are then added up for annual ratings).  But that doesn't necessarily mean the current leader has sold the most units into the market for that quarter, nor does it mean it has the biggest installed base over its history.

I had a chance to catch up with Big Blue's blade chief Doug Balog, who has been making the rounds to remind press and analysts of what those moves are and why they should earn the company's blade offerings a spot on the short list when it comes to blade buying.  Until May of last year, Balog was in charge of worldwide engineering for IBM's blade offerings. Then, in May of 2004, he was called up to be the group's frontman.  According to Balog, "the rationale behind [putting an engineer in charge of the whole group] was that blades are somewhat of a technical discussion with customers because of how it involves storage, networking, servers, and software and how customers still consider them to be a disruptive technology that they must try to figure out how to fit into their environments."

One of the first achievements Balog spoke of was the delivery of 4 Gb/s Fibre Channel connectivity for storage in its BladeCenter.  As it turns out, the "midplane" in IBM's BladeCenter chassis -- the conduit through which data passes from a server to a Fibre Channel storage switch -- has always been capable of handling 4 Gb/s of throughput.  It just wasn't until this year that the first 4 Gb Fibre Channel switches -- a.k.a. 4GFC-compliant switches -- were tested for interoperability and started to come to market.  Architecturally, any blade requiring a 4GFC connection to the BladeCenter midplane requires a 4GFC daughtercard.  With a connection to the midplane by way of that daughtercard, a server is connected directly to a Fibre Channel switch that fits in the rear of the BladeCenter chassis (it doesn't take up a server slot).  That switch, in turn, can be connected directly to a storage network.  According to Balog, Fibre Channel switch vendors Brocade, McData, and QLogic already have 4GFC switches that fit into the BladeCenter chassis.  Said Balog, "QLogic also came up with a daughtercard version of its 4 Gb host bus adapter that fits on our blades.  The result is connectivity for our blades all the way across the midplane to the switch to storage."
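Note the units: the "4G" in 4GFC is gigabits, not gigabytes. A quick back-of-the-envelope check, using 4GFC's published 4.25 Gbaud line rate and 8b/10b encoding (both standard Fibre Channel figures, not details from this article):

```python
# 4GFC payload throughput, estimated from standard line-rate figures.
LINE_RATE_BAUD = 4.25e9   # 4GFC serial line rate: 4.25 gigabaud
ENCODING_BITS = 10        # 8b/10b encoding: 10 line bits carry each data byte

# Payload bytes per second per direction, before frame/protocol overhead
payload_bytes_per_sec = LINE_RATE_BAUD / ENCODING_BITS
print(f"{payload_bytes_per_sec / 1e6:.0f} MB/s per direction")  # 425 MB/s
```

That works out to 425 MB/s of payload per direction before frame and protocol overhead, which is why 4GFC is usually quoted as roughly 400 MB/s -- a far cry from the "4 GB/s" that the gigabyte abbreviation would suggest.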

Perhaps the more interesting of BladeCenter's recent developments is the way IBM may have broken new ground on software licensing.  Currently, new developments in processor architecture are beginning to test the traditional models for selling software.  For example, depending on the software vendor, software used to be sold by the server or by the number of processors (in the case of multiprocessor systems).  Along came multi-core processors (essentially two or more processors on a single die), and some software vendors began treating each core as a separate processor (selling their software by the core) while others priced it per die (or, as some like to say, per socket -- the receptacle on the motherboard into which a chip containing one or more cores is inserted).

Then, in a deal it cut with Novell, along came IBM, and instead of pricing by server, processor, core, or socket, customers have the option of pricing Novell's SuSE Linux by the chassis.  Linux, of course, is free to those willing to bear the burden of supporting it themselves.  But those who want to subscribe to Novell's support for Linux can do so at an annual cost of $2759 per chassis.  This of course makes no sense if you're only going to put Linux on one of the maximum of 14 blades that can fit into a BladeCenter chassis.  Balog says the break-even point comes at the 8th blade, essentially making the support subscription on blades 9 through 14 free.
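The break-even claim is easy to sanity-check. The sketch below assumes a hypothetical per-blade subscription price of about $345 -- the article doesn't give Novell's per-blade or per-socket rate, so that figure is an assumption chosen to match Balog's stated break-even:

```python
import math

CHASSIS_PRICE = 2759     # Novell's per-chassis annual subscription (from the article)
PER_BLADE_PRICE = 345    # hypothetical per-blade subscription cost; NOT from the article
MAX_BLADES = 14          # BladeCenter chassis capacity

def break_even_blades(chassis_price, per_blade_price):
    """Smallest number of blades at which per-chassis pricing wins."""
    return math.ceil(chassis_price / per_blade_price)

for n in range(1, MAX_BLADES + 1):
    per_blade_total = n * PER_BLADE_PRICE
    winner = "chassis" if CHASSIS_PRICE <= per_blade_total else "per-blade"
    print(f"{n:2d} blades: per-blade ${per_blade_total:5d} vs chassis ${CHASSIS_PRICE} -> {winner}")
```

Under that assumed rate, eight blades is where the per-blade total ($2,760) first crosses the $2,759 chassis price; every blade added after that costs nothing extra under the chassis plan.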

Prior to the per-chassis pricing, Balog says, SuSE Linux was priced per socket. With each of IBM's blades having up to two sockets (for a potential total of 28 per chassis), not only might the package deal result in savings for some customers, but Balog also argued that it makes license management less burdensome.  When priced per chassis, for example, IT managers don't have to worry about whether the OS is running on 5, 10, or all 14 servers at any given time.  Short of a per-chassis licensing plan, IT shops must keep a close eye on any on-the-fly server provisioning to make sure the reprovisioning of just one more server doesn't put them out of subscription compliance.

Such on-the-fly provisioning, where everything about an individual blade -- from its operating system to the applications running on it -- can change at a moment's notice (either to supply additional server bandwidth to a starved application or to run a periodic process such as a nightly Monte Carlo simulation), is a feature that's highly touted by blade makers as an advantage of the blade architecture.  But in truth, the technologies required for such on-the-fly provisioning (primarily the Preboot Execution Environment -- a.k.a. "PXE," pronounced "pixie") are in just about all systems, including many desktops.  Also, Balog freely admitted that even though the Novell SuSE package deal is advantageous to shops that may do a lot of on-the-fly reprovisioning, it's not the sort of thing that many IT shops really do.  "But," argued Balog, "Sarbanes-Oxley compliance is real and per-chassis pricing makes it a lot easier to cover yourself for compliance."

More to the point of how the chassis could be becoming the computer (vs. the network), Balog described a package deal that IBM has put together with Citrix and VMware that immediately took me back to my days as an IT manager, when I used to connect IBM 3270 terminals and PCs with IRMA boards to IBM's 317x SNA controllers that front-ended an IBM mainframe.  Consider Citrix's MetaFrame technology today and how it takes a single server and carves it up into a bunch of partitions, each of which powers an end user on a thin client with their own instance of Windows (operating system, applications, etc.).  With IBM's BladeCenter/Citrix/VMware deal, the partitioning of a server into individual virtual machines (of the workstation type, not the server type) is done by VMware (one of my favorite products) instead of Citrix MetaFrame.  Citrix supplies the connectivity that remotes the keyboard and display to an end user on a Citrix-enabled thin or thick client (as well as Citrix-esque management), and the result is a one-to-one correspondence of end user to VMware virtual machine.

Now, take the server view of that architecture up a level to a chassis and, well, how is this any different from what a mainframe used to be?  You've got terminals on users' desktops, a 317x-ish, controller-like technology that handles connectivity to the big iron, and then, in the big iron, you've got partitioned user spaces, applications, and operating systems. Balog says one chassis can simultaneously support about 210 users. Users of all three products (BladeCenter, MetaFrame, VMware) could probably build something similar on their own, but Balog says the three companies have assembled and tested the bundle as a one-stop shop.  Balog, who contrasts this plan with a similar one from HP where each user is assigned a complete blade, envisions a point in time when end users can connect to their image from remote places like home.  He says IBM is working with Citrix to overcome some weaknesses in the latter's protocol to support such a scenario.  For example, says Balog, "You can't take advantage of a local USB drive or printer if you're at home accessing a VMware-based virtual machine in one of our BladeCenters at work.  There's new technology coming from Citrix that will deliver on this whole desktop experience."  Despite its shortcomings, it's just a very cool back-to-the-future marriage of a bunch of state-of-the-art technologies.
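Balog's 210-user figure implies a per-blade session density that's easy to derive (the per-blade number is my inference, not something Balog stated):

```python
USERS_PER_CHASSIS = 210    # Balog's figure for one fully loaded chassis
BLADES_PER_CHASSIS = 14    # maximum blades per BladeCenter chassis

# Implied workstation-type virtual machines per blade
users_per_blade = USERS_PER_CHASSIS / BLADES_PER_CHASSIS
print(users_per_blade)  # 15.0
```

In other words, the bundle appears to assume roughly 15 VMware workstation-class virtual machines running on each blade.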

Balog also cited the company's recent deal with Sun that resulted in BladeCenter's support for Solaris.  Whereas Sun has positioned the announcement as IBM capitulating to the market's demand for Solaris, Balog sees it differently.  Said Balog, "Our success in the marketplace and their lack of a blade offering left them no choice but to make sure their operating system was running on the most successful blade in business.  Clearly they were interested in keeping people on Solaris, even if it meant running it on our blades."  But Balog also agrees that IBM can benefit from access to typical Solaris strongholds.  "The government, financial, and telecommunications sectors are three historically strong Solaris segments where blades have also done well," said Balog. "Where they have Solaris applications running on the SPARC processor, if I can pick up that application and move it to IBM's [x86] blade, that would be awesome. I think it's important." Balog is, of course, speaking of Solaris' relatively newfound x86 religion.

In terms of what's coming, IBM has announced that it will deliver a new BladeCenter chassis in 2006 that is forward and backward compatible with its current offering.  In other words, old blades will fit in the new chassis, and blades designed for the new chassis will also work in the old one.  In a bit of inside baseball, HP has, behind the scenes, been predicting the launch of a new BladeCenter chassis on the basis that the current one can't fully power an entirely loaded chassis in the event that one of the redundant power supplies fails.  In an answer that couched the claim as a non-issue, Balog essentially admitted it was true, saying that when one power supply fails in a fully loaded BladeCenter (14 blades), the BladeCenter's design throttles back the power going to each of the blades in order to keep them all running.  In other words, should one of the redundant supplies fail, all servers cannot run at full throttle on the remaining power.  Balog said the real test is whether the applications keep running.  "In a maximum configuration where the chassis is full of blades and full of drives, we do throttle down.  But it's for a brief moment in time (until the failed supply is replaced) and no one notices."  Nevertheless, Balog says that in addition to more midplane bandwidth and better support for virtualizing both servers and I/O, the new chassis will also have a better power design.

