
A closer look at the benefits of blades

Are blades all they're cracked up to be? They may actually be better for the companies that sell them than they are for the people who buy them
Written by David Berlind

In response to my recent articles about blades -- one focused on RLX, the other on how Dell could rock the blade market -- many of ZDNet's readers wrote in to say that comparing 1U-sized rack-mountable servers with blade alternatives is not only legitimate but highly relevant to the decisions they've already made or are in the process of making.

I'm becoming increasingly bearish about blades when compared with other server form factors, such as 1Us. (The term comes from a unit of measurement: a 1U server is 1.75 inches thick.) So bearish am I that, in the traditional marketspeak of "razors and blades", I'm beginning to wonder whether the server blades are really the razors, and the management software that runs them is really the blades.

Blade servers are like the video or network cards that you snap into a slot in a PC. The difference is that instead of snapping into a PC, the card (henceforth, "the blade") snaps into a slot in a special enclosure that can hold other blades. Each blade is an entire, self-contained server (usually an Intel-based one) that might include on-board storage. When storage is not on board, the blade is normally connected to networked storage such as a storage area network (SAN) or network-attached storage (NAS). Blades typically share resources with other blades. Those resources -- power supplies, networking switches, storage switches and so on -- can usually be found in the same enclosure as the blades that share them. Blades from one vendor, however, do not fit into enclosures from another.

First-tier blade vendors Hewlett-Packard and IBM, and their second-tier competitors such as RLX, Egenera and Verari (Dell won't be serious about blades until November), talk about blades as though they're the best thing since sliced bread for everyone. But are they? I'm not so sure. As far as I can tell, the greatest benefits of blades are the ones that are most difficult to quantify in terms of total cost of ownership (TCO).

For example, since much of the aforementioned resource sharing takes place through a backplane inside the enclosure, a blade deployment is typically devoid of the cable nests found in server deployments involving other form factors, such as 1Us and towers. Blade vendors actually compete on the number of cables you can expect to eliminate with a fully loaded enclosure. But beyond the minimal costs of the cables themselves, cable elimination is a convenience. For shops that have hundreds or thousands of servers that are frequently being moved (from one enclosure to another, or because of failure), this convenience may be a very important one. The same can be said for hot-swapping -- a feature that lets a blade go live on the network simply by being inserted into an enclosure, with no powering up or down of anything required.

But these are conveniences and, as it turns out, many shops that have gone with blades just plugged them in and left them there, much as they would have done with 1Us. It's not that most server administrators wouldn't like to have these modern-day conveniences (redoing the tie wraps for the cables on my server racks was never my favourite thing to do), but the question Dell raises -- the impetus for my last column on the topic -- is a fair one: should you have to pay a lot more for this convenience?

Duelling over blade claims
Right about now, blade vendors are crying foul. "There's more to blades!" they'll tell you. "Much more, and it's quantifiable." One supposedly quantifiable unique selling proposition of blades is their density -- the number of servers per square foot that you can squeeze into a server room or data centre. This claim is easily questioned, for three reasons.

First and foremost, even if space was once a problem for some companies, there's evidence that it isn't anymore. All the downsizing that's taken place over the last decade, coupled with outsourcing, has freed up plenty of floor space.

Second, one of the more endearing features of blades, vendors will tell you, is that they're the ideal platform for server consolidation. This is where you take a software product like VMware's ESX Server and make one physical server behave like a bunch of distinctly separate systems. To this I ask vendors, "Which one is it? Do I need to fit more physical servers into my fixed, limited space, or do I just need fewer servers?" Do the maths. Take a rack of 1U servers, add some VMware to get just one more "system" out of each server, and the density improvement is 100 percent -- way better than what you'll get with blades, which leads me to the third point.

Beyond points one and two, if space is still a problem, a close analysis of most offerings that include enterprise-class servers (that is, servers with two processors and two SCSI drives rather than IDE) at the enclosures' most density-efficient, fully loaded configurations reveals that the average overall gain in density over 1U form factors is somewhere in the 25 to 35 percent range. (Hint: read the fine print very carefully.) Even supposing space is a real problem for you, you have to ask yourself how much space you must be able to reclaim (taking growth into account) before you'll even consider a blade implementation. Warning: in doing this, you may end up weeding out all blade offerings as solutions to your space problem.
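To put rough numbers on the density argument, here's a quick back-of-the-envelope calculation. The 42U rack is my assumption (a standard full-height rack); the percentages are the ones quoted above.

```python
# Back-of-the-envelope density maths for the argument above.
# The 42U rack is an assumption; the percentages come from the text.

rack_units = 42
systems_1u = rack_units                    # one server per U

# Blades: assume a 30 percent density gain (midpoint of 25-35 percent)
systems_blade = int(systems_1u * 1.30)     # ~54 physical servers

# Virtualisation: one extra "system" per 1U server is a 100 percent gain
systems_virtualised = systems_1u * 2       # 84 systems in the same rack

print(f"Rack of 1Us:            {systems_1u} systems")
print(f"Rack of blades (+30%):  {systems_blade} systems")
print(f"Rack of 1Us + VMware:   {systems_virtualised} systems")
```

However you shuffle the assumptions, the virtualised 1U rack beats the blade rack on systems per square foot.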

Beyond density, the next advantage blade vendors like to claim is management. This is the part where your blade deployment becomes one big, highly malleable pool of servers whose resources can be dynamically allocated and reallocated on the fly, often without human intervention. Is that memory SIMM in blade No. 242 acting up? Did the load on the back end of your point-of-sale application cross some magical performance threshold the day before Christmas? Thanks to proprietary on-board management hardware that can dig deep into the guts of a blade, spot the problem, and report it back to a central management console, the re-provisioning of the suffering blade's complete operating system, application, and connectivity (network and storage) configuration to a healthy blade-in-waiting (whether for failover or load balancing) can happen automatically, without so much as a beeper going off in the middle of the night.
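Stripped of the secret sauce, the control loop behind that scenario is conceptually simple. Here's a minimal, hypothetical sketch; the threshold, slot numbers and function names are all invented for illustration and reflect no vendor's actual API.

```python
# Hypothetical sketch of the autonomic failover loop described above.
# Nothing here reflects any vendor's actual product; names are invented.

from dataclasses import dataclass

@dataclass
class Blade:
    slot: int
    healthy: bool = True
    load: float = 0.0        # utilisation, 0.0-1.0

LOAD_THRESHOLD = 0.9         # the "magical performance threshold"

def reprovision(failed: Blade, spare: Blade) -> None:
    """Copy the failed blade's OS, application and connectivity
    configuration to a healthy blade-in-waiting."""
    print(f"Re-provisioning slot {failed.slot} -> spare slot {spare.slot}")

def management_loop(pool: list[Blade], spares: list[Blade]) -> None:
    # What the central console does with the on-board diagnostics it collects.
    for blade in pool:
        if (not blade.healthy or blade.load > LOAD_THRESHOLD) and spares:
            reprovision(blade, spares.pop())

# The suffering blade No. 242 from the example, plus one healthy neighbour:
pool = [Blade(slot=242, healthy=False), Blade(slot=243, load=0.4)]
management_loop(pool, spares=[Blade(slot=300)])
```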

Blade vendors will tell you that this sort of autonomic response -- the kind of always-on dependability generally associated with your water or gas company -- is utility computing, as opposed to the pay-more-as-you-need-more business model that vendors like HP characterise as utility computing.

The human factor
In quantifying a blade offering's management prowess, vendors like to toss around the much-maligned acronym FTE (full-time equivalent). FTEs -- people costs -- are apparently like the plague: the fewer of them your company has, the better (if you just thought "offshoring", you're not alone). It's the American way to do more with less.

Although I haven't seen it yet, I suppose a standard productivity metric could be stated on the side of software boxes -- something like 4FTE or 5FTE. "This software will allow one person to manage the same number of servers that it normally takes 10 FTEs to manage." Thus, a rating of 10FTE. The point is that in addition to the self-managing message, blade vendors also tout productivity: "If you would need to provision 500 servers by hand, we can do it with the click of one button."
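If such a rating ever did appear on the side of a box, the arithmetic behind it would be trivial. A hypothetical illustration, with invented figures:

```python
# Hypothetical FTE rating: servers one admin can manage with the tool,
# divided by servers one admin can manage by hand. Figures invented.

servers_per_admin_by_hand = 50
servers_per_admin_with_tool = 500

fte_rating = servers_per_admin_with_tool / servers_per_admin_by_hand
print(f"{fte_rating:.0f}FTE")   # -> 10FTE, per the example above
```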

For close to a year, RLX's Simon Eastwick has been telling me that once I give his company's Control Tower management product a whirl, I'll be convinced that blades are the way to go. But in the same breath, Eastwick and his contemporaries at other blade vendors will also tell you that their management products are multiplatform: not only can they manage their own blades, they can manage servers from other vendors as well. At first blush, it seems as though the vendors are engaging in something of an FTE competition, and that the key to the productivity throne lies in the supposedly proprietary connection between the on-board diagnostics found in each server and the corresponding management console that has the secret sauce for decoding those diagnostics. At second blush, by telling us that they can manage other servers as well (the point being that you can switch to a new server vendor without paying a management penalty), the message is that maybe the on-board stuff isn't so proprietary or unique after all.

To dispel the mystery, I checked with Altiris, which claims that its products can do pretty much anything that proprietary management offerings like IBM's Director, HP's Insight Manager or RLX's Control Tower can do. That includes functions introduced through helper applications, such as a member of the Tivoli family of management products in the case of IBM's Director or, ironically, the Altiris solutions that HP bundles with Insight Manager. In this respect, Altiris is the Switzerland of systems management and provisioning.

Much the same way that blade vendors say, "Our management products can manage our blades or those of other vendors," Altiris thrives on heterogeneity. The company's message is that if you're managing your systems with Altiris' tools and you grow dissatisfied with your current server provider, you should be able to switch vendors without a problem. "Altiris can simultaneously support Dell, Fujitsu Siemens, HP, IBM, and most white boxes," said Dwain Kinghorn, chief technology officer at Altiris. "Our current shipping release supports the ability to rip and replace servers across all of those vendors' implementations, and not just for blades. We do it for blades, 1Us, towers, dual-processor systems, quads, you name it. Workstations too. All the systems need to do is have support for the Pre-Boot Execution Environment (PXE) standard."

A PXE of the action
PXE is a standard that, when a server boots up, allows it and a provisioning server to find each other on the basis of the MAC address hard-coded into the server's network adapter. When the system powers on, it announces, "Here I am, this is my MAC address" to the network. The provisioning server, which is listening for those announcements, looks the address up in its database to see if it has plans for the new system. If it does, the new system downloads enough code not only to boot into either DOS (a prerequisite to loading Windows) or Linux, but also to retrieve additional provisioning instructions (applications, security settings, data and so on) from the provisioning server. Just as important, the presence of PXE in a server has nothing to do with the server's form factor. It's found in most modern servers, whether they're blades, 1Us, towers, or anything else. In other words, all of this fancy provisioning capability is not strictly the domain of blades. It can happen on servers of any form factor, as well it should.
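Conceptually, the provisioning server's side of that exchange boils down to a MAC-address lookup. The sketch below is a simplification, not real PXE (which rides on DHCP and TFTP); the MAC address, image names and database contents are all invented.

```python
# Simplified model of the PXE provisioning flow described above.
# Real PXE uses DHCP and TFTP; this only models the MAC-address
# lookup and the resulting boot plan. All data here is invented.

PROVISIONING_DB = {
    "00:0c:29:4f:8e:35": {                    # hypothetical MAC address
        "boot_image": "linux-installer",      # or DOS, en route to Windows
        "follow_on": ["applications", "security settings", "data"],
    },
}

def handle_announcement(mac):
    """A system powers on and announces 'Here I am, this is my MAC
    address'. Look it up and return its provisioning plan, if any."""
    plan = PROVISIONING_DB.get(mac)
    if plan is None:
        print(f"No plans for {mac}; leaving it alone.")
    else:
        print(f"Booting {mac} with {plan['boot_image']}, "
              f"then fetching: {', '.join(plan['follow_on'])}")
    return plan

handle_announcement("00:0c:29:4f:8e:35")
```

Notice that nothing in that flow knows or cares what shape of box the MAC address belongs to.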

Provisioning alone, however, isn't all there is to server management. Once servers are up and running, their operating systems and software have to be patched and their performance has to be monitored. Altiris can handle patch management, but, presumably, that last bit about monitoring is where the proprietary management software and firmware come into play. Altiris' Kinghorn begs to differ, though. "With very few exceptions, that same information is available through our console," said Kinghorn. "We can mine that data from HP systems, Dell systems, IBM systems -- it doesn't matter. Many of the specifications for accessing that information are public, and where they're not, we've licensed them." As with provisioning, Altiris says that it can reach into servers for performance management and diagnostics regardless of whether those servers are blades, towers, or 1Us. "The bottom line," said Kinghorn, "is that we've normalised the [provisioning and management] experience regardless of the form factor. We don't care if it's a blade versus a traditional server, or even if it's a desktop."

Taken in combination with the truth about density, the revelation that third-party software can handle not only the all-important task of provisioning, with nothing more than PXE present in the server, but also the follow-on management, is what makes it worthwhile to take another look at the value proposition of blades, and to whom they really offer that value.

ZDNet readers brought this point home to me in response to my last column, in which I compared blades (with management software) to 1U servers (also with management software) from the same vendor to show that 1Us might still be the better value proposition. They said I shouldn't have compared IBM's blades with IBM's 1Us, for which a premium must still be paid. Instead, to really take the emperor's clothes off, I should have compared blades with low-cost 1Us from white-box vendors, Taiwanese manufacturers or a low-cost provider like Dell.

Finding a blade/management combination from IBM, HP or RLX whose TCO compares favourably with low-cost 1Us paired with third-party management solutions like Altiris' is even more difficult. It should be noted that, with white-box or Taiwanese vendors, you may not get the sophisticated on-board diagnostics that come from companies like IBM and HP. But sufficient diagnostics do exist for those offerings, and where they don't, they will creep in over time, because Intel is keen on making sure that servers built on its silicon are as manageable as they are fast and reliable.

So, to whom do the benefits of blades really accrue? Perhaps the vendors. If you're HP or IBM or any other server vendor and you see the way 1Us are becoming increasingly commoditised (especially with the help of Intel), you're also watching the margins on those servers shrink, if not disappear altogether.

Blades that lock in customers with proprietary enclosures and management modules could be a much more reliable source of profits than 1Us. Licensing costs for the management software can turn into some nice annuities, which brings up the question of whether the blades are actually the razors and the software is the blades. All you'd need to do is convince buyers that the improvements in density are significant and that the management and provisioning can't be had with other server form factors -- then you could probably charge extra for a blade solution, thereby commanding a margin-healthy premium.

Should blades command the premium they do? Do hard-to-quantify benefits such as hot-swapping and cable-free designs make a recognisable difference to your bottom line? Are additional diagnostics, like the predictive failure analysis found in high-brow offerings such as IBM's xSeries servers (not just the blades, by the way), worth the additional cost? These are questions that should be considered in making a buying decision.

In the end, if you still find blades as compelling as their manufacturers want you to, and you're in the throes of making a business decision, you should check into what Dell is developing on the blade front. In November, the company is expected to announce a blade offering that promises a 50 percent improvement in density at a 25 percent savings (as opposed to a premium) over the cost of its similarly configured 1U boxes. Given that its 1Us already represent a savings to many buyers (everything is negotiable), another 25 percent savings on top of that would simply shatter the price points of other blade offerings, whose manufacturers are charging a premium rather than creating a savings.
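The arithmetic behind that promise is worth spelling out. The dollar baseline and the size of Dell's existing 1U discount below are both invented; only the blade-versus-1U 25 percent savings comes from Dell's promise.

```python
# Illustrative pricing maths for Dell's promised blade offering.
# The $3,000 baseline and the 25% 1U discount are invented; the 25%
# blade-below-1U savings is the figure from the column.

competitor_1u = 3000                   # hypothetical 1U list price
dell_1u = competitor_1u * 0.75         # assume Dell 1Us already sell below that
dell_blade = dell_1u * 0.75            # promised 25% below Dell's own 1Us

print(f"Competitor 1U: ${competitor_1u:,.0f}")
print(f"Dell 1U:       ${dell_1u:,.0f}")
print(f"Dell blade:    ${dell_blade:,.0f} "
      f"(~{dell_blade / competitor_1u:.0%} of the competitor's 1U price)")
```

If the numbers land anywhere near that, the premium pricing of rival blade offerings becomes very hard to defend.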
