
Can blade servers solve your server room problems?

Blade servers have the potential to solve many of the most challenging issues that administrators face in planning and managing server rooms. Could they be what you've been wishing for?
Written by Scott Lowe, Contributor
Blade servers represent one of the newest concepts in server design, and they have the potential to solve many of the most challenging issues that administrators face in planning and managing server rooms. In previous articles, I have discussed the benefits of rack dense servers—particularly the conservation of space in a rack, which allows for a great number of servers to be placed in one rack and managed centrally.

There is, however, a limit to the number of processors that can be installed using this method. With the explosion in need for more and faster processors, some manufacturers have taken the rack dense concept a step further with the introduction of the blade server.

As we’ll see in a moment, blade servers can address a number of common server room design hurdles, including:

  • Managing adequate power for the servers
  • Cooling and maintaining reasonable humidity levels in the server room
  • Dealing with an overabundance of cables in a rack or cabinet (power cables, network cables, KVM cables)
  • The ever-expanding need for more and faster servers to run software

What exactly is a blade server?
In a nutshell, a blade server is one component of an overall system that will allow not just dozens but hundreds of servers to fit into the space that is usually occupied by 42 1U high rack servers. Yes, I said hundreds.

Of course, the way that these systems work will vary by manufacturer, and there are likely to be a lot of proprietary technologies released at the beginning.

Among the blade servers I have done preliminary research on are the RLX System 324 and Egenera's offering, with designs from Compaq, Dell, HP, and IBM also on the way.

In most cases, there is a main chassis with power connections and a number of slots. In the case of the RLX System 324, each 3U high chassis has two power connections and up to four RJ-21 connectors on the back, and each chassis can accommodate 24 servers running a Crusoe processor at 633 MHz. In a 42U cabinet, which holds 14 of these chassis, RLX can provide 336 independent servers with 42 network connections (again, using RJ-21 telco cables) and 28 power connections. For 336 servers, that is amazingly few cables.
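To make that density math concrete, here is a quick back-of-the-envelope calculation in Python. The chassis figures (3U height, 24 blades, two power feeds) come from the description above; the three network uplinks per chassis is an assumption inferred from the stated total of 42 connections for a full cabinet, not a vendor specification.

```python
# Back-of-the-envelope blade density math (chassis figures from the
# article; uplinks-per-chassis is inferred, not a vendor spec).
CABINET_U = 42           # usable rack units in a standard cabinet
CHASSIS_U = 3            # height of one blade chassis
BLADES_PER_CHASSIS = 24  # server blades per chassis
POWER_PER_CHASSIS = 2    # power connections per chassis
UPLINKS_PER_CHASSIS = 3  # RJ-21 telco uplinks actually cabled (assumed)

chassis_per_cabinet = CABINET_U // CHASSIS_U                 # 14
servers = chassis_per_cabinet * BLADES_PER_CHASSIS           # 336
power_cables = chassis_per_cabinet * POWER_PER_CHASSIS       # 28
network_cables = chassis_per_cabinet * UPLINKS_PER_CHASSIS   # 42

print(f"{servers} servers, {power_cables} power cables, "
      f"{network_cables} network cables per cabinet")
```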

Egenera, another new name in the server market, has come up with a hyper dense solution as well. Their solution provides up to 96 processors in a single cabinet. With the Egenera solution, as few as six cables can be used to provide power and network access.

Compaq, Dell, HP, and IBM are also working on their own designs. Compaq’s QuickBlade hyper dense server is due to appear by the end of the year, while Dell’s solution is expected at some point in 2002. HP is working with Intel on using the Itanium processor in the design of their hyper dense solution, and IBM has licensed technology from a third-party provider to compete in this market.

How do these servers solve our problems?
First and foremost, these new designs take the problem of server room crowding down a few notches. If a systems engineer can pack more than 300 well-powered servers into a 42U rack rather than only 42, a number of things happen:

  • Power requirements are drastically reduced.
    Each 1U high server has at least one, and often two, power supplies, and each supply needs to be connected to a separate circuit in order to maintain true redundancy. If 42 servers are installed in a 42U cabinet, that generally requires 84 cables just for power, along with a large number of dedicated circuits to supply it all. If that same engineer were to install a full cabinet of 14 RLX chassis and maximize the configuration, he or she would need only 28 power connections and far fewer circuits, resulting in a significant cost savings.
  • Cooling requirements are reduced.
    336 servers packed into 42U, or 336 servers organized in eight stand-alone cabinets? It's obvious that the former solution will result in significantly less heat generation and thus a cost savings on cooling the environment.
  • Space is conserved.
    336 servers in one 42U cabinet, or 336 servers in eight 42U cabinets: that math is very simple! There will be a significant cost savings on space, especially if you have to pay per rack at a hosting facility.
  • Cabling is much easier.
    Installing 42 1U high servers with power, KVM, and public, private, and management network cables is no small feat; 336 cables is a lot for any rack or cabinet. By using blade servers with RJ-21 telco connections for an entire chassis, which consolidates power and management, the cabling requirements for a cabinet are drastically reduced (see the comparison sketch after this list).
  • Adding processing power is simpler.
    Rather than having to install a completely new 1U high server from scratch, run the appropriate cables, and install an OS, adding processing power can be as simple as sliding a new blade into a chassis, loading the OS, and going home for the day.
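As a rough illustration of the cabling and space points above, here is a small Python sketch that tallies a traditional 1U layout against blade cabinets for the same 336 servers. The eight-cables-per-1U-server figure is implied by the 336 cables cited for a 42-server cabinet (the exact breakdown is my assumption); the blade-side figures follow the RLX example earlier in the article. Treat the totals as illustrative rather than vendor numbers.

```python
# Rough cabling and space comparison for 336 servers. Per-server cable
# count for the 1U case is an assumption implied by the article's
# "336 cables" for 42 servers; blade figures follow the RLX example.
SERVERS = 336

# Traditional 1U servers: 42 per cabinet, roughly 8 cables each
cabinets_1u = SERVERS // 42                # 8 cabinets
cables_1u = SERVERS * 8                    # 2,688 cables in total

# Blade layout: 14 chassis per cabinet, 2 power feeds + 3 RJ-21 uplinks each
cabinets_blade = 1
cables_blade = 14 * (2 + 3)                # 70 cables in total

print(f"1U layout:    {cabinets_1u} cabinets, {cables_1u} cables")
print(f"Blade layout: {cabinets_blade} cabinet,  {cables_blade} cables")
```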

I am positive that there will be some kinks to be worked out. But ultimately, if the price point is right, these devices will take the industry by storm—especially ISPs and other organizations that require a large number of servers.
