There is, however, a limit to the number of processors that can be installed using this method. With demand for more and faster processors exploding, some manufacturers have taken the rack-dense concept a step further with the introduction of the blade server.
As we’ll see in a moment, blade servers can address a number of common server room design hurdles.
What exactly is a blade server?
In a nutshell, a blade server is one component of an overall system that allows not just dozens but hundreds of servers to fit into the space usually occupied by 42 1U-high rack servers. Yes, I said hundreds.
Of course, the way that these systems work will vary by manufacturer, and there are likely to be a lot of proprietary technologies released at the beginning.
Among the blade servers I have done preliminary research on are the RLX System 324, Egenera’s offering, and upcoming designs from Compaq, Dell, HP, and IBM.
In most cases, there is a main chassis with power connections and a number of slots. In the case of the RLX System 324, each 3U-high chassis has two power connections and up to four RJ-21 connectors on the back. Each chassis can accommodate 24 servers, each running a Transmeta Crusoe processor at 633 MHz. In a 42U cabinet, RLX can provide 336 independent servers with 42 network connections (again, using RJ-21 telco cables) and 28 power connections. For 336 servers, that is amazingly few cables.
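The arithmetic behind those cabinet-level figures is easy to check. The sketch below uses the per-chassis numbers quoted above; the three-network-cables-per-chassis figure is my own inference from the stated totals (42 cables across 14 chassis), since RLX quotes "up to four" connectors per chassis.

```python
# Back-of-the-envelope rack density for the RLX System 324,
# using the per-chassis figures quoted in the article.
RACK_UNITS = 42          # standard full-height cabinet
CHASSIS_HEIGHT_U = 3     # each RLX chassis is 3U high
SERVERS_PER_CHASSIS = 24
POWER_PER_CHASSIS = 2    # power connections per chassis
NETWORK_PER_CHASSIS = 3  # RJ-21 cables per chassis (inferred from the totals)

chassis_per_rack = RACK_UNITS // CHASSIS_HEIGHT_U        # 14 chassis
servers = chassis_per_rack * SERVERS_PER_CHASSIS         # 336 servers
network_cables = chassis_per_rack * NETWORK_PER_CHASSIS  # 42 network cables
power_cables = chassis_per_rack * POWER_PER_CHASSIS      # 28 power cables

print(f"{servers} servers, {network_cables} network + {power_cables} power cables")
```

Running this reproduces the article's totals: 336 servers served by 70 cables in all.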
Egenera, another new name in the server market, has come up with a hyper-dense solution as well. Their solution provides up to 96 processors in a single cabinet. With the Egenera solution, as few as six cables can be used to provide power and network access.
Compaq, Dell, HP, and IBM are also working on their own designs. Compaq’s QuickBlade hyper-dense server is due to appear by the end of the year, while Dell’s solution is expected at some point in 2002. HP is working with Intel on using the Itanium processor in the design of their hyper-dense solution, and IBM has licensed technology from a third-party provider to compete in this market.
How do these servers solve our problems?
First and foremost, these new designs take the problem of server room crowding down a few notches. If a systems engineer can pack more than 300 well-powered servers into a 42U rack rather than only 42, several things follow.
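The cabling math alone illustrates the point. A quick comparison of cables per server, assuming a typical one power cord and one network cable for each conventional 1U server (my assumption, not a figure from any vendor), against the RLX totals quoted earlier:

```python
# Cables-per-server comparison: conventional 1U rack servers vs. blades.
# Assumes each 1U server needs one power cord and one network cable.
rack_1u_servers = 42
rack_1u_cables = rack_1u_servers * 2   # 84 cables for a full 42U rack

blade_servers = 336
blade_cables = 42 + 28                 # network + power, per the RLX figures

print(f"1U rack: {rack_1u_cables / rack_1u_servers:.2f} cables per server")
print(f"Blades:  {blade_cables / blade_servers:.2f} cables per server")
```

Under these assumptions, a full blade cabinet carries eight times the servers of a conventional rack while using fewer total cables.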
I am positive that there will be some kinks to work out. But ultimately, if the price point is right, these devices will take the industry by storm, especially among ISPs and other organizations that require a large number of servers.