
Construction firm extols virtues of Dell's blades

Written by David Berlind

Over the last year, one of ZDNet's more loyal readers has been keeping in touch with me, letting me know what he thinks of the various products, services, and companies we cover on ZDNet.  He asked not to be identified other than by his title -- general manager of IT at one of the world's largest construction and engineering firms.  One other thing he said I could share with ZDNet's readers is his success story with Dell's blades.  Demonstrative of how the tier-one vendors are steeply discounting behind closed doors in order to close deals, his story begins with an offer he couldn't refuse:

In March this year, Dell/EMC came to our business with an offer we couldn't refuse for a comprehensive DR strategy utilizing clustering, blades, and two EMC CX500 SANs connected by Cisco 9216i FCIP Storage Routers over a 100Mbps connection between two separate sites.  I looked at the HP blades and the IBM blades (in production environments) before deciding on the Dell solution.  Basically, Dell is now cheaper per blade than a 1U server, and Dell has said that its chassis will survive two generations of blades and that both generations will be able to coexist inside the chassis.  This should easily give us three years of life on these products.  I didn't think the current HP solution was up to scratch, and IBM's solution is good overall, but it is seriously expensive.  Dell sold us five blades and gave us the chassis, so we bought two batches of five and got two chassis.  They are just unbeatable on any price/performance calculation I can come up with.
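His per-blade math is easy to sanity-check with a quick back-of-the-envelope sketch.  To be clear, every dollar figure below is a placeholder I've invented for illustration -- he didn't share his actual pricing:

```python
# Back-of-the-envelope price/performance check.
# Every dollar figure below is a hypothetical placeholder, not a real Dell/HP/IBM quote.

BLADE_PRICE = 3_500    # assumed per-blade street price
CHASSIS_PRICE = 0      # per the GM, Dell threw the chassis in for free
ONE_U_PRICE = 4_200    # assumed price of a comparable 1U server

blades = 10            # two batches of five, two chassis, per his story
blade_total = blades * BLADE_PRICE + 2 * CHASSIS_PRICE
one_u_total = blades * ONE_U_PRICE

print(f"Blades:          ${blade_total:,} total (${blade_total / blades:,.0f} per server)")
print(f"1U rack servers: ${one_u_total:,} total (${one_u_total / blades:,.0f} per server)")
```

With any numbers in that neighborhood, the chassis Dell threw in for free is what tips the per-server math.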

Our "GM" went onto describe the setup....

 

We're implementing now and are having some issues with the EMC/Legato software because of the complexity of what we're trying to achieve.  With the two chassis (five blades each), we are doing cross-chassis clustering. So for example Chassis 1 Blade 1 has Microsoft Exchange Node 1 running on it. Chassis 2 Blade 1 supports Exchange Node 2.  Then, at our other site (which is in the same VLAN), we have a third Exchange node running on an older Compaq 1U server.  Each site has an EMC CX500 Fibre Channel-based storage system.  So in essence we could lose one or two blades, one or two chassis and one CX500 and still have no service interruption for our end users.  We're doing File and Print, Exchange and SQL Server in this configuration.  Basically the concept is to eliminate the possibility of a disaster by increasing high availability.  Both of our sites are independently accessible via the internet through Citrix, so if we lost our primary building and staff had to work from somewhere else or our senior management was overseas, they could do so readily from any internet connected computer.  In light of what happened this week in London, this configuration is even more useful because if this happened where we are and the public transportation was shutdown, nearly 100 percent of our senior staff is on broadband at home.  So they could just telecommute until things got back to normal.
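That resilience claim is easy to check mechanically.  Here's a minimal sketch in Python that models the layout he describes with made-up node names (his real cluster configuration obviously differs) and confirms that losing any single enclosure still leaves every service with a surviving node:

```python
# Hypothetical model of the cross-chassis layout described above.
# Node placement and names are illustrative, not the firm's actual configuration.
nodes = [
    ("chassis1", "exchange"),  ("chassis2", "exchange"),  ("compaq-1U", "exchange"),
    ("chassis1", "sql"),       ("chassis2", "sql"),
    ("chassis1", "fileprint"), ("chassis2", "fileprint"),
]
services = {svc for _, svc in nodes}
enclosures = {enc for enc, _ in nodes}

# Fail each enclosure in turn; every service should keep at least one live node.
for failed in sorted(enclosures):
    surviving = [svc for enc, svc in nodes if enc != failed]
    ok = all(surviving.count(svc) >= 1 for svc in services)
    print(f"lose {failed}: {'no service interruption' if ok else 'OUTAGE'}")
```

Run it and every single-enclosure failure comes back clean, which is exactly the property he's describing.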

...and then went on to describe how happy he is with the setup...

The Dell blades are tremendous.  We eliminated all of the clutter associated with keyboard/video/mouse (KVM) switching in rack environments.  Now the chassis provides a single Ethernet cable that connects to the network, and the keyboard, video, and mouse tray picks that up via IP.  We eliminated about thirty-five sets of cables from the back of the unit, making the whole thing cooler and ridiculously easier to maintain physically.  We're using Cisco 9216i Fibre Channel switches to connect to the CX500, but they also do iSCSI and FCIP (Fibre Channel over IP).  That allows our Storage Area Network's Fibre Channel fabric to extend over an IP network to another site.  It's a bridging solution that turns our two physically separate networks into one virtual LAN and one Fibre Channel SAN.  It's terrific stuff.

To the best of my knowledge, though, Cisco's 9216 Fibre Channel switches don't fit into the Dell chassis the way Brocade's do.  I asked how, then, the connection was being made.

Currently, Cisco doesn't offer a version of the 9216 that fits into the Dell chassis the way Brocade does.  We spoke to Dell about making the 9216 into a module for the 1855 chassis, and they said it was an interesting idea.  Right now you can only get Brocade Fibre Channel switches for the chassis, which we weren't interested in.  With the FCIP and iSCSI support, the 9216s gave us the flexibility we were looking for.  Since there's no 9216 that fits in the 1855 chassis, we took advantage of the Host Bus Adapter daughterboard option on the blades.  That daughterboard connects into the chassis, and ultimately you connect out to your SAN via a concentrator.  We could have replaced the concentrator with a Brocade Fibre Channel switch, but then the switch is shared across the entire chassis backplane and your bandwidth is limited.
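The bandwidth tradeoff he's describing boils down to simple arithmetic.  Here's a hedged sketch; the 2Gbps line rate and the uplink count are my assumptions about typical parts of that era, not Dell's or Cisco's published specs:

```python
# Rough comparison: embedded chassis FC switch (shared uplinks) vs. per-blade HBA paths.
# All figures are assumptions for illustration; 2Gbps was the common Fibre Channel rate then.

BLADES = 10           # a fully loaded 1855 chassis
FC_GBPS = 2.0         # assumed Fibre Channel line rate

# Embedded switch module: all blades contend for a handful of external uplinks.
SHARED_UPLINKS = 2    # assumed uplink count on the switch module
per_blade_shared = SHARED_UPLINKS * FC_GBPS / BLADES

# HBA daughterboard: each blade gets its own path out through the concentrator.
per_blade_dedicated = FC_GBPS

print(f"Shared switch module: ~{per_blade_shared:.1f} Gbps per blade at full load")
print(f"Per-blade HBA path:   ~{per_blade_dedicated:.1f} Gbps per blade")
```

Swap in the real uplink counts and the shape of the tradeoff stays the same: dedicated per-blade paths avoid contention on the chassis switch's external ports.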

Given Cisco's presence in the Fibre Channel market, the absence of a 9216 module for the 1855 seems like a glaring hole in Dell's offerings.  Last week, Tim Golden, director of Dell's PowerEdge Server Product Group, told me that some Cisco switches were around the corner, and I assumed they'd be Gigabit Ethernet switches.  But now I'm thinking it's more important for Dell to bring Cisco into the chassis with a storage offering than with a network offering.  So, while a Cisco Ethernet switch may be in the works for the 1855, my bet is on something like the 9216.

Finally, in the spirit of disclosure: vendors often come to me with their star clients in hopes that I'll write their case studies.  For the record, I automatically refuse all such offers and only tell a story when it comes to me from a ZDNet reader with whom I have a longstanding relationship.  In this case, the reader shared some pictures of his setup with me, including shots of his company's 1U servers and EVA SAN (both from HP).  But, unfortunately, I couldn't get Photoshop to make them suitable for publishing.
