Construction firm extols virtues of Dell's blades

Summary: Over the last year, one of ZDNet's more loyal readers has been keeping in touch with me, letting me know what he thinks of the various products, services, and companies we cover on ZDNet.  He asked not to be identified other than by his title -- general manager of IT at one of the world's largest construction and engineering firms.

Over the last year, one of ZDNet's more loyal readers has been keeping in touch with me, letting me know what he thinks of the various products, services, and companies we cover on ZDNet.  He asked not to be identified other than by his title -- general manager of IT at one of the world's largest construction and engineering firms.  One other thing he said I could share with ZDNet's readers is his success story with Dell's blades.  His story, which demonstrates how steeply the tier one vendors are discounting behind closed doors in order to close deals, begins with an offer that he couldn't refuse:

In March this year, Dell/EMC came to our business with an offer we couldn't refuse for a comprehensive DR strategy utilizing clustering, blades and two EMC CX500 SANs connected by Cisco 9216i FCIP Storage Routers on a 100MB connection between two separate sites.  I looked at the HP blades and the IBM blades (in production environments) before deciding on the Dell solution.  Basically, Dell is now cheaper per blade than a 1U server and Dell has said that its chassis will survive two generations of blades and both generations will be able to co-exist inside the chassis.  This should give us easily three years of life on these products.  I don't think the current HP solution was up to scratch and IBM's solution is good overall, but it is seriously expensive.  Dell sold us five blades and gave us the chassis, so we bought two loads of five and got two chassis.  They are just unbeatable on any price/performance calculation I can come up with.
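
For anyone who wants to sanity-check that kind of claim, the arithmetic is straightforward once a free (or heavily discounted) chassis gets amortized across the blades it houses. The short Python sketch below uses entirely hypothetical prices -- none of these are the reader's actual numbers -- and is only meant to show the shape of the comparison.

# Back-of-the-envelope price/performance comparison.
# NOTE: every figure below is a hypothetical placeholder, not actual Dell pricing.

BLADE_PRICE = 3500        # assumed price per blade server
CHASSIS_PRICE = 0         # the chassis was thrown in free on this particular deal
BLADES_PER_CHASSIS = 5    # "two loads of five"
ONE_U_PRICE = 4200        # assumed price of a comparable 1U server

def cost_per_server(unit_price, shared_cost, units_sharing):
    """Effective per-server cost once any shared chassis cost is amortized."""
    return unit_price + shared_cost / units_sharing

effective_blade = cost_per_server(BLADE_PRICE, CHASSIS_PRICE, BLADES_PER_CHASSIS)
print(f"Effective cost per blade: ${effective_blade:,.0f}")
print(f"Comparable 1U server:     ${ONE_U_PRICE:,.0f}")
print(f"Savings per server:       ${ONE_U_PRICE - effective_blade:,.0f}")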

Our "GM" went onto describe the setup....

We're implementing now and are having some issues with the EMC/Legato software because of the complexity of what we're trying to achieve.  With the two chassis (five blades each), we are doing cross-chassis clustering.  So, for example, Chassis 1 Blade 1 has Microsoft Exchange Node 1 running on it, and Chassis 2 Blade 1 supports Exchange Node 2.  Then, at our other site (which is in the same VLAN), we have a third Exchange node running on an older Compaq 1U server.  Each site has an EMC CX500 Fibre Channel-based storage system.  So in essence we could lose one or two blades, one or two chassis and one CX500 and still have no service interruption for our end users.  We're doing File and Print, Exchange and SQL Server in this configuration.

Basically, the concept is to eliminate the possibility of a disaster by increasing high availability.  Both of our sites are independently accessible via the Internet through Citrix, so if we lost our primary building and staff had to work from somewhere else, or our senior management was overseas, they could do so readily from any Internet-connected computer.  In light of what happened this week in London, this configuration is even more useful: if the same thing happened where we are and public transportation was shut down, nearly 100 percent of our senior staff is on broadband at home, so they could just telecommute until things got back to normal.
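
To make that redundancy claim concrete, here's a minimal failure-domain model in Python -- my illustration, not the reader's actual configuration. The node and domain names are assumptions; the point is that with three Exchange nodes spread across two chassis, two sites and two CX500s, no single failure (and very few double failures) takes the service down.

# Minimal failure-domain model of the cross-chassis/cross-site cluster.
# Names (chassis1, siteA, cx500-A, etc.) are illustrative assumptions, not the firm's real labels.

from itertools import combinations

# Each cluster node depends on a server, a chassis (or a standalone box), a site, and a SAN.
NODES = {
    "exchange-node1": {"chassis1", "siteA", "cx500-A"},
    "exchange-node2": {"chassis2", "siteA", "cx500-A"},
    "exchange-node3": {"compaq-1u", "siteB", "cx500-B"},   # older 1U box at the second site
}

FAILURE_DOMAINS = ["chassis1", "chassis2", "cx500-A", "cx500-B", "siteA", "siteB"]

def survives(failed):
    """True if at least one Exchange node has none of its dependencies in the failed set."""
    return any(deps.isdisjoint(failed) for deps in NODES.values())

# Check every single and double failure.
for k in (1, 2):
    for failed in combinations(FAILURE_DOMAINS, k):
        status = "OK" if survives(set(failed)) else "OUTAGE"
        print(f"lose {failed}: {status}")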

...and then went on to describe how happy he is with the setup...

The Dell blades are tremendous.  We eliminated all of the clutter associated with keyboard/video/mouse (KVM) switching in rack environments.  Now the chassis provides one single Ethernet cable that connects to the network, and the keyboard, video and mouse tray picks that up via IP.  We eliminated about thirty-five sets of cables from the back of the unit, making the whole thing cooler and ridiculously easier to maintain physically.  We're using Cisco's 9216 Fibre Channel switches to connect to the CX500.  But they also do iSCSI and FCIP (Fibre Channel over IP).  This allows our Storage Area Network's Fibre Channel network to extend over an IP network to another site.  It's a bridging solution that turns our two physically separate networks into one virtual LAN and Fibre Channel SAN.  It's terrific stuff.
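
For readers unfamiliar with FCIP, the idea is simply that Fibre Channel frames are carried as the payload of a TCP/IP connection, so two fabrics in different buildings behave like one. The toy Python sketch below is my own illustration of that layering; the frame layout and field names are made up, not Cisco's and not the FCIP spec's.

# Toy illustration of the FCIP layering idea: a Fibre Channel frame is carried
# as the payload of a TCP byte stream so two fabrics can be merged across an IP WAN.
# Conceptual sketch only; real FCIP (RFC 3821) defines its own headers and framing.

import struct

def build_fc_frame(source_id: int, dest_id: int, payload: bytes) -> bytes:
    """Assemble a stand-in 'Fibre Channel frame' (not a real FC-2 frame layout)."""
    header = struct.pack("!II", source_id, dest_id)
    return header + payload

def fcip_encapsulate(fc_frame: bytes) -> bytes:
    """Prefix the frame with a length word, the way a TCP byte stream needs framing."""
    return struct.pack("!I", len(fc_frame)) + fc_frame

# A SCSI-ish write from a blade's HBA at site A, headed for the CX500 at site B.
frame = build_fc_frame(source_id=0x010203, dest_id=0x040506, payload=b"WRITE LUN5 BLOCK42")
wire_bytes = fcip_encapsulate(frame)
print(f"{len(frame)} bytes of FC frame become {len(wire_bytes)} bytes on the IP link")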

To the best of my knowledge, though, Cisco's 9216 Fibre Channel switches don't fit into the Dell chassis the way Brocade's do.  I asked how, then, the connection was being made.

Currently, Cisco doesn't offer a version of the 9216 that fits into the Dell chassis the way that Brocade does.  We spoke to Dell about making the 9216 into a module for the 1855 chassis and they said it was an interesting idea.  Currently you can only get Brocade fibre switches for the chassis, which we weren't interested in.  With the FCIP and iSCSI support, the 9216s gave us the flexibility we were looking for.  Since there's no 9216 that fits in the 1855 chassis, we took advantage of the Host Bus Adapter daughterboard option on the blades.  That daughterboard connects into the chassis and ultimately you connect out to your SAN via a concentrator.  We could have replaced the concentrator with a Brocade Fibre Channel switch, but then the switch is shared across the entire chassis backplane and your bandwidth is limited.
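
His objection to the embedded switch option boils down to bandwidth arithmetic: a switch module's uplinks are shared by every blade in the chassis, while per-blade HBA connections out to an external switch are not. The quick sketch below uses assumed numbers (2Gbps Fibre Channel links, a hypothetical two-port uplink) purely to illustrate the difference, not his measured throughput.

# Rough bandwidth arithmetic behind the shared-switch objection.
# Link speeds and uplink counts are assumptions for illustration only.

FC_LINK_GBPS = 2.0          # assumed 2 Gbit/s Fibre Channel links
BLADES_IN_CHASSIS = 5       # blades actually populated per chassis in this setup
SWITCH_UPLINKS = 2          # hypothetical uplink ports on an embedded switch module

# Embedded switch module: all blades contend for the module's uplinks.
shared_per_blade = (SWITCH_UPLINKS * FC_LINK_GBPS) / BLADES_IN_CHASSIS

# Pass-through HBA daughterboards: each blade keeps its own link to the external switch.
dedicated_per_blade = FC_LINK_GBPS

print(f"Shared switch module: ~{shared_per_blade:.1f} Gbit/s per blade under full load")
print(f"Per-blade HBA links:  {dedicated_per_blade:.1f} Gbit/s per blade")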

Given Cisco's presence in the Fibre Channel market, the absence of a 9216 module for the 1855 seems like a glaring hole in Dell's offerings.  Last week, Tim Golden, director of Dell's PowerEdge Server Product Group, told me that some Cisco switches were around the corner, and I assumed that they'd be Gigabit Ethernet switches.  But now, I'm thinking that it's more important for Dell to close a deal with Cisco on a storage offering than on a networking offering.  So, while a Cisco Ethernet switch may be in the works for the 1855, my bet is on something like the 9216.

Finally, in the spirit of disclosure, vendors often come to me with their star clients in hopes that I'll write their case studies.  For the record, I automatically refuse all such offers and only tell a story when it comes to me from a ZDNet reader with whom I have a longstanding relationship.  In this case, the reader shared some pictures of his setup with me, ones that included pictures of his company's 1U servers and EVA SAN (both from HP).  But, unfortunately, I couldn't get Photoshop to make them suitable for publishing.

Talkback

7 comments
  • 10 servers

    Doing nothing but email and print spooling (I did see some SQLserver - but not how much). That's 10 M$ Windoze licenses and, with Citrix, 10 Citrix licenses (plus 10 Exchange and 10 SQLserver licenses). That adds up to quite a bit of cash! Using a Linux solution eliminates ALL of those license fees - and probably can do the same amount of work using only 5 servers.

    Hardware costs are the SMALLEST costs in this scenario! Saving a few bucks by buying blades is DWARFED by the overwhelming software costs (and maintenance). This guy sounds penny-wise and pound-foolish - he's found ONE solution that works for him, and he thinks it's the BEST solution (typical CIO-type).

    So the reason he bought 2 blade "racks" is for future expansion - when the business needs more capacity for email - and the pathetically unscalable Exchange requires an entirely NEW server for another few employees . . .
    Roger Ramjet
    • You missed the whole point Ram

      DR = Disaster Recovery = redundancy = Minimum 2 IDENTICAL systems.

      Personally, I liked the article; I generally DON'T like David B., however. But I think the architecture described here is clean, efficient and cost effective - what's not to like? Your assertions, Ram, are completely off base!!!
      Aguy_z
      • I don't think so . . .

        All of this could be achieved using 1U servers or multi-processor servers and virtual server software - including DR, HA, etc. And the costs would be VERY similar BECAUSE hardware is such a small component of overall cost. Using Linux would save you more cash than the 25% you saved on hardware . . .
        Roger Ramjet
      • Yet...

        ...is this really an argument for "blades" and Dell's implementation in particular? 10 servers and 2 locations? My lab is far bigger than this production "case study".

        If the disaster wiped out the single location containing chassis 1 and 2, can his entire Exchange user base function off of Node 3, the old Compaq 1U server, in London? With a configuration this small, I'm assuming it can. For true "DR", it would make more sense to have one chassis here and the other in London, since the data is already replicated to both locations. What is the "cross-chassis clustering" really buying you if you can lose the primary data center and function from London? It may be something interesting to play with, but the added complexity/redundancy really doesn't seem necessary, IMHO.

        The market for blades really isn't for a SMB as discussed here. The market is for high density/Enterprise implementations. In that environment, Dell is probably the worst of any of the blade offerings.

        Brocade has more than half of the SAN switch market. Cisco and McData pretty evenly split the rest of the market between them, and the 'also-rans' share a small portion. If Dell follows their usual approach, they will build to the largest common denominator, and you can plead with Dell and get replies such as "that's interesting". But don't hold your breath.

        On a side note...
        I'm not making any assumptions or accusations. The way these articles have been appearing lately, I'm curious if David Berlind, or any authors here at ZDNET/CNET, have any financial interest in Dell, Intel, AMD, HP, etc. Shouldn't we have a similar disclosure for 'technology' analysts as is required for 'financial' analysts? When the commentary is in the public domain, the market impact can be significant.
        Uber Dweeb
        • Blades vs 1U. The debate goes on...

          There are 2 interesting points here.

          The first one is whether blades are the right technology for SMBs. Frankly, unless someone requires a lot of servers, 1U servers or other general-purpose servers should do the job, as they are more flexible and usually cheaper.
          But if Dell can supply blades at a price lower than 1U servers, then even for as few as 5-10 servers, Dell blades will make sense.

          The second one is about who is better: IBM-style blades or Dell? Consider the fact that when blades are deployed, customers will be looking for 100s of units. In those environments, I would always consider Dell blades. Seeing Dell as a competitor ensures that the other companies offer better discounts! Also, Dell is not "all that bad". They will provide most of the useful, relevant features. And for the odd exceptions, I can always use 1-2 units of general-purpose servers. It's always a discussion of "value for money". By being rigid, if I end up spending 20-50% more, it will mean that I have less money in the future to do other things. In fact, I might even end up compromising "up-front".
          Hermit_z
  • 10? No Wonder Dell has a hard Sell with their POS

    IBM gives me all that, 14 servers instead of 10, Cisco and Brocade switches, a much better and long term strategy, and much more reliable hardware.

    So why should I buy from Dell who loves to do the "flavor of the month" and has no business in enterprise class hardware?

    Actually, Dell has no business selling any computers.
    ITGuy04
    • No one ever got sacked for buying IBM..... Until now!

      Standardisation of the Enterprise market is a fact of life, and standardisation of blades and other technology is another. Where there is volume and excessive margins, Dell enters the market. Where there is proprietary technology and excessive margins, there is IBM. Leave standards to the kings of the business model and get on with helping customers lose control of their businesses.
      Orangeman