A funny thing happened on the way back from the Data Center

Summary: How one frequent Data Center visitor thinks his visits could be improved. Security, noise reduction and general accessibility are all part of the plan.

The last time I visited a Data Center (last week), it made me wonder if Data Center (DC) technology will ever change. And when I say "change," I mean change for the better. I don't know how often you go into a DC, but I work in one on a fairly regular basis. Sure, I do a lot of my work remotely, but some things just can't be done from my desk--namely hardware replacement, visual inspection of cables, blade server reseating, component reseating and the occasional cantankerous operating system installation. I don't mind working in the DC, but sometimes I wonder: who designed these "standard" components, and did they design them with human hands, fingers and eyes in mind?

I've pondered it for a few days now but I can't say that I've found an answer.

By standard components, I'm referring to 19-inch racks, hot aisle-cold aisle, server access, cables and cable organizers, power connections, KVM systems, lighting and just about every other physical structure. It's my observation that, if they're built for human interaction, they should be redesigned and re-engineered. Yes, just about everything needs a design overhaul. The only thing that probably doesn't need (in my opinion) a redesign is the humble floor plate. That system seems to work pretty well but I don't interact with the floor plates enough to comment on their features or fails.

So, for simplicity, I'll list each component, tell you what I think is wrong with it and then I'll give you my solution. Yeah, I complain a lot but I always offer a fix and not just a complaint.

Racks

19 inches is probably OK but can someone please design a rack so that standard architecture (1u, 2u, 4u, etc.) systems aren't so difficult to work with? For example, create connectors along the sides, running vertically through the rack, for power, network, SAN, etc. so that there isn't a mass of wires staring me in the face when I open a cabinet door. That "arm" of wires makes it very difficult to work on systems. For a fix, think blade-type connections here.

Please install some LED lighting in the cabinets so that we can see what we're doing. The lighting can work like a refrigerator--the light goes off when the doors are closed and comes on when opened. Don't install overhead lighting that will cast shadows from the top system down. Install the lighting on the front and back inside edges running vertically through the cabinet so that we can see all along the cabinet from top to bottom.
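
The control logic for that refrigerator-style lighting is trivial, which is part of my point. As a toy illustration only, here's what it could look like on a hobbyist controller; this assumes a Raspberry Pi-class board running the gpiozero library, and the pin assignments and door-switch wiring are made up.

```python
# Toy sketch: door switches turn the cabinet's vertical LED strips on and off.
# Assumes each door switch reads "pressed" while its door is shut.

from gpiozero import Button, LED
from signal import pause

front_door = Button(17)   # hypothetical GPIO pins
rear_door = Button(27)
led_strips = LED(22)      # relay driving the front and rear vertical LED strips

def update_lights():
    # Lights on if either door is open; off when both doors are shut.
    if front_door.is_pressed and rear_door.is_pressed:
        led_strips.off()
    else:
        led_strips.on()

for door in (front_door, rear_door):
    door.when_pressed = update_lights
    door.when_released = update_lights

update_lights()   # set the correct initial state
pause()           # keep the script running, reacting to door events
```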

Hot Aisle-Cold Aisle

This concept works in theory, but if you're a human who has to work in the hot aisle or the cold aisle, you know that it can be quite unpleasant on either side. I still believe that air exchange matters far more than air temperature. In other words, moving enough air volume to carry heat away from the racks is a better design than simply chilling the supply air. Racks, by design, are inefficient heat sinks; heat tends to stay in and around them. I think you could dissipate more heat by having fans blow horizontally along the rack rows rather than vertically.

To properly design the fans, you'd have to create shorter aisles so that the fans wouldn't have to blow hurricane-force winds at one end of the aisle while having little effect at the other. Basically, this design would create an air "river": cool air pulled in through the fans and forced through the aisles, pushing the warm air toward exhaust fans.
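
To put a rough number on that air "river" idea, here's a quick back-of-the-envelope sketch in Python. It uses the standard sensible-heat rule of thumb for air (CFM is roughly 3.16 times watts divided by the allowed temperature rise in degrees F); the rack count, per-rack load and temperature rise are made-up figures for illustration, not measurements from any real DC.

```python
# Rough airflow sizing for a short aisle: how much air the "river" fans would
# need to move to carry the heat away. Sensible-heat rule of thumb for air at
# roughly sea level: CFM ~= 3.16 * watts / delta_T_F.

def required_cfm(heat_load_watts: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to remove heat_load_watts of
    heat with an allowed air-temperature rise of delta_t_f degrees F."""
    return 3.16 * heat_load_watts / delta_t_f

# Hypothetical example: a short aisle of 6 racks at 5 kW each, 20 F rise.
aisle_load_w = 6 * 5000
print(f"Required airflow: {required_cfm(aisle_load_w):,.0f} CFM")  # ~4,740 CFM
```

The point isn't the exact number; it's that a short aisle with a modest load needs a few thousand CFM, which a bank of aisle-end fans could move without hurricane-force velocities.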

Server Containers

I'd like to see some easy-to-remove tops on racked systems. Try removing the top from a server with another system installed right above it; it's very difficult and takes far too long. We need quick-disconnect thumbscrews or latches: release them and the top comes off. Right now, I have to pull a system almost completely out of the rack to remove the top and perform any kind of maintenance on it. And when I pull the server toward the front of the rack, I have to disconnect the wires from the back. It would be better if there were either a quick connect/disconnect module for everything on the back, or side wiring (as suggested above), so that the system could move freely within the rack once unlatched from its position.

I also wonder whether it's possible to move all replaceable parts to a more convenient location in the hardware box. Think blades again here, where you can remove disks easily from the front of the system. Now do that with memory, solid-state disks and CPUs.

Fans

I know it's possible to make quiet fans. I've seen them. But, I haven't heard them. I'd like to not hear more of them. It's so loud in the DC that you have to yell at the person who's standing right next to you. Make quiet fans for racks and servers.

Security

I know this one sounds a little crazy, because where do you have more security than at a DC? Probably a bank, or anywhere that a lot of money changes hands. There's no money exchanged at a DC (typically), but there's still a need for heightened security.

When I badge into a DC or DC module area, I should have a PIN that identifies the work I'm about to perform with an associated ticket number and description of that work. Only those racks to which I need access should be accessible to me. Once I badge into the workspace, only those racks containing my systems of interest should unlock. This would ensure security and prevent accidental system tampering, outages and theft.
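
To make that concrete, here's a minimal sketch of what the badge-plus-PIN-plus-ticket check could look like in software. Every name in it (the ticket number, rack IDs, classes and functions) is hypothetical; this isn't the API of any existing access-control product.

```python
# Hypothetical sketch: unlock only the racks named on the technician's ticket.

from dataclasses import dataclass, field

@dataclass
class WorkTicket:
    number: str
    technician_badge: str
    pin: str
    racks: set[str]            # only these cabinets should unlock
    description: str = ""

@dataclass
class AccessController:
    tickets: dict[str, WorkTicket] = field(default_factory=dict)

    def badge_in(self, badge: str, pin: str, ticket_no: str) -> set[str]:
        """Return the racks to unlock, or an empty set if the badge, PIN and
        ticket number don't match an open ticket."""
        ticket = self.tickets.get(ticket_no)
        if ticket and ticket.technician_badge == badge and ticket.pin == pin:
            return ticket.racks
        return set()

# Example: only rack R12 unlocks for this visit; the rack next to it stays shut.
ctl = AccessController({"CHG-1042": WorkTicket("CHG-1042", "badge-778", "4491",
                                               {"R12"}, "Reseat blade 3")})
print(ctl.badge_in("badge-778", "4491", "CHG-1042"))  # {'R12'}
```

The same record is what would drive the "light up the work area" notification and the manager's audit trail described below.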

Similar system names, inaccurate rack locations and general carelessness are the primary causes of accidental outages from physical server handling. With this security measure in place, you'd only have access to the systems designated on your ticket. If the ticket is inaccurate and sends you to the wrong location, that's when you must verify the server name and rack location visually.

This plan would also inform DC managers as to who is doing what, where and why in their DC. As it is now, DC managers have no idea which systems are being touched, why, or whether there's authorization to do so. Once a technician badges in, his maintenance areas would light up to notify security and managers where work is being performed. This would be a significant step toward a record of zero accidental outages.

Hardware Check Out

I think each DC should keep standard computing hardware on hand for technicians to use while in the DC. The technician would check out a mobile phone, a laptop, a crash cart, a toolkit or whatever else is needed for a physical maintenance session. No outside computing devices should be allowed onto a DC floor; it's a huge security risk. Outside hardware is a great way to introduce viruses, leak information through video and photos, or damage equipment with tools of the wrong size or rating.
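
The check-out side of this is mostly bookkeeping. Here's a small, hypothetical sketch of what the ledger could look like; the item IDs and fields are illustrative assumptions, not an existing inventory system.

```python
# Hypothetical check-out ledger for DC-owned gear (laptops, crash carts, tools).

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class CheckoutRecord:
    item_id: str               # e.g. "crash-cart-02" (made-up identifier)
    technician: str
    ticket_number: str
    checked_out: datetime
    checked_in: Optional[datetime] = None

class HardwareLedger:
    def __init__(self) -> None:
        self.records: list[CheckoutRecord] = []

    def check_out(self, item_id: str, technician: str, ticket: str) -> CheckoutRecord:
        rec = CheckoutRecord(item_id, technician, ticket, datetime.now())
        self.records.append(rec)
        return rec

    def check_in(self, item_id: str) -> None:
        # Close the oldest open record for this item.
        for rec in self.records:
            if rec.item_id == item_id and rec.checked_in is None:
                rec.checked_in = datetime.now()
                return

    def outstanding(self) -> list[CheckoutRecord]:
        """Gear that left the cage but hasn't come back yet."""
        return [r for r in self.records if r.checked_in is None]
```

Tie each record to the same ticket number used for badge access, and the DC manager can see at a glance what hardware is on the floor and why.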

I'm not trying to complicate things for my fellow DC workers--just the opposite, in fact. It's your best interests that I have at heart. After all, if you walk onto a DC floor to repair a single system, do the work and walk out, you certainly don't want to hear later that a system in the rack next to the one you were working in took an outage and that they're looking at you as the root cause. In my world, that would never happen.

Do you have any ideas for Data Center improvements that you want to share? Talk back and let me know.

Topics: CXO, Data Centers, Hardware, Storage

About

Kenneth 'Ken' Hess is a full-time Windows and Linux system administrator with 20 years of experience with Mac, Linux, UNIX, and Windows systems in large multi-data center environments.

Talkback

  • What is the age of your data center and what components are you using

    There are several commercial racks that provide cable runs on the side. They are either deeper or wider than the standard 19" rack (or both).

    Quiet fans do exist.

    Some of the newer data centers have compartmentalized HVAC as well as servers. If you are the AC guy, you walk down a separate, secure corridor for AC systems. Likewise, there is a separate, secure corridor outside of the main data center room.
    Your Non Advocate
    • It's all new stuff

      The cables DO run down the sides, but there is still a mass of wiring on that arm at the rear of the server. It's in the way.
      khess
  • Engineers aren't really known for designing things for humans all the time

    Engineers aren't really known for designing things for humans all the time. They get lost in hypothetical efficiencies rather than the fact that these things have to be handled by people.

    This also appears in software design, and even corporate policies. ZDNet is IMO very guilty of pushing forward concepts that ignore people for the sake of some theoretical efficiency increase.
    CobraA1
  • DC ideas

    Cabling is in the hands of the client in most public DCs; as others have pointed out, racks usually do have some cable management installed. I would highly suggest looking up some old-fashioned cable lacing techniques to really get things organized within your own cabinet.
    For hot aisle containment, honestly, I don't think a better solution has been created yet. Most servers are designed to take cold air in from the front and exhaust it to the rear, so the logical step is to create a system that works with that. Huge fans would probably be way too noisy, and submersion in cooling liquid is way too costly at the current time. The in-cabinet chilling systems seem pretty neat, but a lot of clients are still wary of having liquid-cooled cabinets.
    For noise reduction, I've been thinking about having noise-cancelling headsets with standard 3.5mm phone jacks on loan to clients. Another option I've been floating is a management VLAN to each client's rack with an IP KVM that is accessible from a terminal in a prep room, so that if a client has a long software maintenance window they can easily pick up a portable KVM, hook it up to the server they want to work on and use the terminal. Check out the Lantronix Spider Duo.
    For security, if you don't have a personal locking cabinet in a public DC, I would highly suggest asking for one. Most DCs would probably supply some kind of locking cabinet, unless you have totally managed/rented equipment, in which case you usually can't access the physical server.
    eric@...
  • Rack Server

    OK, here's an idea: how about video & keyboard connectors on the front of the server? You've always got the cart with the monitor & keyboard at the back of the server, and then you've got to run around to the other side of the cabinet to turn the computer on/off or swap CDs.

    I mean, honestly, the USB keyboard/mouse connectors are usually there anyway, and the video connector is almost always on the motherboard, so it would take a slight motherboard redesign and one additional internal cable...
    currell
    • These exist too

      Check out some of the systems from SuperMicro that have front video and USB connectors.
      PepperdotNet
      • Unfortunately, they only work (well) with SuperMicro boards

        The fans and other cooling are designed only for SuperMicro boards. Put in a different board, and the heat sink cowl doesn't fit.

        And of course, because of our needs, we don't use SuperMicro boards.
        mheartwood
      • front connectors

        The IBM servers we use have front connectors, and have had them in models going back several years.
        ZingerWot
  • thoughts

    I like the "Hardware Check Out" idea. For one of our DCs, we need to specify (on the visit application) the serial numbers of equipment going in and out. This allows for hardware upgrades, but clearly shows some thought toward hardware tracking.

    If laptops were available internal to the DC, it would make life just that bit simpler. Reimaging facilities and policies would make life a bit harder for the DC provider, though.

    Both DCs have key control (we own the whole rack(s) in both cases).

    I'd like to see rack-based server hardware with slide-forward internals, so the whole thing doesn't need to be unracked to access the lid. Yes, very close to blade-based hardware.

    ZingerWot
  • Great ideas but will never happen...

    The driving force in DCs for many, many years has been cost...

    DC floor space has dropped from hundreds of dollars per square foot to just dollars.

    Unless quality starts being demanded, AND paid for, nothing will happen.

    Personally, I'm just happy to see some focus on efficiency and energy reduction. As far as human comfort, it will not get better any time soon, if ever.

    More likely, we'll see more portability in hardware, such as semi-containers that are never touched. Cloud will also start replacing more of the smaller DCs and DC spaces. DCs will be built and left until so much has degraded that the whole space will be migrated and rebuilt.

    My humble opinion
    m_little