A funny thing happened on the way back from the Data Center

How one frequent Data Center visitor thinks his visits could be improved. Security, noise reduction and general accessibility are all part of the plan.
Written by Ken Hess, Contributor

The last time I visited a Data Center (last week), it made me wonder if Data Center (DC) technology will ever change. And when I say, "change," I mean change for the better. I don't know how often you go into a DC, but I work in one on a fairly regular basis. Sure, I do a lot of my work remotely, but some things just can't be done from my desk--namely hardware replacement, visual inspection of cables, blade server reseating, component reseating and the occasional cantankerous operating system installation. I don't mind working in the DC, but sometimes I wonder: who designed these "standard" components, and did they design them with human hands, fingers and eyes in mind?

I've pondered it for a few days now but I can't say that I've found an answer.

By standard components, I'm referring to 19-inch racks, hot aisle-cold aisle, server access, cables and cable organizers, power connections, KVM systems, lighting and just about every other physical structure. It's my observation that, if they're meant for human interaction, they should be redesigned and re-engineered. Yes, just about everything needs a design overhaul. The only thing that probably doesn't need a redesign (in my opinion) is the humble floor plate. That system seems to work pretty well, but I don't interact with the floor plates enough to comment on their features or failings.

So, for simplicity, I'll list each component, tell you what I think is wrong with it and then I'll give you my solution. Yeah, I complain a lot but I always offer a fix and not just a complaint.

Racks

19 inches is probably OK but can someone please design a rack so that standard architecture (1U, 2U, 4U, etc.) systems aren't so difficult to work with? For example, create connectors along the sides, running vertically through the rack, for power, network, SAN, etc. so that there isn't a mass of wires staring me in the face when I open a cabinet door. That "arm" of wires makes it very difficult to work on systems. For a fix, think blade-type connections here.

Please install some LED lighting in the cabinets so that we can see what we're doing. The lighting can work like a refrigerator--the light goes off when the doors are closed and comes on when opened. Don't install overhead lighting that will cast shadows from the top system down. Install the lighting on the front and back inside edges running vertically through the cabinet so that we can see all along the cabinet from top to bottom.

Hot Aisle-Cold Aisle

This concept works in theory, but if you're a human and you have to work in the hot aisle or the cold aisle, you know that it can be quite unpleasant on either side. I still believe that air exchange matters far more than air temperature. In other words, a design that moves enough air to carry heat away from the racks beats one that simply supplies colder air. Racks, by design, are inefficient heat sinks. Heat tends to stay in them and around them. I think you could dissipate more heat by having fans blow horizontally along the rack rows rather than vertically.

To properly design the fans, you'd have to create shorter aisles so that the fans wouldn't have to blow hurricane-force winds at one end of the aisle and have little effect at the other end. Basically what this design would do is create an air "river" with cool air being pulled through the fans and forced through the aisles, pushing the warm air toward exhaust fans.
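
To put rough numbers behind the airflow-over-temperature argument, here's a back-of-the-envelope sketch in Python using the common sensible-heat rule of thumb for air (roughly CFM = 3.16 x heat load in watts / allowed temperature rise in degrees F); the 5,000-watt rack load is purely illustrative, not a measurement.

```python
# Back-of-the-envelope check on the airflow-versus-temperature point, using
# the common sensible-heat rule of thumb for air:
#   CFM ~= 3.16 * heat load (watts) / allowed temperature rise (deg F)
# The rack wattage below is an illustrative number, not a measurement.

def airflow_cfm(heat_load_watts: float, delta_t_f: float) -> float:
    """Cubic feet per minute of air needed to carry away a given heat load."""
    return 3.16 * heat_load_watts / delta_t_f

rack_load_watts = 5000  # a hypothetical rack's heat load

# Moving twice the air removes the same heat at half the temperature rise,
# which is why pushing more air through the aisle matters as much as
# supplying colder air.
for delta_t in (20, 10):
    print(f"delta-T {delta_t} F -> {airflow_cfm(rack_load_watts, delta_t):.0f} CFM")
```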

Server Containers

I'd like to see some easy-to-remove tops on racked systems. Try removing the top from a server with another system installed right above it. It's very difficult and takes far too long. We need quick-disconnect thumbscrews or latches, and then off they go. Right now, I have to pull a system almost completely out of the rack to remove the top and perform any kind of maintenance on it. And, when I pull the server toward the front of the rack, I have to disconnect the wires from the back. It would be better if there were either a quick connect/disconnect module for everything on the back, or if the servers were wired on the sides (as suggested above) so that the system could move freely within the rack once unlatched from its position.

I wonder whether it's possible to move all replaceable parts to a more convenient location in the hardware box. Think blades again here, where you can remove disks easily from the front of the system. Now, do that with memory, solid state disks and CPUs.

Fans

I know it's possible to make quiet fans. I've seen them. But, I haven't heard them. I'd like to see--and not hear--more of them. It's so loud in the DC that you have to yell at the person who's standing right next to you. Make quiet fans for racks and servers.

Security

I know this one sounds a little crazy because where do you have more security than at a DC? Probably only a bank or anywhere else that a lot of money changes hands. There's no money exchange at a DC (typically), but there's still a need for heightened security.

When I badge into a DC or DC module area, I should have a PIN that identifies the work I'm about to perform with an associated ticket number and description of that work. Only those racks to which I need access should be accessible to me. Once I badge into the workspace, only those racks containing my systems of interest should unlock. This would ensure security and prevent accidental system tampering, outages and theft.

Similarity between system names, inaccurate rack locations and general carelessness are the primary causes of accidental outages resulting from physical server handling. With this security measure in place, you'd only have access to the systems designated on your ticket. If the ticket is inaccurate and sends you to the wrong location, it's then up to the technician to verify the server name and rack location visually.

This plan would also inform DC managers as to who is doing what, where and why in their DC. As it is now, DC managers have no idea which systems are being touched, why, or whether there's authorization to do so. Once a technician badges in, their maintenance areas would light up to show security and managers where work is being performed. This would be a significant step toward a record of zero accidental outages.
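
As a rough illustration of what that badge-plus-ticket flow might look like in software, here's a minimal Python sketch; the ticket numbers, rack names and PIN check are all hypothetical placeholders, not any vendor's actual API.

```python
# Minimal sketch of ticket-scoped rack access, assuming a hypothetical
# ticketing system and badge reader; every name here is illustrative.
from dataclasses import dataclass, field
from typing import Dict, Set

@dataclass
class Ticket:
    number: str
    description: str
    technician_badge: str
    authorized_racks: Set[str] = field(default_factory=set)

class DataCenterAccess:
    def __init__(self) -> None:
        self.open_tickets: Dict[str, Ticket] = {}

    def badge_in(self, badge_id: str, pin: str, ticket_number: str) -> Set[str]:
        """Return the racks to unlock for this visit; an empty set means none."""
        ticket = self.open_tickets.get(ticket_number)
        if ticket is None or ticket.technician_badge != badge_id:
            return set()  # no matching ticket: nothing on the floor unlocks
        if not self._verify_pin(badge_id, pin):
            return set()  # bad PIN: nothing unlocks
        # Only the racks named on the ticket unlock; every other rack
        # stays locked to this technician.
        return ticket.authorized_racks

    def _verify_pin(self, badge_id: str, pin: str) -> bool:
        # Placeholder check: a real system would consult a credential store.
        return len(pin) >= 4

# Example: a ticket that authorizes work on two specific racks.
access = DataCenterAccess()
access.open_tickets["CHG-1234"] = Ticket(
    number="CHG-1234",
    description="Reseat blade in rack B07",
    technician_badge="badge-42",
    authorized_racks={"B07", "B08"},
)
print(access.badge_in("badge-42", "9137", "CHG-1234"))  # {'B07', 'B08'}
```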

Hardware Check Out

I think that each DC should keep standard computing hardware on hand for technicians to use while in the DC. The technician would check out a mobile phone, a laptop, a crash cart, a toolkit or whatever else is needed for a physical maintenance session. No outside computing devices should be allowed onto a DC floor. It's a huge security risk to do so. Outside hardware is a great way to introduce viruses, compromise security through video and photos, or damage equipment with tools that are the wrong size or rating.
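
A check-out desk like that is simple to track in software. Here's a minimal sketch of what the loan log might look like; the item names, badge IDs and ticket numbers are hypothetical examples.

```python
# Minimal sketch of a loaner-equipment desk, assuming a hypothetical on-site
# inventory; the item names, badge IDs and ticket numbers are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class LoanRecord:
    item: str             # e.g. "crash cart", "laptop", "toolkit"
    badge_id: str         # technician who checked the item out
    ticket_number: str    # work the loan is associated with
    checked_out: datetime
    checked_in: Optional[datetime] = None

class EquipmentDesk:
    def __init__(self) -> None:
        self.records: List[LoanRecord] = []

    def check_out(self, item: str, badge_id: str, ticket_number: str) -> LoanRecord:
        record = LoanRecord(item, badge_id, ticket_number, datetime.now())
        self.records.append(record)
        return record

    def check_in(self, record: LoanRecord) -> None:
        record.checked_in = datetime.now()

    def outstanding(self) -> List[LoanRecord]:
        # Anything not yet returned when the technician badges out.
        return [r for r in self.records if r.checked_in is None]

# Example: check out a crash cart against a ticket, then return it.
desk = EquipmentDesk()
loan = desk.check_out("crash cart", "badge-42", "CHG-1234")
print(len(desk.outstanding()))  # 1 item still out
desk.check_in(loan)
print(len(desk.outstanding()))  # 0
```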

I'm not trying to complicate things for my fellow DC workers--just the opposite, in fact. And it's your best interests that I have at heart. After all, if you walk onto a DC floor to repair a single system, do so, and walk out--you certainly don't want to hear that a system in the rack next to the one you were working in took an outage and that now you're being looked at as the root cause. In my world, that would never happen.

Do you have any ideas for Data Center improvements that you want to share? Talk back and let me know.
