IBM's new X6 Server: A look at maintenance

Sure, there's a lot to crow about when you consider IBM's new X6 Server lineup. But hardly anyone's talking about my favorite part of this hardware: ease of maintenance. Here's what you'll want to know about the X6 systems.

I could write 5,000 or more words on IBM's new X6 systems and how fast and agile they are. How they're perfect for high-density virtualization applications. How companies are deploying them for cloud applications and high-end analytics for IoT, OLTP, and a gaggle of other acronymic* workloads. How IBM's new architecture brings affordable, highly scalable, rock-solid computing to a data center near you.

All those things are certainly true.

But you can get that anywhere and everywhere.

You don't read my posts for the normal, everyday, fluffy, or mentally stable information that a hundred other analysts give you. You read me because you know that I approach technology from an "in the trenches" perspective that's unvarnished, untarnished, and honest. I'm not saying that anyone else is dishonest or anything, but you can get cake and ice cream from anyone.

You come to me for the recipe—or at the very least, the secret ingredient. I like to give you what you can't get anywhere else at any price.

However, I hate for my posts to read like the opening lines of one of those hyped-up infomercials you see on TV, but there's just no other way to do it, so here goes. Have you ever gone to the data center to find that not only is the server you need to work on near the bottom of the rack, but the maintenance requires you to practically disassemble the whole system? Yeah? Well, it's happened to me more than once, and I hate it.

How many times have you cut your hands working on servers in those cramped cases or battled clips that won't pop loose with standard amounts of force? Perhaps halfway into your maintenance activity, you find that you really need three tiny hands instead of just two normal ones. I've done all that, and I've done it more times than I care to remember.

The problem that I've complained about for years is that manufacturers build cases and components that were obviously not designed to be worked on while still racked. Sure, they're a dream to work on if you have a nice clear table nearby. Heck, sometimes, I'd rather work on the floor. But you don't typically remove the server from its rack to work on it, do you?

Here's a typical scenario:

I walk into the data center, locate my target system, assess its hardware from the front and the rear, and then begin my attack. Of course, the system is always near the bottom of a packed rack—that's a given.

Then I remove all of the cables and connections from the system's backside.

I check out the case removal strategy while I'm back there. Naturally, it's in the front, so I go around.

I loosen the front screw connectors of the racked server so that I can slide it out a bit to reach the clever method of top removal.

Many systems require that you grab pressure-sensitive clips, or mash down and then push back, or perform some combination of awkward movements made all the more difficult by the yoga-esque position you have to contort your body into to pull them off a foot and a half off the floor, while cold air blows up your shirt.

Finally, the top comes loose. I push back all the way on the large metal top until the restraints clear the slider holes, which makes me have to pull the system farther out before I can remove the top. The top is now liberated from the case. I'm panting with frustration, while mumbling almost unintelligibly that there are easier ways to make a living.

I set the top aside and reassess my position by pulling the system about halfway out of the rack slot.

I find that the component that I need to add/replace/remove is only accessible from the rear of the system. I joyfully walk to the rear of the unit.

Wait, it's dark back here. I need a flashlight.

I grab a flashlight and pull the system backwards until it hits a hard stop. OK, I realize that I can't pull it back beyond its normal resting place. Awesome.

I begin to unscrew, unlatch, and free each cowl, slot placeholder, and component that stands between me and hardware victory.

With my quarry in sight, I gingerly remove it, replace it, or add it to the now half-disassembled system before me.

Once my actual task is complete, I begin my journey of restoration.

All the while I'm thinking, "Gee, I hope I reseated that riser card and reattached that connector correctly."

I continue my rebuild process until I'm now ready to push the system out the front and replace its top.

"Why", you ask innocently, "Would you replace the top if you're not sure if everything's back in place correctly"? 

Because some manufacturers' cases require the top to be on and seated solidly before the system can be powered on. And, before you ask, no, I haven't taken the time to locate the little pressure doors or switches that I need to depress in order to "fake out" the system into thinking that its case is actually in place.

I replace the top, move to the rear of the system again, plug in all the cables—hopefully back into their correct locations—secure the server into the rack, cross my fingers, and press the power button. I always close my eyes for that last part.

I then find its console (KVM), if one exists, and watch the POST to be sure everything's OK. If it is, I know I've successfully conquered the data center gremlins and all is well. If not, I happily** return to a bit of troubleshooting—depending on where I see a failure in POST—and breathe life back into that rack-mounted beast.

"All of this could have been prevented with some good design", I tell myself.

Enter IBM's X6 systems.

All modular—easy to access components from both the front and the rear of the system.

Here's a video that demonstrates it better than I can describe it to you. The part I'm talking about begins at approximately the 2:05 mark.

Modular. Ease of maintenance. No hand cuts. No soul-condemning rants. No push. No pull. Finally, a server that would be pleasant to work on physically. I can live with that.

The IBM X6 family of servers can do a lot for you and your company. My favorite thing that it does for me is maintain my healthy blood pressure***.

*Acronymic. Probably not a real word, but you understood it, didn't you?

**Happily is relative and usually preceded by a rash of expletives, a list of "where did I go wrong in life" questions, and several deep breaths.

***Which, by the way, is 120/62 when not working on servers.
