Who benefits from data center centralization?

Summary: Reviewing someone else's article on the joys and virtues of 70s-style data center centralization left me with a pithy new saying to try out on you: "the more money organizations save by centralizing IT control and processing, the more it costs them" - because none of the rationalizations account for the costs of the rebel IT this process tends to spawn.

I read an Information Week article recently on data center best practices praising various data center centralization efforts. Here's the opening bit:

There are data centers, and then there are data centers. The first kind ranges from the overheated, wire-tangled, cramped closets that sometimes also host cleaning supplies to the more standard glass-house variety of years past. The second kind--and the topic of this article--cool with winter air, run on solar power, automatically provision servers without human involvement, and can't be infiltrated even if the attacker is driving a Mack truck full-throttle through the front gate.

These "badass" data centers--energy efficient, automated, hypersecure--are held up as models of innovation today, but their technologies and methodologies could become standard fare tomorrow.

Rhode Island's Bryant University sees its fair share of snow and cold weather. And all that cold outside air is perfect to chill the liquid that cools the university's new server room in the basement of the John H. Chafee Center for International Business. It's just one way that Bryant's IT department is saving 20% to 30% on power consumption compared with just a year ago. "We've come from the dark ages to the forefront," says Art Gloster, Bryant's VP of IT for the last five years.

Before a massive overhaul completed in April, the university had four "data centers" scattered across campus, including server racks stuffed into closets with little concern for backup and no thought to efficiency. Now Bryant's consolidated, virtualized, reconfigured, blade-based, and heavily automated data center is one of the first examples of IBM's young green data center initiative.

On a word-count basis, the first half of this article is mostly devoted to adding an energy savings/environmental sizzle to selling the centralization agenda - thus this bit, a mid-article return to the Bryant University example, pretty much wraps that up:

Consolidation was one of the main goals of Bryant's data center upgrade. The initial strategy was to get everything in one place so the university could deliver on a backup strategy during outages. Little thought was given to going green. However, as Bryant worked with IBM and APC engineers on the data center, going through four designs before settling on this one, saving energy emerged as a value proposition.

The final location was the right size, near an electrical substation at the back of the campus, in a lightly traveled area, which was good for the data center's physical security. Proximity to an electrical substation was key. "The farther away the power supply, the less efficient the data center," Bertone says. Microsoft and Equinix both have data centers with their own substation.

The next page or so focuses mainly on physical security - a return to the opening paragraph comment that some data centers are built so well they're proof against a Mack attack. A sample:

For Terremark, too, security is part of its value proposition. It recently built several 50,000-square-foot buildings on a new 30-acre campus in Culpeper, Va., using a tiered physical security approach that takes into consideration every layer from outside the fences to the machines inside.

For its most sensitive systems, there are seven tiers of physical security a person must pass before physically touching the machines. Those include berms of dirt along the perimeter of the property, gates, fences, identity cards, guards, and biometrics.

Among Terremark's high-tech physical security measures are machines that measure hand geometry against a database of credentialed employees and an IP camera system that acts as an electronic tripwire. If the cordon is breached, the camera that caught the breach immediately pops up on a bank of security monitors. That system is designed to recognize faces, but Terremark hasn't yet unlocked that capability.

Some of what Terremark says are its best security measures are the lowest tech. "Just by putting a gutter or a gully in front of a berm, that doesn't cost anything, but it's extremely effective," says Ben Stewart, Terremark's senior VP for facility engineering. After the ditches and hills, there are gates and fencing rated at K-4 strength, strong enough to stop a truck moving at 35 mph.

The last part of the article advocates data center automation - here's a bit:

"Our data centers are pretty dark," says Larry Dusanic, the company's director of IT. The insurer doesn't even have a full-time engineer working in its main data center in southern Nevada. Run-book automation is "the tool to glue everything together," from SQL Server, MySQL, and Oracle to Internet Information Server and Apache, he says.

Though Dusanic's organization uses run-book automation to integrate its systems and automate processes, the company still relies on experienced engineers to write scripts to make it all happen. "You need to take the time up front to really look at something," he says. Common processes might involve 30 interdependent tasks, and it can take weeks to create a proper automated script.

One of the more interesting scenarios Dusanic has been able to accomplish fixes a problem Citrix Systems has with printing large files. The insurance company prints thousands of pages periodically as part of its loss accounting, and the application that deals with them is distributed via Citrix. However, large print jobs run from Citrix can kill print servers, printers, and the application itself.

Now, whenever a print job of more than 20 pages is executed from Citrix, a text file is created to say who requested the job, where it's being printed, and what's being printed. The text file is placed in a file share that Opalis monitors. Opalis then inputs the information into a database and load balances the job across printers. Once the task is complete, a notification is sent to the print operator and the user who requested the job. Dusanic says the company could easily make it so that if CPU utilization on the print server gets to a certain threshold, the job would be moved to another server automatically. "If we had a custom solution to do this, it probably would have cost $100,000 end to end," he says.
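For what it's worth, the workflow described here is simple enough to sketch. The following is an illustration only, with an invented job-file format, directory path, and printer names - the actual system uses Opalis watching a Windows file share and a real database, not this script:

```python
import itertools
import sqlite3
from pathlib import Path

# Hypothetical values for illustration; the real deployment's share
# path, printer pool, and file format are not given in the article.
WATCH_DIR = Path("/var/spool/citrix-jobs")
PRINTERS = ["printer-a", "printer-b", "printer-c"]

def parse_job(path: Path) -> dict:
    """Assume each job descriptor is a text file of 'key: value' lines
    naming who requested the job, where it prints, and what document."""
    job = {}
    for line in path.read_text().splitlines():
        key, _, value = line.partition(":")
        job[key.strip()] = value.strip()
    return job

def process_jobs(db: sqlite3.Connection, watch_dir: Path = WATCH_DIR):
    """Record each queued job in the database, assign printers
    round-robin, and return (job, printer) pairs so a notifier can
    alert the print operator and the requesting user."""
    db.execute("CREATE TABLE IF NOT EXISTS jobs (user TEXT, doc TEXT, printer TEXT)")
    rr = itertools.cycle(PRINTERS)          # crude load balancing
    assignments = []
    for path in sorted(watch_dir.glob("*.txt")):
        job = parse_job(path)
        printer = next(rr)
        db.execute("INSERT INTO jobs VALUES (?, ?, ?)",
                   (job.get("user"), job.get("document"), printer))
        assignments.append((job, printer))
        path.unlink()  # job consumed; a real system would also submit it
    db.commit()
    return assignments
```

The CPU-threshold failover Dusanic mentions would be one more check in the same loop: poll the assigned server's load before submitting, and pick the next printer in the cycle if it's over the limit.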

Put all the pieces together and what you get is an innocent-sounding question with an immediate corollary: how does today's "badass" data center differ from the 1970s glass house?

The answer, I think, is that it doesn't: from physical design to controls imposed on users, this is the 1970s all over again - and that's what brings up the corollary question: all of this stuff is discussed and presented, both in the article and in the real world, from an IT management perspective - so who represents the users and what role do they have in any of it?

The answer to that, I think, is that the users weren't considered except as sources of processor demand and budget - and that everything reported in this article, from the glass-house isolation achieved at Bryant to the obvious pride taken in the user-tracking component of the ludicrous printing "solution" at Dusanic's company, reflects an internal IT focus that places enormous managerial barriers between users and IT.

Think about that a bit and I'm sure you'll agree that all of this brings up the most difficult question of all: assume, as I do, that the analyses these organizations ran before committing to the increased controls and centralization praised in the article showed significant savings to IT - and then ask how it netted out organizationally once the impact on users is accounted for.

My guess is first that the question is never seriously considered by the people proposing or executing this type of IT power grab; and second that the answer will be expressed, in the longer term, as the organizational cost of rebel and personal IT. In other words, when some professor spends an extra dollar on a laptop so he can work independently of the network, spends an extra hour trying to make his own backups work, or relies on his home machine to serve course PDFs to his students, he's functioning as a largely untrained, $100,000 per year or more, sysadmin and thus incurring enormous organizational costs that should be charged against those centralization projects - but almost certainly were not.
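The arithmetic behind that claim is easy to make concrete. Here's a back-of-envelope sketch in which every number is an assumption chosen purely for illustration, not data from the article:

```python
# Illustrative assumptions only: a loaded cost in line with the
# "$100,000 per year or more" figure above, a standard work year,
# and made-up counts of rebel users and hours lost.
LOADED_SALARY = 100_000      # $/year for the user acting as sysadmin
WORK_HOURS = 2_000           # hours/year
HOURLY_COST = LOADED_SALARY / WORK_HOURS  # $50/hour

def rebel_it_cost(users: int, hours_per_week: float, weeks: int = 48) -> float:
    """Annual organizational cost of users doing their own IT:
    people x hours x weeks x loaded hourly rate."""
    return users * hours_per_week * weeks * HOURLY_COST

# e.g. 200 users each losing 2 hours a week:
# rebel_it_cost(200, 2) -> 960_000.0 dollars/year
```

Even with these toy numbers, a couple of hundred users quietly spending a couple of hours a week on personal IT swamps a few hundred thousand dollars of data center savings - which is the point: this line never appears in the centralization business case.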

And from that I get my bottom line on this: a pithy new rule for executives reviewing data processing proposals from mainframers and their Wintel colleagues: the more money organizations save by centralizing IT control and processing, the more it costs them.

Topics: Data Centers, Security, Servers


Talkback

12 comments
  • GO Distributed!

    What a crock! A Mack truck couldn't bring down your datacenter, but a power outage will. That idiot construction guy running a trencher can cut that fiber connection lickety-split (happened to me), and that all-or-nothing datacenter becomes . . . nothing.

    Whatever happened to "single-point-of-failure"? Did this physical law get repealed?

    I'm a great advocate for Distributed systems. Losing PART of your access allows you to switch to something else. Also fault tolerance works better when your servers are not in the same rack. Namby Pamby IT people don't like distributed architecture because it's TOOOOO HARDDDDD (*wah* *wah*) to manage (supposedly). Why have ANY platforms - with their cooling and power costs? Having a single rack at multiple sites can save you big cash money!
    Roger Ramjet
    • Like it, but SOX?

      Power failure - good UPS, plenty of batteries, on site automatic start generator, big tank of diesel, contract with someone to work on cables quickly - what's the problem?

      Cut cable* - surely everyone not least the empire-builders will want a mirror site?

      I recall your previous advocacy re distributed. Not that I'm qualified to judge, but I found it persuasive.

      But what about the regulators? SOX and all that? Doesn't hoarding the servers, data, and (hopefully not excessive) control centrally - with sufficient remote back-up capacity instantly ready - help a lot when it comes to dealing with all the security threats, and the requirement to demonstrate that you have? If every workstation is a server, isn't every employee and visitor more of a risk?

      *Yes - I've been involved in construction and maintenance work and after a while I realised that the rule is that even if everyone does all the research, looks at plans, walks round with metal detectors, digs carefully, seeks out retired people who helped work on the site in the old days, etc, basically if a JCB or similar starts digging it will almost always find something unexpected and valuable, and will often damage it in the process. Often that something will have been wrongly laid e.g. not according to the line on the plan, or too shallow. Or the cable or pipe is too ancient to be on any plan, but is still much used.
      Ross44
      • Yes - auditors love centralized systems

        But they follow COBIT - a 1930s rule set dressed up in 90s speak.
        murph_z
  • Really Murph...

    You write about centralized processing and you claim that it will liberate users...

    Someone else writes about centralizing servers and you claim they are trying to subjugate users...

    So, how on Earth is one to tell the difference between centralization as liberation and centralization as subjugation? On the surface, the descriptions here sound like significantly less of a power grab than what you frequently advocate.
    Erik Engbrecht
    • The difference is in control/management

      Centralized processing with decentralized control works - centralizing both does not.

      But notice that Wintel/data processing essentially force the centralization of both, and Unix does not.

      You can centralize control with Unix, but you don't have to. Instead you put everything on one system, put a mirror somewhere else (for when that backhoe operator surprises you), but leave the people working in the trenches in charge of making the decisions.
      murph_z
      • Where did they say centralized control?

        You're inferring a management style and culture for their solution.

        From a high-level technical and management perspective your solution and their solution look almost identical (except, of course, they make no mention of Sun Rays...).

        Your premise basically is that by centralizing processing on a sufficiently robust platform you can achieve greater efficiencies, thereby freeing resources to respond to user needs.

        They leave out, at least in your quotes, the destination of the freed resources.

        So you fill it in, saying they will use the freed resources to build their own empires rather than support users.

        Your argument boils down to "trust me, not them."
        Erik Engbrecht
  • Right. And I was wondering about the line

    "a pithy new rule for executives reviewing data processing proposals from mainframers and their Wintel colleagues: the more money organizations save by centralizing IT control and processing, the more it costs them."

    I will go out on a limb and take a guess that next week you will post an article, similar to this one, yet the last paragraph will go similar to:

    "And from that I get my bottom line on this: a pithy new rule for executives reviewing data processing proposals from mainframers and their Lintel colleagues: the more money organizations save by centralizing IT control and processing, the less it costs them."

    True, it is a guess, but let us see if it does materialize in the near future ;)
    GuidingLight
    • Nope: Linux is Unix

      So no: Wintel/DP troops can (and generally do) misuse it, but there's nothing in Lintel that forces control centralization.
      murph_z
  • OT - I commend to your attention...

    ... a post entitled "8 common lies told by enterprise software sales people" and my observations. A number of the themes will be familiar.

    http://blogs.zdnet.com/projectfailures/?p=653
    Anton Philidor
    • Food for thought

      but I haven't read the book.
      murph_z
  • On Topic, Did you leave something out?

    The quotes you included discuss physical consolidation to avoid having "server racks stuffed into closets with little concern for backup and no thought to efficiency."

    The reason users are not discussed is apparently because the only effect on users is giving them their closets back.

    The issue of IT restriction on users is worth discussing, but why would you conclude that the relocation of equipment by itself has an effect on policy?
    Anton Philidor
    • Yes - sometimes it does

      I often recommend a form of distributed processing in which you put the biggest (physically, not in terms of capacity) computers you can find in user management office spaces - I do that when the user/IT relationship is pooched beyond belief, because it gives them the feeling of ownership they need to start exerting actual control. In reality the computers don't need to do much - in their heads those are their machines, and that makes running IT part of their jobs.

      FYI: I've never had a single user manager complain about this after he/she caught on - because by then they understand that telling IT what to do is part of their jobs and, more importantly, that the new IT people actually listen to them.
      murph_z