Uptime Institute: Long live the datacenter (thanks to an unethical Salesforce.com)

Earlier this week, I attended a panel discussion that was primarily hosted by AMD but included panelists from HP, EMC, and APC and was moderated by Uptime Institute's chief analyst Bruce Taylor. The Uptime Institute earns its keep by playing host to the community of people and technologists with an interest in datacenter uptime, along with the solution providers that serve them.

The point of the panel discussion was to shed light on the widening gap between the physics and economics of Moore's Law and what, if anything, can be done to close it. On the physical side, chips are still advancing at the Moore's Law pace (doubling in performance every 18 months, give or take). If you spin the math, that means the cost of any given amount of performance is halving at the same rate. The problem? The cost of the chip is now being dogged by the total cost of ownership, a part of which is the infrastructure needed to keep that chip running. This is primarily a discussion about energy, because as the cost of energy rises, so too does the cost of running that chip. Taylor sees this as the "economic meltdown of Moore's Law."
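
To put rough numbers on Taylor's point, here's a back-of-the-envelope sketch -- all figures are hypothetical illustrations, not data from the panel. The price of a fixed amount of performance keeps halving while the energy bill to run it doesn't, so energy steadily swallows the total cost of ownership:

```python
# Back-of-the-envelope sketch of the "economic meltdown of Moore's Law."
# All figures below are hypothetical illustrations, not data from the panel.

CHIP_COST = 1000.0       # dollars for a fixed amount of performance today
ENERGY_COST = 400.0      # dollars per year to power and cool that chip today
HALVING_MONTHS = 18      # Moore's Law doubling (i.e., cost-halving) period
ENERGY_GROWTH = 1.05     # assume energy prices rise 5% per year

for year in (0, 3, 6, 9):
    chip = CHIP_COST * 0.5 ** (year * 12 / HALVING_MONTHS)
    energy = ENERGY_COST * ENERGY_GROWTH ** year
    share = energy / (chip + energy)
    print(f"year {year}: chip ${chip:7.0f}, energy/yr ${energy:7.0f}, "
          f"energy share of first-year TCO {share:.0%}")
```

Run it and the chip's share of the bill collapses within a few doubling periods; the kilowatts, not the silicon, become the cost that datacenter managers have to manage.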

Naturally, AMD, HP, EMC, and APC are all working on the problem and all believe they have short- and long-term solutions that datacenter managers should be thinking about. Other solution providers like Intel, IBM, and Sun were noticeably absent from the panel. But rest assured, they're on the case too. In the course of the panel discussion, it became clear that datacenter scale and size matter when it comes to the potential savings that could result from automating certain energy-saving measures. As I listened, one question nagged at me. If that's true -- that scale matters -- then maybe going green isn't as much about buying gear from the folks sitting at the dais as it is about outsourcing your IT, and maybe your entire datacenter, to someone with the sort of scale that can really make a difference.

This isn't a completely senseless idea. Ten years ago, readers told us we were on drugs when we talked about outsourcing one of a company's most precious resources (customer data) to a service provider. Today, salesforce.com -- an outfit to which such needs can be outsourced -- continues to grow by leaps and bounds. Those that choose to outsource in this fashion reap a variety of benefits. But in the context of the datacenter discussion at hand, one of those benefits is the elimination of the hardware and software that were in place when customer relationship management and salesforce automation were insourced.

So, if things are going pretty swimmingly with the customer data part of the datacenter, is it not possible that at some point in the future, the rest of the datacenter will go down the same path? Are we potentially looking at the death of the datacenter as we know it? Sun seems to think so. The consolidation of the world's datacenters into four or five 'systems' is the premise behind its Redshift Computing theory. When I last sat down to dinner with EMC CTO Jeffrey Nick, he agreed that moving forward, EMC had to focus on service providers as businesses shifted their IT out of the datacenter to outfits like salesforce.com. Clearly, businesses are making the shift. And if Sun isn't right about it ending at five 'systems,' it's definitely right about the direction things are heading. More and more businesses are outsourcing to service providers, which in turn can only mean one thing: fewer opportunities to sell gear. This is particularly so if you consider the physical part of Moore's Law. After all, if that weren't the case, could you imagine the space and the HVAC that would be needed to support today's computing needs on a 1960s-class Univac?

Between Moore's Law, the outsourcing of entire datacenter chunks to service providers, and the ever-maturing set of technologies (e.g., virtualization) designed to eliminate idle capacity, the global hardware footprint can't go anywhere but down. If you're the Uptime Institute or a gear maker like HP, EMC, or APC, this is not exactly a great trend. Even if you're Sun, it's not a great one (see When there's only five computers left in the world, will one or more of them be Sun's?). But at least Sun, in talking about Redshift, acknowledges the reality and is seeking ways to succeed in that world. The Uptime Institute's Bruce Taylor, on the other hand, doesn't think the rising tide of service providers can do as good a job at running datacenters as end-users can.
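
To make the idle-capacity point concrete, here's a minimal sketch of the consolidation arithmetic behind virtualization, using purely hypothetical utilization and power figures: fold many lightly loaded boxes onto a few well-loaded hosts, and the wattage saved falls straight out.

```python
import math

# Hypothetical consolidation arithmetic behind virtualization.
# None of these figures come from the panel; they are illustrative only.

servers = 100               # physical boxes, one application each
avg_utilization = 0.10      # typical pre-virtualization utilization
target_utilization = 0.60   # safe loading for a virtualized host
watts_per_box = 500         # flat power draw per box, for simplicity

hosts = math.ceil(servers * avg_utilization / target_utilization)
saved_kw = (servers - hosts) * watts_per_box / 1000
print(f"{servers} servers -> {hosts} virtualized hosts, "
      f"roughly {saved_kw:.1f} kW of idle load eliminated")
```

Under these assumptions, a hundred boxes collapse to seventeen hosts, which is exactly the kind of shrinking hardware footprint the gear makers should be worried about.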

In the attached video, I've excerpted the exchange between the two of us during the discussion's Q&A session where, based on a couple of salesforce.com outages, he referred to salesforce.com as "unethical." Using the alleged unethical behavior of salesforce.com as the example, he talked about how there are classes of applications that people will never outsource: a conversation that's eerily reminiscent of the discussion that took place in the pre-SaaS days of the mid-'90s. He can pick on salesforce.com for its outages all he wants, but what of other service providers like Google, NetSuite, and Amazon?

Check out the video. At the very least, one of us has his head in the sand. But beauty is in the eye of the beholder. I'm fully willing to admit that, for some of you, that head in the sand appears to be mine. But I think it's Taylor's.

Topics: Data Centers, Enterprise Software, Oracle

Talkback

6 comments
  • UNETHICAL??????????

    I have to agree with you on this case. A crash doesn't mean they are unethical in practice. I think of it, as you mentioned, as a growing pain. But take into consideration: what caused the crash? Was it large file transfers into the database? Possibly. If that's the case, it looks as if they didn't plan for business to expand as fast as it did. How many start-up businesses make that mistake in the first year? I did. My business took off because I had an idea that quite a few customers were looking for, and I didn't have enough inventory to sustain the influx of orders.

    My bet is that Salesforce.com accomplished what it was planning to do and the customer response exceeded its expectations. In the world of business, that's a blessing, not an unethical choice. From what I gathered from the video, Salesforce.com had crashes in January and February of this year, but there wasn't any mention of any other failures. I checked out Salesforce.com as a business contact, because I am considering its services in the future, and I haven't found any mention of other failures in its system. That indicates they fixed the issue at hand. Yes, we are all human and have to learn from mistakes, but don't denigrate a company for a problem at start-up.

    Questions running through my mind are: "Does Bruce Taylor use Microsoft? If so, does he think they are unethical because they have glitches in their OS that need to be fixed?"
    birddawg
  • Bruce @ Uptime Inst. is upset because salesforce.com is not his customer

    Bruce Taylor from the Uptime Institute is obviously venting here because salesforce.com is not among his company's 100 customers: http://www.uptimeinstitute.org/index.php?option=com_content&task=view&id=13&Itemid=27
    sj76
    • SalesForce not our "customer" but we are its "customer"

      It is true that SalesForce is not a member of the Site Uptime Network. That is an independent, vendor-neutral, closed, data center owner/member-driven, private knowledge-sharing network community focused on uptime reliability and abnormal-incident reporting (unplanned downtime). The Institute analyzes the data collected from the members, reports it back to them, and provides best practices for improving uptime reliability. Whether SalesForce would or could qualify and be accepted as a member is unknown.

      My quibble with SalesForce is that we are a "customer" of SalesForce, and as a customer, we rely on 24/7 availability of our customer data. For CEO Marc Benioff to excuse a computer-room systems failure (crash) of a brand-new $50M data center as working the bugs out shows, to my mind, an arrogance and a disregard for customer expectations. To be certain, we are perhaps more sensitive to it, because we know that it need not and should not happen. We are not talking about a natural disaster or a terrorist attack; we are talking about, perhaps, poor engineering and certainly poor testing before throwing the switch. I think customers deserve more. And I'm not talking about Tier IV Classification-level uptime reliability here, either, just good engineering -- that's what the $50M should have bought and didn't, and for any CEO to say that's somehow OK is a joke.

      The Uptime Institute and its professional services arm, Computer Site Engineering, would have been pleased to review the data center design and perform a Tier Certification to whatever Tier level of reliability SalesForce thinks it should offer its customers. And we are willing to do a site inspection at any time we are retained to do so. My personal opinion is that the cost of doing so is cheap insurance against unplanned downtime that interferes with customers' ability to do business.
      BruceTaylor
      • "Arrogance"

        Hmmm - so you're saying that never happened to an in-house $50M data center? On what planet?
        devils_advocate
  • What's reasonable for uptime reliability of a data center?

    I should temper my concern until I know what the cause of the downtime was. However, a very high percentage of system failures lie with the site's physical infrastructure (power and cooling), and it's usually the power supply. This is preventable with good engineering. Perhaps "unethical" was an ill-considered word, but there is a difference between a "good-enough" software release, where rapid customer feedback allows for rapid fixes and upgrades, and a data-center crash caused by poor system design, engineering, or operations management. Proper pre-operational system testing would have revealed the problem, I believe. My point was simply that it was an insufficient response for a CEO to say to customers that it's OK to have a failure in a new data center because the "kinks" haven't been worked out yet.
    BruceTaylor
  • Is my head in the sand, David? Is that better or worse than in the clouds?

    I have been told that it's in worse places!

    David, I thought your hypothetical posed as a question to the panel was a good one.

    At the Uptime Institute, we're chunking our R&D thinking into three categories as it relates to large-scale, high-density computing energy efficiency at the level of the server farm and data center. In each case, the question we are asking at the beginning of the inquiry is: what are the three or four major areas where data center owners can achieve significant energy efficiency? What can be done beginning right now, without significant new capital spending and with existing technology? What can be done over a 3- to 5-year horizon with the new technologies now coming to market? What are the new technologies and designs of the future, 5 to 7 years out?

    That is the time horizon in which we now must deal. Billions of dollars will be spent on data centers and server farms in the next decade. There will not be a single transformational shift in how business is done in terms of data center owning, co-locating, hoteling, and outsourcing; it will depend upon the industry sector served and the criticality of the data to the individual enterprise. There will, with Internet 2.0, be an expansion at the lower Tier Classifications I and II; however, Tiers III and IV will take much longer (if ever) to move to the mega-models you propose. To be truly intellectually honest, I should poll the members of the Uptime Institute's Site Uptime Network, because that group is truly cross-industry, and its members are dealing with very large-scale data center facilities.

    However, the issues of thermal density are present and real for most data center operators today. They need short-term fixes that buy back capacity (measured in Watts, not square feet) in order to do the kind of optimal, longer-range technology planning and design that will carry them into the near future.

    The carbon footprint of large-scale server computing (because the electricity it consumes is overly dependent upon fossil fuels, coal in particular) is now a corporate sustainability concern. So we have technology, economic, and environmental concerns all pressing for solutions, and that is where the Uptime Institute is choosing to focus its energies.
    BruceTaylor