Application bloat survey: if you don't know the answer make something up

Summary: Survey silly season is well under way. Company after company is sending out survey results in the hope that ZDNet will publicize them. Unfortunately, these studies are often self-serving, badly designed, and produce results that are seldom representative.

TOPICS: Data Centers

Let's start with an apology

I jumped the gun and published this commentary prematurely. So, if this looks familiar, it was published and then withdrawn last Friday. While reading through the materials sent by the PR firm, I didn't notice that the information was being provided under a non-disclosure agreement (NDA) and that I shouldn't have published anything until today. My only excuse is that people typically send me a message asking if I'll honor an NDA before sending me materials. This time, the request to honor an NDA was buried near the bottom of the introductory text that preceded the outline of the survey results.

Review of yet another survey

Recently a representative of Dell/Quest's PR team reached out to introduce me to a survey conducted by Harris Interactive and sponsored by Dell/Quest. This is the tenth survey that has crossed my desk in the past month. Like most of the others, I have serious problems with the sample and the methodology, and I don't believe the results are necessarily representative of the industry at large.

The Survey

The survey focused on what Dell/Quest called "application bloat." In the words of the announcement, "application bloat, or an overabundance of rarely used software applications deployed in business today, can lead to millions in lost revenue."

Survey findings at a glance

In Dell/Quest's own words, here are the points the company would like readers to take away from the survey:

  • 75% of respondents said that in one year, annual monetary losses experienced (e.g., decreased productivity, lost transactions/sales, missed opportunities) as a result of slow, unresponsive or crashed applications cost businesses tens of thousands to tens of millions of dollars.
  • 50% of respondents work at organizations with more than 500 applications. Yet on a typical day the majority (57%) of organizations use 249 or fewer applications.
  • Of those applications accessed daily, less than half (48%) are accessed more than 5 times each day.
  • 79% of applications are kept on-premise, while 21% are run in the cloud.
  • 58% of respondents said the performance of applications has a major impact on the performance of their business.
  • 77% of respondents would choose IT efficiency over reducing staff or outsourcing.
  • 32% of respondents plan to implement a monitoring tool in the next 18 months.
  • 89% of respondents currently have in place software that enables IT to monitor, uncover and address application-related issues.

Snapshot analysis

Having developed, fielded and then reported upon quite a number of surveys during my time at IDC and, later, at the 451 Group, I have a great deal of respect for the findings of a well designed, well implemented survey. This type of research can shine a much-needed light on what organizations are doing and planning. Unfortunately, this study, and most of the ten others that have crossed my desk so far this month, fail to live up to this high standard. This, of course, makes the results of limited and, perhaps, questionable use.

The problems I have with quite a number of the marketing-focused surveys that come across my desk usually fall into one of the following categories:

  • Is the sample representative of the industry at large or, at least, of the market segment it is supposed to address? That is, are the right people being asked?
    • Quite often, a small sample is used and broad, global statements are made.
    • The sample is made up of respondents from a single country and broad statements are made about the worldwide market.
    • The sample is made up of representatives of a single market segment and broad statements are made about the worldwide market.
    • The sample is often made up of only the sponsor's customers, and the survey was conducted at the sponsor's own customer event, making the sample quite limited and not at all representative of the market as a whole.
    • The sample purports to present what companies are planning to do, but it may not include company decision-makers. A more subtle problem is that decision-makers are included, but they may not represent the business unit or department that is actually responsible for the decision.
  • The survey instrument is biased or leading, making the results questionable. That is, are the right questions being asked, and are they being asked in the right way?
    • The questions might assume a given position and make no provision for respondents to give contrary answers. This is the "when did you stop beating your spouse?" type of question. There is no way for someone who does not beat his/her spouse to respond.
    • The questions might be biased to support the need for the sponsor's product or service. In this case, the sponsor, Dell/Quest, offers products that discover, inventory and manage applications. So, of course, the questions would assume the need for this type of product.
    • The questions assume that companies would lose money because applications are not being used. I didn't see any provision made for applications having been purchased and used by another department. The assumption is that if the respondent doesn't use an application, it is not being used at all. Furthermore, the connection drawn from apparently unused applications to revenue loss was tenuous at best.
    • The survey instrument asks specific questions for which respondents don't have answers. If the survey offers "don't know" or "not applicable" as choices, that is far better than offering a choice that asks respondents to estimate (or make up) answers. In the case of this study, the survey instrument asks respondents to estimate or make up answers if they don't know the answer or the question is not applicable to their environment.

I have to question how useful the results of this study will be to the industry. The respondents come only from the U.S. It is not at all clear what industry each respondent represents nor is company size information included. The questions are designed to lead to the conclusion that application inventory management tools are needed.
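As a rough, back-of-the-envelope illustration (my own sketch, not part of the survey materials), the standard margin-of-error formula for a simple random sample shows why sample size caps how broadly findings can be generalized. The vendor's response below cites 150 respondents; at that size, any reported percentage carries roughly an eight-point margin of error at 95% confidence, before even considering whether the sample is representative:

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Margin of error for a simple random sample.

    n: sample size
    p: assumed proportion (0.5 is the worst case, giving the widest margin)
    z: z-score for the confidence level (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 150 respondents, worst-case proportion:
moe = margin_of_error(150)
print(f"{moe * 100:.1f} percentage points")  # prints "8.0 percentage points"
```

Note that this formula assumes a genuinely random sample; it says nothing about the separate problems of who was asked or how the questions were framed.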

My biggest issue is that the questions asked respondents to estimate, or make up answers if they either didn't know the answer or it wasn't applicable to their company, business unit or department.

In the end, I would suspect that this survey, like the others that have come across my desk recently, might be useful to someone, somewhere and in some market. The use, however, appears quite limited to me.

Although broad, sweeping statements are made about the survey findings, they are not likely to be useful to everyone, everywhere and should be taken with a very large grain of salt.

Note: Dell/Quest's PR folks have sent along a document providing some responses from Harris Interactive. I will pass along their comments and my response after I've had an opportunity to evaluate their response.



Daniel Kusnetzky, a reformed software engineer and product manager, founded Kusnetzky Group LLC in 2006. He's literally written the book on virtualization and often comments on cloud computing, mobility and systems software. In his spare time, he's also the managing partner of Lux Sonus LLC, an investment firm.



  • Minimum Requirements

    There was a survey published on ZDNet the other week... a sample of 200 Americans to determine the future of smartphone uptake.
    Maybe there should be a "minimum requirements" criteria that must be achieved before publishing a survey?
  • We Stand By the Survey Results

    Timing is everything, and it seems that the results of our survey crossed Mr. Kusnetzky’s view at the wrong time–during what he calls “silly survey season.”

    While we respect the author and his opinion, we couldn’t disagree more strongly with what he stated. This particular survey was conducted by Harris Interactive, a highly respected market research firm. Quest Software (now a part of Dell) commissioned this survey because many of our customers (80 percent of which represent Fortune 500 companies) are struggling to optimize the performance of business critical applications, which are multiplying now at an alarming rate and are consuming costly resources and taking a toll on data center operations.

    Application bloat, as we’ve defined it, is a real and prevalent problem that we wanted to understand better. So, we engaged Harris Interactive, which relies on its proven methodology, random sampling of IT decision makers and objective results, to help us gauge the impact of application overload.

    The survey queried 150 U.S.-based IT decision makers at organizations with more than $500M in revenue (77% reported company revenue in excess of $2B USD annually)—not from a pool of our customers. Actually, we have no idea if the respondents are customers or not. In addition, though the survey did, at times, ask respondents to provide their best estimate when answering a question, both we and Harris Interactive believe that is different than asking people to ‘guess’ or ‘make something up.’ Over 40% of the respondents reported earning over $125,000 annually and over 70% earn over $100,000 annually. These were obviously senior people in large enterprise organizations, so they have quite a bit of experience to leverage in order to provide “educated estimates.”

    The survey findings verify our premise: software overload can lead to poor application performance. Slow, crashed or unresponsive applications are costing businesses anywhere from tens of thousands to tens of millions each year, if left unchecked. In our view, the impact is neither inconsequential nor silly.

    While these findings won’t apply to every U.S. company, they provide a compelling snapshot and we stand by them. Our hope is that the information gathered fosters further dialogue about the issue and proves useful to IT and business leaders alike.