
Concerns over vendor-offered survey results

Written by Dan Kusnetzky, Contributor

Recently I've had the pleasure of being contacted by a number of company representatives presenting the results of surveys they had recently conducted. They appeared eager to have me review the results, hoping that I would publish something about them here. I also suspect they hoped I would file their efforts in the clever-marketing category.

Much of the time, the results were genuinely interesting and tended to support the company's message and product set. Even so, I often had concerns about how the study was conducted and how broadly its results could be applied.

My concerns with such studies usually revolve around two things: 1) was the sample large and diverse enough to be considered representative, and 2) was the survey instrument constructed so that it neither biased respondents nor led them to answer in certain ways.

A large sample of people representing many regions, vertical markets and levels of IT function is preferable to a small study drawn from a single region, one vertical market or a specific IT job. I must point out that the results of a small-scale study (under 1,000 respondents) can be very useful too. It is just very important that the results are presented correctly and that the sponsors avoid broad or sweeping generalizations.
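For a rough sense of why sample size matters, the statistical margin of error shrinks only with the square root of the number of respondents. The short sketch below is my own illustration, not anything drawn from the surveys in question, and it assumes a simple random sample at a 95% confidence level, a condition that self-selected vendor surveys rarely meet:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        # Approximate margin of error for a simple random sample:
        # n = sample size, p = assumed proportion (0.5 is the worst case),
        # z = z-score for the confidence level (1.96 is roughly 95%).
        return z * math.sqrt(p * (1 - p) / n)

    for n in (100, 500, 1000, 5000):
        print(f"n={n:>5}: +/-{margin_of_error(n):.1%}")
    # n=  100: +/-9.8%
    # n=  500: +/-4.4%
    # n= 1000: +/-3.1%
    # n= 5000: +/-1.4%

Note that this math speaks only to the size of the sample. It says nothing about representativeness, which is the harder problem with attendee and customer-base surveys.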

A study of folks attending a vendor-sponsored event might reveal a great deal about the thinking of the class of people who come to such events, but it won't shine a light on worldwide thinking on that same topic. So a broad statement that x% of IT decision makers are thinking this or that, based upon one of those studies, is really suspicious to me. The same results presented as x% of IT decision makers who use a specific supplier's products are much more believable.

Survey instruments can be constructed to obtain facts rather than to support a specific vendor's position. A question such as "How many hours do you listen to your iPod?" is very vendor-specific. Asking "How many hours do you listen to your MP3/music player?" would be much better.

Surveys conducted by neutral entities are often more persuasive to me than those executed by a single vendor. The vendor, of course, is driving toward showing its product or service in the best possible light. The neutral entity is more likely just trying to uncover facts.

Do you find survey results offered by suppliers to be persuasive? Do you have other concerns that weren't mentioned here?
