
Survey Data - CA

Written by Dan Kusnetzky, Contributor

Towards the beginning of April, I was approached by representatives of CA who wanted me to discuss a then-recent survey about the adoption of virtualization technology. Having conducted numerous studies while I was with IDC, I'm always skeptical when looking at survey data. The results may be extremely useful, or they may be skewed and biased by 1) the questions that were asked or not asked, 2) how those questions were asked, 3) who was asked those questions (and who was not asked) and 4) how the data was "cleaned" and "adjusted" to make it match the general population. I won't bore you with a class in statistics here but, needless to say, a badly designed survey will produce results that don't represent reality. Wasn't it Benjamin Disraeli who said, "There are three kinds of lies: lies, damned lies, and statistics"? I'm sure he would have added computer benchmarks, market share analysis and badly designed surveys to his list had he known about them.
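
To make that "cleaning" and "adjusting" point concrete, here is a tiny sketch of post-stratification weighting, the most common form of such adjustment. Every number in it is hypothetical, invented purely to show how the same raw answers can produce a very different headline figure depending on the population mix the analyst assumes:

# A minimal sketch of post-stratification weighting. All figures are
# hypothetical and exist only for illustration.

# Suppose a survey over-sampled large enterprises relative to the
# assumed general population of IT shops.
sample_counts = {"large": 70, "small": 30}         # respondents per stratum
population_share = {"large": 0.40, "small": 0.60}  # assumed true mix

total = sum(sample_counts.values())

# Weight each stratum so the weighted sample matches the assumed mix.
weights = {
    s: population_share[s] / (sample_counts[s] / total)
    for s in sample_counts
}

# Hypothetical "yes, we've adopted virtualization" rates per stratum.
yes_rate = {"large": 0.80, "small": 0.30}

raw = sum(sample_counts[s] * yes_rate[s] for s in sample_counts) / total
adjusted = sum(sample_counts[s] * weights[s] * yes_rate[s]
               for s in sample_counts) / total

print(f"raw estimate:      {raw:.0%}")       # 65% before weighting
print(f"adjusted estimate: {adjusted:.0%}")  # 50% after weighting

Change the assumed population mix and the adjusted number moves with it, which is exactly why the "how was the data adjusted" question matters.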

Let me start with the fact that, after listening to the CA presenters and later speaking with a representative of IDG Research, the folks who actually executed the study for CA, I came to the conclusion that the results of this study are fairly consistent with the results of other studies I've seen or executed myself. So far, so good.

Concerns about the sample

The sample size was more than adequate to produce useful results. Who was included, and who was not, seemed a bit strange, though. Here's how CA/IDG Research described the sample:

The U.S. study was conducted between November 5, 2007 and November 14, 2007. The non-U.S. portion was completed between November 15, 2007 and November 21, 2007. A total of 300 surveys were completed online in the following regions:

  • U.S. - 100 surveys
  • EMEA - 100 surveys: UK (50) and Germany (50)
  • APAC - 100 surveys: Australia (75) and Korea (25)
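
For a sense of scale, here is a rough back-of-the-envelope calculation of the margin of error those sample sizes imply, assuming simple random sampling at a 95% confidence level (a textbook formula, not anything taken from the CA/IDG methodology):

import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 300):
    print(f"n={n}: +/- {margin_of_error(n):.1%}")
# n=100: +/- 9.8%  (each regional subsample)
# n=300: +/- 5.7%  (the study as a whole)

Roughly ten points either way per region is tight enough to read directional trends, but not to split fine hairs between regions.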

My questions about this focused on who was left out of the survey. Does speaking with folks in the UK and Germany produce truly representative results for the whole of Europe? Believe it or not, based upon my previous experience, it is very likely to produce useful results.

How about speaking only to Australian and Korean executives? Does that truly represent Asia/Pacific? Now we're on pretty shaky ground. Japan is a very large consumer of IT equipment, software and services. It would have been very wise to include some people from that country to round out the results.

What about Central and South America? Neither region is represented in this study. While these regions do not account for a large portion of the worldwide market for IT equipment, software and services, they are still important.

Results are consistent with other studies

As I said earlier, the results are pretty consistent with those of other studies in spite of my concerns about the sample. I would suggest that you download the presentation deck pointed to earlier in this post and read through the results yourself.

What do you think of this study? Do the results suggest a different course of action than what your organization is doing now? If so, what changes would you make?
