There's a lot of unrest in the Windows Home Server 'Vail' community right now, with a number of enthusiasts and testers wondering aloud whether their feedback means anything any more.
The Vail testers -- who are angry about Microsoft's decision to pull what many of them consider a compelling feature very late in the test cycle -- aren't alone in their questioning. I've heard and read the same lately from Microsoft Most Valuable Professionals (MVPs), Windows testers, Windows Live testers, Office testers and others. Corporate decisions to de-emphasize or cut programs like the Windows Clubhouse or Windows Test Pilots -- all the while talking up how much "telemetry" data Microsoft takes into account when building its products -- have even some long-term Microsoft loyalists wondering why they bother.
This isn't a new concern: In fact, it was the topic of much debate when Windows 7 and Windows Live Waves 3 and 4 were in the works. But with Vail nearing its final release -- and as we move closer to the start of public test programs for Windows 8 and Office 15/Windows Live Wave 5 -- I'm sure it's going to come up again.
I started thinking about this last week while attending a press/analyst event on the Microsoft campus about the future of productivity, where members of the Office team revisited the role of telemetry data in building new Microsoft products.
P.J. Hough, Corporate Vice President of Office Program Management, noted that the Office team is doing everything from running traditional focus groups, to following information workers every minute of their work days from 9 to 5, to analyzing the "Send a Smile" data and other in-product feedback, to build a substantial set of data on how people use (and try to use) Office. It sounds like Microsoft is planning to continue doing the same as it develops and ultimately tests Office 15.
Hough reiterated during the event the philosophy that has governed Office -- and the post-Vista Windows world -- regarding betas. The Windows and Office teams these days consider "beta" to mean nearly-cooked versions of their products. Hough said Microsoft doesn't want to experiment on its beta testers, but still values their feedback and takes it into consideration in shaping the final builds of Office.
Many of the Microsoft beta testers I know expect and want to be guinea pigs, however. They don't want to get products in near-final form; they want them while they are still raw and far from finished.
Office execs respond by saying the testing process is designed to allow increasingly larger pools of people to test beta products. They say that much smaller, select groups get the code for things like Office and Windows earlier, and that Microsoft grows this pool as the products move toward their release-to-manufacturing (RTM) date.
Maybe it's the terminology that needs updating. Tech previews, betas, Release Candidates (RCs): that nomenclature used to mean something different from the way some teams use those terms now. And the window of time between when Microsoft provides information and code to its more tech-savvy "insiders" and when it provides them to the public seems to be shrinking substantially.
I realize Microsoft products are used by millions of people (or more than a billion, in the case of Windows) and that it is impossible to take every piece of feedback into account. Extrapolating from data gathered by a variety of means is necessary.
But I'd argue not all users/testers are equal and shouldn't be treated as such. Isn't some tester feedback more credible and noteworthy than others'? All the telemetry data in the world means nothing if you discount some of your most loyal (and savvy) customers'/testers' input.
Can/should Microsoft remedy this situation? Anyone have thoughts to share that may open closed ears?