Open source ratings and the law of unintended consequences

Summary: This is really overdue, but I can't help worrying about the Law of Unintended Consequences

TOPICS: Open Source

One of the OSCON sessions I'll most regret missing happens on Thursday, when Tony Wasserman of Carnegie Mellon and Murugan Pal of SpikeSource announce the Business Readiness Rating system.

The idea is to create objective criteria that users can apply to produce publishable ratings of the 100,000 open source projects out there. It would operate a bit like Zagat does for restaurants, relying on a lot of people who've eaten the stuff rather than a few highly trained reviewers.

This is really overdue, but I can't help worrying about the Law of Unintended Consequences:

  • Could this merely validate what's popular and prevent good new projects from moving forward?
  • How good and objective are the reviewers going to be?
  • Could this become a crutch for business executives?

The best possible people are working on this. In addition to CMU and Spikesource, O'Reilly and Intel are also on board. I assume they all share my worries, and it's good to have them on the case.


  • Open Source Maturity Model predates BRR

    Bernard Golden gave IT organizations a framework to provide the kind of assessment that the BRR extols in his book "Succeeding With Open Source". The BRR whitepaper credits Golden but provides its own (unnecessarily complicated) process/framework. With regard to "unintended consequences" I can't help but think that the BRR process is designed to result in an easily digestible number (scored 1-5) for trade press promulgation. "Foo server only scored 3 (Acceptable) on the BRR so you might want to wait until it scores 4 (Very Good) before giving it serious consideration."
  • OSMM and BRR

I would encourage jbecker to join the discussion about BRR on the Open BRR site. Bernard Golden's book and website are a valuable contribution to evaluating open source software. Several of us met with him prior to announcing BRR, and we hope to work with him in the future.

Today, many software product selections are made informally, based on analyst ratings (such as Gartner's quadrants), published evaluations (such as those in ZD's eWeek), or vendor-sponsored events or trade show booths. Until recently, it's been hard to rate open source products. Consulting firms (including CapGemini and Forrester) are building practices around open source and have their own evaluation mechanisms. (CapGemini also uses the term OSMM.)

    One goal of the BRR is to use available project data as a key component for computing scores, with the intent of providing quantitative data to help support the evaluation process. The BRR is just a framework. If it continues to be well received, then we expect people to build tools that will extract the key project data and calculate BRR scores for a set of open source projects based on the intended use and the weighted categories.
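The mechanics described above — weighted categories rolled up into a single 1–5 score — can be sketched in a few lines. This is a hypothetical illustration, not the official Open BRR specification: the category names, weights, and label thresholds below are invented for the example.

```python
# Illustrative sketch of a BRR-style weighted score. Category names,
# weights, and the score-to-label mapping are assumptions for this
# example, not the published Open BRR framework.

def brr_score(category_scores, weights):
    """Weighted average of per-category scores, each on a 1-5 scale."""
    total_weight = sum(weights.values())
    return sum(category_scores[c] * w for c, w in weights.items()) / total_weight

def brr_label(score):
    """Map a numeric score to a coarse readiness label (illustrative)."""
    if score >= 4.5:
        return "Excellent"
    if score >= 3.5:
        return "Very Good"
    if score >= 2.5:
        return "Acceptable"
    if score >= 1.5:
        return "Marginal"
    return "Unacceptable"

# Evaluator-supplied scores for a hypothetical "Foo server",
# weighted for a particular intended use.
scores = {"functionality": 4, "quality": 3, "support": 2, "community": 4}
weights = {"functionality": 0.3, "quality": 0.3, "support": 0.2, "community": 0.2}

overall = brr_score(scores, weights)
print(round(overall, 2), brr_label(overall))  # 3.3 Acceptable
```

The point of the weighting step is that the same project data can yield different scores for different intended uses — an evaluator choosing Foo server for a mission-critical deployment would weight "support" more heavily than a hobbyist would.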

    I agree with jbecker that some analysts will advise their clients not to use products until they achieve a particular score based on their evaluation criteria. That's not entirely a bad thing. Developers who want their projects to be widely adopted will know exactly what they have to do to make their software ready for business use.

As a recent example, Release 0.8 of Firefox wasn't really ready for large-scale public adoption; since Release 1.0, it has been more than ready, and anyone still using IE is putting their company and their machines at risk.