The project found 985 bugs in the 5.7 million lines of code that make up the latest version of the Linux core operating system, or kernel. A typical commercial program of similar size usually has more than 5,000 flaws or defects, according to data from Carnegie Mellon University.
985 bugs in 5.7 million lines of code. While it would be far more impressive if the number were zero, that's still a pretty respectable showing.
Coverity isn't the first company to study kernel code using analysis software. Reasoning performed studies on the Linux TCP/IP stack, and found that the open source software had a defect density of 0.013 per 1,000 lines of code. Again, Linux was ahead of its proprietary counterparts. In the Reasoning study, the company did have access to commercial TCP/IP stacks to compare against, though they couldn't publish the competitors' names due to their agreements with those companies.
This sort of analysis only goes so far, though. These companies look for specific types of flaws in software, using automated testing. The testing won't uncover every bug, and it's unlikely to compensate for poorly designed software. Coverity might be able to detect an out-of-bounds error or an uninitialized variable, but can its software detect a lousy user interface? (One assumes that "Clippy" would have sailed right past Coverity's testing.)
It does provide an interesting data point for discussion, as well as ammunition for those who hold that the open source method of software development can lead to fewer bugs. Like the man says, "Given enough eyeballs, all bugs are shallow." Open source is not a panacea for bugs, of course. Projects with small developer communities are less likely to have enough eyeballs to vet the source code properly, or to fix bugs when found.
It's worth noting that Coverity and Reasoning sell their services to the producers of software, rather than the users of software. It would be interesting to find out how many errors per thousand lines of code are considered acceptable by most companies before a product is shipped. It's a shame that IT decision makers can't run these tests on all the software that they consider using. It would be interesting to see how proprietary software stacks up against open source software, bug for bug.