Coverity finds open source software quality better than proprietary code

Summary: Coverity, a company specializing in software quality and security testing solutions, finds that open source programs tend to have fewer errors than proprietary programs.


The irony isn't lost on me: in the aftermath of the open-source OpenSSL Heartbleed fiasco, Coverity, a company specializing in software quality and security testing solutions, has found that open source software has fewer defects in its code than proprietary programs. Nevertheless, the numbers don't lie: the 2013 Coverity Scan Open Source Report (PDF Link) found that open source had fewer errors per thousand lines of code (KLoC) than proprietary software.

Coverity found that open-source programs tend to have fewer errors per thousand lines of code than their proprietary counterparts.

The Coverity Scan service, on which the study was based, was launched in partnership with the US Department of Homeland Security in 2006. The project was designed to give hard answers to questions about open source software quality and security.

For this latest Coverity Scan Report, the company analyzed code from more than 750 open source C/C++ projects as well as an anonymous sample of enterprise projects. In addition, the report highlights analysis results from several popular, open source Java projects that have joined the Scan service since March 2013. Specifically, the company scanned the code of C/C++ programs, such as NetBSD, FreeBSD, LibreOffice, and Linux, and Java projects such as Apache Hadoop, HBase, and Cassandra.

The 2013 report's key findings included:

  • Open source code quality surpasses proprietary code quality in C/C++ projects. Defect density (defects per 1,000 lines of software code) is a commonly used measurement for software quality, and a defect density of 1.0 is considered the accepted industry standard for good quality software. Coverity’s analysis found an average defect density of .59 for open source C/C++ projects that leverage the Scan service, compared to an average defect density of .72 for proprietary C/C++ code developed for enterprise projects. In 2013, code quality of open-source projects using the Scan service surpassed that of proprietary projects at all code base sizes, which further highlights the open source community’s strong commitment to development testing.
  • Linux continues to be a benchmark for open source quality. By leveraging the Scan service, Linux has reduced the average time to fix a newly detected defect from 122 days to just six. Since the original Coverity Scan Report in 2008, scanned versions of Linux have consistently achieved a defect density of less than 1.0. In 2013, Coverity scanned more than 8.5 million lines of Linux code and found a defect density of .61.
  • C/C++ developers fixed more high-impact defects than Java developers. The Coverity analysis found that developers contributing to open source Java projects are not fixing as many high-impact defects as developers contributing to open source C/C++ projects. Java project developers participating in the Scan service only fixed 13 percent of the identified resource leaks, whereas participating C/C++ developers fixed 46 percent. This could be caused in part by a false sense of security within the Java programming community, due to protections built into the language, such as garbage collection. However, garbage collection can be unpredictable and cannot address system resources, so these projects are at risk.
  • Apache HBase serves as benchmark for Java projects. Coverity analyzed more than eight million lines of code from 100 open source Java projects, including popular big data projects Apache Hadoop 2.3 (320,000 lines of code), HBase (487,000 lines of code), and Apache Cassandra (345,000 lines of code). Since joining the Scan service in August 2013, Apache HBase — which is Hadoop’s database — fixed more than 220 defects, including a much higher percentage of resource leaks compared to other Java projects in the Scan service (i.e., 66 percent for HBase compared to 13 percent on average for other projects).
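The defect-density arithmetic behind these findings is simple enough to check directly. Here is a minimal sketch (the helper function is illustrative, not anything Coverity publishes), using the Linux figures cited in the report:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code (KLoC), the metric used in the report."""
    return defects / (lines_of_code / 1000)

# The report cites ~8.5 million lines of Linux code at a density of .61,
# which implies roughly 0.61 defects per KLoC * 8,500 KLoC of code.
implied_defects = round(0.61 * 8_500_000 / 1000)
print(implied_defects)                                        # 5185
print(round(defect_density(implied_defects, 8_500_000), 2))   # 0.61
```

By the same arithmetic, the gap between the open source average (.59) and the proprietary average (.72) works out to roughly one extra defect for every eight thousand lines of proprietary code.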

Zack Samocha, senior director of products for Coverity, said in a statement, "Our objective with the Coverity Scan service is to help the open source community create high-quality software. Based on the results of this report — as well as the increasing popularity of the service — open source software projects that leverage development testing continue to increase the quality of their software, such that they have raised the bar for the entire industry."

Coverity also announced that it has opened up access to the Coverity Scan service, allowing anyone interested in open source software to view the progress of participating projects. Individuals can now become Project Observers, which enables them to track the state of relevant open source projects in the Scan service and view high-level data including the count of outstanding defects, fixed defects, and defect density.

"We’ve seen an exponential increase in the number of people who have asked to join the Coverity Scan service, simply to monitor the defects being found and fixed. In many cases, these people work for large enterprise organizations that utilize open source software within their commercial projects," concluded Samocha. "By opening up the Scan service to these individuals, we are now enabling a new level of visibility into the code quality of the open-source projects, which they are including in their software supply chain."

  • OOPS

    "Linux continues to be a benchmark for open source quality. By leveraging the Scan service, Linux has reduced the average time to fix a newly detected defect from 122 days to just six. Since the original Coverity Scan Report in 2008, scanned versions of Linux have consistently achieved a defect density of less than 1.0. In 2013, Coverity scanned more than 8.5 million lines of Linux code and found a defect density of .61"

    Guess they did not include the Heartbleed bug as it was out there for two years.
    • What amazes me

      That so many large organizations were hit by Heartbleed.
      If you'd developed proprietary code, you would have tested it to death (hopefully) to ensure it was fit for purpose.
      Use open source? Just chuck it in, "someone else will have tested it"…

      These organizations should take responsibility to verify all the code they are deploying to their production environments, whether open source or proprietary.
      • Zero defects is the *goal* of software quality assurance

        It's rarely, if ever, achieved.
        Rabid Howler Monkey
        • With the combined resources at their disposal

          You would have hoped one of them might have had some luck.
          I bet many just assumed "it's the standard", and didn't give it too much effort.
          Why waste money testing the standard, everyone else is using it…
        • OpenSSL debacle proves such report useless

          You can cook a government report all you want, just like every other government report (jobs numbers, CPI, and so on), but you cannot fake it in the real world, as OpenSSL shows us the true quality of FOSS, or lack thereof.
          • What a ridiculous over-generalization.

            Does a single plane crash mean that all airplanes are suddenly death-traps?
          • There was no debacle

            It was an oversight which was repaired. All this teaches us is that we need to invest more in Open Source. Microsoft's quality is not as high as Open Source quality so investing in Microsoft would not have as great a benefit as investing in Open Source.
            Tim Jordan
          • ???

            Where does that conclusion come from? There are more companies than Microsoft that write proprietary code.
    • OpenSSL vs Linux

      You are aware these are different projects, aren't you?
      • Yup

        Both are open source. The article makes no reference as to what was used for comparison; in fact, it implies any open source and proprietary software. But thanks for pointing out the obvious.
        • No it does not

          “implies any open source and proprietary software”

          From the article;
          “Specifically, the company scanned the code of C/C++ programs, such as NetBSD, FreeBSD, LibreOffice, and Linux, and Java projects such as Apache Hadoop, HBase, and Cassandra.”
  • Coverity finds open source software quality better than proprietary code

    Good job.
    Kudos to all developers.
    • apples and oranges

      Proprietary projects were 4 times larger than the open source projects.

      Conclusion is invalid. Next.
      • Not invalid

        Statistics work by percentages, not sizes. With proprietary code there is a bug every 1,389 lines; with Open Source there is a bug every 1,695 lines. Therefore there are 181,433 bugs in Open Source and 403,728 in proprietary software. So proprietary software contains 22% more bugs per line of code than does Open Source software.
        Tim Jordan
        • Remember, developers are amazing

          Microsoft hates developers because developers develop software, and Microsoft wants to be the only one developing software. I see developers winning this fight, as they now have powerful companies behind them to help them survive the predatory atmosphere that has been created between Microsoft and developers.
          Tim Jordan
        • Complexity

          Lines of code only give opportunity for an error.
          However, complexity rises faster than lines of code; it's not linear.
          Thus comparing defects to lines of code is valid for similarly sized projects.
          If you're comparing differently sized projects, you have to prove the scalability of such a comparison before you draw any conclusions from it.
          • RE: Complexity

            Complexity only rises if it is a single project. These are multiple projects with fewer lines of code. Funny all these people advocating paying for software when the study says just the opposite.
            Tim Jordan
          • Read the study. It absolutely doesn't say this.

            Coverity, Inc., a Synopsys company (Nasdaq:SNPS), is a leading provider of software quality and security testing solutions. This for-profit company counts among its paying clients Microsoft, SAP, Novell, Xerox, Mentor Graphics, Intergraph, Symantec, BMC Software, Toshiba, Sharp, Epson, Samsung, Rockwell, and many others.

            So the company does a study to promote the benefits of their product. It apparently shows that open source projects which have adopted development testing via the Coverity Scan service, and then actively fix flagged defects, will see over time a material drop in the number of defects found on average per line of code. For the first time, they now report a lower number of defects per line of code, on average, than seen in a pool of proprietary software they sampled.

            A few caveats: there is no indication of fewer defects per line between open source and proprietary offerings when project codebase size exceeds 1 million lines. The 250 open source projects they measured averaged 340K lines vs. an average of 1.39M lines for the 250 proprietary projects assessed. They also found 10.2% false positives on average, but give no indication of any delta in false-positive rate, nor of severity differences between open source and proprietary defects. There is no mention that they also have a number of enterprise customers who pay to use the software as a tool for internal software development. Did the pool of proprietary software evaluated include projects developed for internal usage, or was this strictly proprietary commercial-grade software? They only assessed open source projects that currently use their product as part of the dev/review/check-in process, but didn't indicate whether the proprietary software assessed was likewise limited to active users of the service. How did open source that doesn't actively use this service compare? And they indicate that any software that averages 1.0 or lower is considered to meet or exceed the bar to be generally considered good quality.

            I'm not mentioning this to say open source sucks, proprietary rocks, for-profits can't be trusted to deliver unbiased studies of their own tools, that all software can't be improved, etc. But if Microsoft put out a study indicating that Visual Studio IDE code defect checking was superior to anything available for open source projects wouldn't you at least question the study's objectivity? These tools are great and apparently getting better. Coverity would like to have even more open source projects and more proprietary software developers using their service. More customers / projects / contributors will help improve the service and all code (open and closed) will likely benefit "IF" they use these or similar tools but must also then fix defects identified. As they say in the study their tool allows projects to focus on HIGH risk defects first and MEDIUM / LOW risk defects can be pushed down the priority list or in some cases ignored.

            You just can't use this study to support your belief that paying for proprietary software is a bad idea.
        • Statistics also work within reasonable context...

          ...otherwise they are worthless numbers.

          By the sheer difference in size of the projects being compared, it is pretty evident that the comparison is between software of a completely different nature or scope. Enterprise software (ERP/CRM/SCM) would likely have more errors per thousand lines of code than an operating system kernel due to the way that type of software evolves (these types of systems are built to be customized to the nines and have more moving parts than an operating system).

          Just comparing numbers - whether they're percentages or raw figures is pointless without understanding the subject matter behind them.
  • Coverity did not find the Heartbleed bug

    So how much is their scanning worth?

    I would not recommend relying on advice from a company that claims to be able to scan for defects but for two years missed THE biggest, most expensive, and most embarrassing bug of all time.