
Counting vulnerabilities is pointless

Application security vendor Cenzic released a report today highlighting Mozilla Firefox as the most vulnerable web browser based on vulnerability count. The problem is, counting vulnerabilities is pointless. In fact, it's worse than pointless: it can lead us to draw false conclusions.

Sure, the report makes interesting reading, highlights of which are:

  • 78 percent of the total reported vulnerabilities affected Web technologies, such as Web servers, applications, Web browsers, Plugins and ActiveX, which is a significant increase from last year.
  • Of Web browser vulnerabilities, Firefox had the largest percentage, at 44 percent. Safari vulnerabilities came in at 35 percent, significantly higher than even Internet Explorer.
  • Sun Java, PHP, and Apache continue to be among the Top 10 vendors having the most severe vulnerabilities for the first half of 2009.

The problem is, the information you get from a vulnerability count is next to worthless. Why? Because it's a weak metric thrown around by people who put too much faith in raw numbers. Let me give you an example.

Let's say you give me a gold coin to look after. Which would bother you more: that I left your coin in an unlocked car on the side of the road, or unlocked in a secure compound surrounded by security cameras and attack dogs? In both situations there's only one big security vulnerability, but the two situations are far from equal.

As far as vulnerabilities go, there are far better metrics than count. Time to fix is an important one, as is the number of zero-day vulnerabilities. Then there's the overall severity of the vulnerability in question.

I also think we should be paying attention to more exotic metrics, such as how many unpatched users there are and how many days a product has exposed users to vulnerabilities over a given period.
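To make that last metric concrete, here is a rough sketch of how one might compute "days of exposure" from pairs of disclosure and patch dates. The vulnerability records below are invented for illustration, and the interval-merging approach is just one reasonable way to avoid double-counting days when multiple flaws are open at once:

```python
from datetime import date

# Hypothetical vulnerability records: (disclosed, patched) date pairs.
# These are illustrative values, not real CVE data.
vulns = [
    (date(2009, 1, 5), date(2009, 1, 20)),
    (date(2009, 1, 15), date(2009, 2, 1)),   # overlaps the first window
    (date(2009, 3, 1), date(2009, 3, 10)),
]

def days_of_exposure(windows):
    """Total days with at least one unpatched vulnerability,
    merging overlapping exposure windows so days aren't counted twice."""
    merged = []
    for start, end in sorted(windows):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous window: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return sum((end - start).days for start, end in merged)

print(days_of_exposure(vulns))  # 27 days (Jan 5 - Feb 1) + 9 days (Mar) = 36
```

A product with a hundred low-severity flaws patched within a day of disclosure would score far better here than one with a handful of flaws left open for months, which is exactly the distinction a raw count hides.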

Bottom line: counting individual vulnerabilities is not only pointless, it can cause people to draw incorrect conclusions.

Thoughts?
