A report arguing that the first year of Vista has been more secure--or at least has had fewer vulnerabilities--than XP and other operating systems has raised a ruckus. The dispute raises the question of whether any metric can accurately capture how secure an operating system is.
I posed the metrics question in my previous report on the claims by Jeff Jones, a security guru at Microsoft.
- How about compromised systems per vulnerability? Yes, I know the flaw there is that Windows would be a sure loser by reason of its larger install base. So, maybe state it as a percentage of the installed base? What really makes this difficult is that a Windows user is a fool if they don't have some kind of virus protection enabled. That makes any metric a measurement of not only Windows itself but the malware protection companies as well. Do I blame MS when Norton or McAfee fails? Maybe. Because a really secure OS shouldn't need them.
- I think that a metric describing an entire operating system is of little use. Most operating systems can be configured in countless ways, with vast differences in their level of security. I think a more useful metric would be one that describes the security of an individual implementation. Perhaps one that scans the network or PC in question and compares the number of vulnerabilities found in the implementation to the total number of vulnerabilities discovered for the operating system.
- A metric on what is most secure gives you a false sense of security, so ignore them. They're purely marketing tools, and that's it. Security is about layers. You could have a completely unsecured OS, but if you have layered your security, that will not be a problem. The OS is just one small part of the big picture in terms of security.
- I would like to see vulnerabilities that require user interaction vs. those that don't. If I can prevent issues by knowing what not to do, I am less worried about the vulnerability. Vulnerabilities that can't be educated against scare me more.
- All metrics will be flawed, since measurement of security is subjective. You can never quantitatively assess the value of UAC in Vista, or of AppArmor, SELinux, or PaX in Linux. Similarly, you cannot quantify the value of apt-get in Debian-based distros or of similar tools in other distros; you only know that they make it very easy to patch and reduce the user-days-of-risk (which is the most important factor in most situations). Furthermore, user-days-of-risk varies from user to user. What actually should be measured is whether, given the mitigating factors available, you will have a fully functional box that is secure. And here, you can actually assess the past record of the vendors. Giving a precise number to security is just like snake oil. Functionality is of prime importance: I will do these and these things. Can I do them on both platforms or just one? If I can do them on both, where can I be more secure? If I can be secure on both, where is it easier to be more secure? And more cost-effective? These are the real-world questions to answer, not some numbers comparing stones to fruits.
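The per-implementation metric proposed above--compare the vulnerabilities present on a given box to the total known for that OS--is simple enough to sketch. Here's a rough Python illustration; the function name and every number are invented for the example, not drawn from any real scanner:

```python
# Hypothetical sketch of a per-implementation metric: score a specific
# machine by the fraction of the OS's known vulnerabilities that
# actually apply to it, rather than scoring the OS as a whole.

def implementation_exposure(vulns_found: int, vulns_known: int) -> float:
    """Fraction of the OS's known vulnerabilities present on this box."""
    if vulns_known == 0:
        return 0.0
    return vulns_found / vulns_known

# A hardened install exposes far fewer of the known flaws than a
# default one, even though both run the "same" operating system.
default_install = implementation_exposure(vulns_found=40, vulns_known=50)   # 0.8
hardened_install = implementation_exposure(vulns_found=5, vulns_known=50)   # 0.1
```

The point of this shape is that it rewards configuration, which the talkbacker argues is where most of the real security difference lives.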
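The user-days-of-risk idea mentioned above can also be made concrete: for each vulnerability, count the days between public disclosure and the day a patch landed on that particular box, then sum. A minimal sketch, with dates made up purely for illustration:

```python
from datetime import date

# "Days of risk" for one vulnerability: days from public disclosure
# until a patch was actually applied on this machine.

def days_of_risk(disclosed: date, patched: date) -> int:
    return max((patched - disclosed).days, 0)

# Two hypothetical vulnerabilities on one box:
vulns = [
    (date(2007, 1, 9), date(2007, 1, 12)),   # patched 3 days after disclosure
    (date(2007, 3, 2), date(2007, 4, 1)),    # a slower 30-day turnaround
]
total = sum(days_of_risk(d, p) for d, p in vulns)  # 33 days of risk
```

Because the patch date is per-machine, the same OS yields different totals for different users--which is exactly the talkbacker's point that the number varies from user to user.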
Thanks for the feedback. Bottom line: There isn't one perfect metric--and if someone on a talkback cooked one up, I'd encourage him or her to patent it quickly. What's needed in future evaluations that try to portray whether one OS is more secure than another is a scorecard that incorporates vulnerabilities, patches, usability, ease of configuration and probably a dozen other factors. This scorecard approach wouldn't be easy to summarize in a pithy blog post, but it would be far more accurate.
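To show the shape of such a scorecard, here's a toy version: a weighted average over several factors. The factor names, weights, and ratings are all invented for illustration--picking defensible real-world values is the hard part, not the arithmetic:

```python
# Toy scorecard: combine several 0-10 factor ratings into one weighted
# score. Weights are hypothetical and would themselves be debated.

weights = {
    "vulnerabilities": 0.30,   # fewer/less severe flaws rates higher
    "patch_speed":     0.25,   # how fast fixes ship and get applied
    "usability":       0.20,   # secure defaults users won't disable
    "configuration":   0.15,   # ease of hardening an install
    "mitigations":     0.10,   # UAC, SELinux, AppArmor and the like
}

def scorecard(ratings: dict[str, float]) -> float:
    """Weighted average of 0-10 ratings, one per factor."""
    return sum(weights[k] * ratings[k] for k in weights)

os_a = scorecard({"vulnerabilities": 6, "patch_speed": 8, "usability": 5,
                  "configuration": 7, "mitigations": 6})   # 6.45
```

A single headline number still hides the detail, which is why the full scorecard--not its summary score--is what future evaluations should publish.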