Security metrics: Is there a better way?

Summary: A report arguing that the first year of Vista has been more secure--or at least has had fewer vulnerabilities--than XP and other operating systems has raised a ruckus. The issue raises a question about whether there are any metrics that could accurately capture whether an operating system is more secure.

I posed the metrics question in my previous report on the claims by Jeff Jones, a security guru at Microsoft.

Here is a look at some of the feedback:

  • How about compromised systems per vulnerability? Yes, I know the flaw there is that Windows would be a sure loser by reason of its larger install base. So, maybe state it as a percentage of the installed base? What really makes this difficult is that a Windows user is a fool if they don't have some kind of virus protection enabled. That makes any metric a measurement of not only Windows itself but the malware protection companies as well. Do I blame MS when Norton or McAfee fails? Maybe. Because a really secure OS shouldn't need them.
  • I think that a metric describing an entire operating system is of little use. Most operating systems can be configured in countless ways, with vast differences in their level of security. I think a more useful metric would be one that describes the security of an individual implementation. Perhaps one that scans the network or PC in question and compares the number of vulnerabilities found in the implementation to the total number of vulnerabilities discovered for the operating system.
  • Metrics on what is most secure give you a false sense of security, so ignore them. They're purely marketing tools, and that's it. Security is about layers. So you could have a completely unsecured OS, but if you have layered your security, that will not be a problem. The OS is just one small part of the big picture in terms of security.
  • I would like to see required user interaction vs. non-required user interaction. If I can prevent issues by knowing what not to do, I am less worried about the vulnerability. Vulnerabilities that can't be educated against scare me more.
  • All metrics will be flawed, since measurement of security is subjective. You can never quantitatively assess the value of UAC in Vista, or of AppArmor, SELinux, or PaX in Linux. Similarly, you cannot quantify the value of apt-get in Debian-based distros or similar applications in other distros; you will only know that these make it very easy to patch and reduce the user-days-of-risk (which is the most important factor in most situations). Furthermore, user-days-of-risk varies from user to user. What actually should be measured is whether, given the mitigating factors available, you will have a fully functional box which is secure. And here you can actually assess the past record of the vendors. Giving a precise number for security is just like snake oil. Functionality is of prime importance: I will do these and these things. Can I do them on both platforms or just one? If I can do them on both, where can I be more secure? If I can be secure on both, where is it easier to be more secure, and more cost-effective? These are the real-world questions to answer, not some numbers comparing stones to fruits.
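
One idea in that list can be made concrete: user-days-of-risk. Here is a minimal sketch in Python - the vulnerability names and dates are entirely hypothetical, not real advisories - that sums the days a box sat exposed to each publicly known flaw before it was patched:

    from datetime import date

    # Hypothetical records: (flaw, publicly disclosed, patch applied on this box).
    vulns = [
        ("flaw-A", date(2007, 1, 10), date(2007, 1, 14)),
        ("flaw-B", date(2007, 3, 2),  date(2007, 3, 30)),
        ("flaw-C", date(2007, 6, 21), date(2007, 6, 22)),
    ]

    # User-days-of-risk: total days this box sat exposed to known flaws.
    days_of_risk = sum((patched - disclosed).days
                       for _, disclosed, patched in vulns)
    print(days_of_risk)  # 4 + 28 + 1 = 33

As the commenter notes, the same flaw yields a different number for every user, because the patch-applied dates are theirs, not the vendor's.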

Thanks for the feedback. Bottom line: There isn't one perfect metric--and if someone on a talkback cooked up one, I'd encourage him or her to patent it quickly. What's needed in future evaluations that try to portray whether one OS is more secure than another is a scorecard that incorporates vulnerabilities, patches, usability, ease of configuration and probably a dozen other factors. This scorecard approach wouldn't be easy to summarize in a pithy blog post, but it would be far more accurate.

Topics: Operating Systems, Security, Software

Talkback

  • Open Source vs. Closed Source

    Unfortunately, the biggest problem here is open source vs. closed source. Here's the deal: it's easier to find bugs in open source apps, as you can see the source. On top of that, closed source (Windows) often means a closed-door policy on vulnerability disclosures. Microsoft hires some of the best and brightest to help with its security initiative; you have to imagine that they are finding a lot of flaws, and those flaws do not get released to the public.

    The comparison that Jones makes is flawed. I'm not sure there is a good way to track metrics between open and closed source software.
    nmcfeters
    • Absolutely true!

      There is no way to validly compare the numbers. Too many closed source vulnerabilities are found but undisclosed.

      And another point I'd make is about those who say "the only reason Linux has fewer attacks against it is that Microsoft has a larger installed base and therefore is a bigger target." That's a faux defense against making the switch - if I'm seeing an OS that has a lesser attack surface, I'm gonna think that is actually a reason to move to it! The argument that it may catch up in virus/worm/etc. count after it gains a larger base is moot - by then I can consider moving to another iteration or product that is the new "too small to be of interest and pretty darn secure" product. What matters is now.
      Techboy_z
      • attack surface != installed base

        The attack surface of an OS is the set of commands that can be used to attack a system. If a command cannot be accessed by an attacker, it isn't part of the attack surface. The attack surface of a system is the union of the attack surfaces of all its components. It is difficult to say that a particular API call is safe, so it is hard to say a priori that a system is safe because of how you configured it. The only reliable way to reduce the attack surface is to make a minimal install: software that isn't present is safe from attack. The most effective thing you can do is to have your systems properly configured - strong passwords, keep patched, run exposed services via users with minimal permissions, and use a layered defense. *nix is far superior to Windows in your ability to remove software. (At least until people start running servers on embedded versions of Windows.)

        This doesn't have much to do with installed base, except that more people will be probing the attack surfaces of popular operating systems.

        I actually think that you would be less secure using an 'off brand' server, because it is likely that you will have a hard time finding competent admins.
        shis-ka-bob
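
        A crude way to observe the network-facing attack surface described above is to enumerate a machine's listening sockets: every open port is an entry point someone can probe. The sketch below is a minimal illustration in Python using the third-party psutil package (an assumption; any socket-listing tool would do), not a real audit - on a minimal install it should simply print fewer lines.

            import psutil  # third-party: pip install psutil

            # Listening TCP sockets as a rough proxy for the reachable
            # attack surface. May need elevated privileges on some OSes.
            listening = {
                (conn.laddr.ip, conn.laddr.port)
                for conn in psutil.net_connections(kind="tcp")
                if conn.status == psutil.CONN_LISTEN
            }

            for ip, port in sorted(listening, key=lambda addr: addr[1]):
                print(f"{ip}:{port}")
            print(f"{len(listening)} listening endpoints")
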
        • Therein lies the problem

          Competent admins. On most Windows networks that I've configured and secured, do I feel secure from the outside world? Most definitely. On any network where I'm not privy to the security architecture, I'm most definitely not. And if I've got someone plugged into my local network without MAC filtering on the switches and/or IPsec running, then it's definitely a serious concern. Example: I recently got a call that a PC used strictly for web browsing by employees who work extremely long shifts wasn't able to get online. I arrived on-site to find an access point plugged into the network, wide open. That's pretty scary. Fortunately, the network equipment in place recognized the unauthorized access point and immediately shut it down, so they weren't able to do anything with it - but imagine if you didn't have that in place. And in this situation, in order to accomplish that, the firewall has to have wireless devices attached that scan for SSIDs passing traffic back to its LAN interface, and it immediately shuts down that traffic.

          It's along the same lines that I won't do a full security audit for my regular customers; I recommend they get someone else. For one, since I've configured it, my confidence in my own work could possibly lead to me not checking things I often would on a stranger's network. But it's fairly easy to secure a network from the outside. What's scary is when I do a security audit, walk into an office I've never set foot in where no one recognizes me, and tell them I'm the new computer tech with their IT department/consulting company. Out of maybe 40+ times of doing so, only twice have I had people demand proof. Every other time they've let me do whatever I wanted, sitting at their computers logged in as them with full access to whatever that particular user has. This typically extends all the way throughout the company, going as high as the CEO. Needless to say, I don't bother the person who hired me, but they're usually extremely shocked by my reports. I've even had a few cases where they crawled under their desk and handed me their patch cable so I could plug my laptop in.

          OS security is such a small piece of the puzzle that it's almost insignificant. Security starts with education and has to be implemented at every level of network and business operations.
          LiquidLearner
          • This thread...

            Is probably one of the most intelligent I've ever read here.

            I would say that I am not certain that open source software makes it easier to identify bugs and security holes. I would tend to think an architectural black box makes it far less likely that people will quickly and easily identify security risks they can take advantage of. It does at times, however, lull companies into a false sense of security, as stated above. There really aren't enough statistics to call it either way, and statistics (metrics included) are generally skewed to the opinion of the auditors anyway.

            As far as those who write the viruses and exploits go, you have to realistically account for one consistent psychological factor: motive.

            I think the sense of camaraderie is a protective feature for the open source community. Generally, people attack products because they have an inherent dislike for them and enjoy breaking security and finding holes (those that aren't doing it to make a quick buck as 'security analysts', that is - not quite the rapid-growth market it once was). There's not a lot of this in the open source community, as they all have a mutual 'enemy,' which acts as a defense mechanism.

            Though, no matter how you spin it, people who are looking to gather saleable information (credit, bank account info, ID, etc.) are targeting Windows because it has the highest possible success rate. Also, the public release of the viruses is more a saturation and effectiveness test than anything else. The goal of anyone looking to make serious money this way is to get access to corporate systems and servers; it's a less dangerous and nearly untraceable method of potentially infecting one of those machines and gaining access to said systems (see: Bank of America).
            Spiritusindomit@...
  • Security by role

    Rather than focusing on a 'secure operating system', why not focus on securing a 'web application server' and then compare a full stack? (This is one example of a role; you could also compare other roles like 'desktop', but I want to start with one concrete example.) A full stack would be something like

    Windows 2003 + IIS + SQL Server 2005 + ASP.NET 2.0
    vs.
    Ubuntu Server + JBoss/Java 1.5 + PostgreSQL 8.2

    (Running all this on one box may be a little unrealistic, but you will probably have the same OS on the db server and the web server.) You can still count vulnerability reports, but carefully limit the included products to those that you would actually install. I would NOT install X.org, KDE, GNOME, Firefox, etc. on an Ubuntu web server. I would have to install IE on a Windows server, because as we know you can't realistically remove IE. On the Li/U-nix side, you would include the basic libraries, like OpenSSL.

    The big advantage of the Li/U-nix world is that you can install much less software. I get angry when you count any vulnerability in any installable software in a Linux distro and then say 'ooh look, barebones Windows has fewer vulnerability reports'. This is grossly misleading, because it is the opposite of the real situation. In Windows, I have to accept a windowing system and a browser. In Linux, I have the ability to remove (or never install) all sorts of services that I don't want to run.

    What I don't know is how you take into account a layered defense. If you add layers in this scheme, you end up with more software, so there are more counted vulnerabilities. For example, if I were to put Pound on OpenBSD in front of either web server, it would prevent a whole host of malformed HTTP traffic. This should improve security. I suppose that the rigorous approach would be to see if adding Pound would have eliminated a particular attack. If that is the case, then the underlying defect should not be counted.

    The other way to improve the measure would be to weight each vulnerability by the length of time the server was vulnerable divided by the length of the study, so each vulnerability carries a weight reflecting its damage.

    What I would really like to see would be a neutral third party, like Coverity, run scans of source code. They would not be allowed to discuss particular defects, but they could give a count of the lines of code scanned, the number of defects identified, and the number of defects closed. But there is no way that the commercial vendors will allow those sorts of numbers to be published; they don't have the b*lls to enter into a fair comparison of code quality. And you (the IT press) will not boycott them if they refuse, so they have no incentive.
    shis-ka-bob
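
    The counting rule in this comment - include only vulnerabilities in software the role actually installs - plus the exposure-time weighting suggested above fits in a few lines of Python. Every package name and number below is hypothetical, purely to show the arithmetic:

        STUDY_DAYS = 365  # length of the comparison window, in days

        # Software actually installed for this server role (hypothetical).
        installed = {"httpd", "openssl", "postgresql"}

        # (package, days the box stayed exposed before patching) per report.
        reports = [
            ("httpd", 30),
            ("openssl", 12),
            ("postgresql", 5),
            ("firefox", 90),  # dropped below: not part of this role's install
        ]

        # Count a report only if its package is installed; weight it by the
        # fraction of the study period the server was exposed.
        score = sum(days / STUDY_DAYS
                    for pkg, days in reports
                    if pkg in installed)

        print(f"weighted exposure score: {score:.3f}")  # (30+12+5)/365 = 0.129

    A lower score then means the role shipped fewer relevant flaws, patched them faster, or both; comparing scores across two stacks is only fair if both footprints are enumerated the same way.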
  • First thing...

    believe nothing from MSFT. Anything from MSFT should be considered beyond suspect.

    Any report from anyone there should be considered shot through with falsehoods and special pleading because of the continuously ethically challenged nature of its management and marketing.

    Look at Vista or Xbox or Zune or WM or Live Search. These are all products whose sales and marketing have been shot through with lies and deceit ("The Wow starts now!"???).

    Furthermore, each MSFT "research report" should be viewed as merely another "marketing report" subject to the same scrutiny that would be given to any emanation issued from a communist dictatorship.

    The point is simple: look at the looooong list of lies, deceit and outright falsehoods from MSFT and its management and judge accordingly. Nothing from MSFT should be given initial credence; everything from MSFT should be considered "virus-ridden" - for good reason.

    The default assumption on any statement from MSFT should be that it is utterly false until repeatedly demonstrated that it is possessed of even a scintilla of credibility.
    Jeremy W
  • i agree with you

    Security is about layers, and yes, the media for TOO long has just lumped them together. But I blame:
    1st, the programmers, for allowing holes;
    2nd, IT staff, for not testing and implementing enough;
    3rd, and most of all, the carriers (AT&T, etc., etc.) for allowing the spam and botted machines to continue to spit out the garbage that chokes down the bandwidth and perpetuates the problem!
    cwhull
  • RE: Security metrics: Is there a better way?

    Why did I also have to buy all new peripherals for Vista...camera, printer, scanner, etc...this SUCKS
    cardinal33
  • RE: Security metrics: Is there a better way?

    I lost more time in 2007 from security issues than in my prior 20+ years in IT combined.
    TeranceH
  • The only metric the whiners want to see...

    Is the one that makes Microsoft look like flaming dog crap in a paper bag. People will never accept anything that is contradictory to their belief systems, so there's really no point in discussing it with them. Let them whine, and the people who think objectively can put all platforms to best use.
    Spiritusindomit@...
  • RE: Security metrics: Is there a better way?

    Benjamin Disraeli and Mark Twain popularized the line:
    There are three kinds of lies: lies, damned lies, and statistics.
    Security, like many other aspects of computer and network operations, has many facets, and relying on one method will "blind" you to the others. Metrics, used properly along with other methods of collecting information, will help you understand the situation you are dealing with. The more views of security and operations you have, the better your awareness of your system. The information may be overwhelming at first, but each metric and piece of information is needed to reveal what the others have missed. Learning to read this information is important, so that if something is awry you can check the proper system.
    phatkat
  • RE: Security metrics: Is there a better way?

    I don't know about metrics. What I can state is that while I was using XP Professional, my antivirus software used to find 3-6 malware infections per day. Since I switched to Vista Ultimate last May, I've found a total of two infections. Vista is doing something right!
    kanadaiy@...