Should we be focusing on vulnerabilities or exploits?

Summary: Mitja Kolsek argues that there's a hidden danger in focusing on limiting exploitability instead of exterminating vulnerabilities.


Guest editorial by Mitja Kolsek

This post was inspired by a recent ZDNet article, "Offensive security research community helping bad guys," and a ThreatPost interview after the Kaspersky security analyst summit, in which Adobe security chief Brad Arkin explains his (Adobe's) philosophy on addressing software vulnerabilities. The crux of this philosophy can be summarized with Brad's words: "My goal isn't to find and fix every security bug, I'd like to drive up the cost of writing exploits." Subsequently, he mentioned that offensive security researchers are "driving that cost down when they research a new technique to hack into software, write a paper and publish it to the world."

Although the average sentiment of the comments under the "offensive security" article was, well, offensive, one thing is true: if the only alternative to driving up the cost of writing exploits were to find and fix every security bug, and one had to choose between the two, the former would be the logical choice. After all, there is a general consensus (or, as some prefer, excuse) that you can never find all security bugs, while one can achieve demonstrable success in driving up the cost of exploitation for many vulnerabilities. (And Adobe, having introduced sandboxing to the Reader, has undoubtedly made real progress in this area.)


Reality vs. Perception

If you're in charge of product security, your official job description is probably something like "make our products secure". But in all likelihood, your effective job description, as your employer sees it, is closer to "make our products perceived as secure". Don't misunderstand this: your employer won't mind if your product is actually secure, but they will mind if it is not perceived as such and that perception hurts sales. I'm sure most people would do their best, and many actually do bend over backwards, to make their products as secure as possible, but what affects a company's bottom line is customers' perception, not reality. And the market's invisible hand (through superiors' and owners' not-so-invisible hands) will make it very clear that perception has priority over reality. This is, incidentally, not only the case in infosec, but the way things work wherever reality is elusive.


Let's think about that for a while. Where does the difference between perception and reality come from? As already noted, reality is elusive in information security, full of known unknowns (have we missed any buffer overflows or XSSes; is our product being silently exploited?) as well as unknown unknowns (who knows what new attack methods those pesky researchers will come up with tomorrow?). And while you do know that security of your product improves with each identified and fixed vulnerability, you don't know where you are on the scale - there is, alas, no scale.

Perception, on the other hand, is more measurable and more manageable: you can listen to your customers and prospects to see what they think of your security. In the absence of your marketing material, this will largely depend on their knowledge of (1) your product's vulnerabilities and (2) publicized incidents involving your product. The former frequently find their way onto public vulnerability lists, and from there to your customers, but the latter are trickier: I'm confident that an overwhelming majority of break-ins (typically data theft) are never even detected, much less publicized. And for those that are detected, is the exploited vulnerability ever determined at all? As a result, most publicized incidents that are actually linked to vulnerable products involve self-replicating exploits (e.g., worms) that ended up in malware researchers' labs. The point being that we generally only know about incidents involving specific remotely exploitable vulnerabilities suitable for worm-like malware. Others remain unknown.

The Hidden Danger

Developing methods for limiting exploitability is of great value. Sandboxes, ASLR, DEP and other exploit mitigation techniques do drive the cost of exploitation up, and do so for a wide range of different vulnerability types. This is good.


There is, however, a hidden danger in focusing on limiting exploitability instead of exterminating vulnerabilities. Let me illustrate with a (maybe not so) hypothetical dialog:

You: "There is a vulnerability in your product."
Vendor: "Yes, but it's not exploitable."
You: "How do you know it's not exploitable?"
Vendor: "Well, it hasn't been exploited yet."
You: "How do you know it hasn't been exploited yet?"
Vendor: "We're not aware of any related incidents. Are you?"
You: "Uhm..., no, but..."
Vendor: "Case closed."

The danger here is that replacing a determinable value (existence of a known vulnerability) with a non-determinable one (absence of exploits/incidents) when deciding whether to fix a security flaw may result in a better perception of security ("We don't know of any incidents, therefore there aren't any") but worse reality. Why? Because it opens the door to reasoning that it doesn't make sense to fix vulnerabilities if there's a second layer of defense that blocks their exploitability. And then, once someone finds a hole in this second layer of defense, there will be an array of vulnerabilities to choose from for mounting a successful attack.


So let's hope that software vendors don't have to choose between limiting exploitability and exterminating vulnerabilities, but can actually do both. (Google's Chris Evans replied to Brad on Twitter: "Unfortunately, modern security best practice is BOTH 1) sandbox and 2) find/fix bugs aggressively.") I know from personal experience that Adobe is actively finding and fixing bugs in their products in addition to making exploitation harder, so I think Brad is being misunderstood there. But as far as hacking exploit-mitigation mechanisms goes, a flaw in such a mechanism is a vulnerability like any other: it allows an adversary to do something that should have been impossible. As such, it is unreasonable to expect that these vulnerabilities would not be researched, discussed, privately reported, published on mailing lists, sold and bought, and silently or publicly exploited just like any others, depending on who finds them.

At ACROS Security, Mitja is leading a team of security researchers working for clients who consider low hanging vulnerabilities embarrassing.



Talkback

  • RE: Should we be focusing on vulnerabilities or exploits?

    Quoted in the article:
    "modern security best practice is BOTH 1) sandbox and 2) find/fix bugs aggressively"

    Not to mention a security development life-cycle for software. A question: when should highly-exploited, legacy software such as Adobe Reader/Acrobat and Oracle Java be rewritten within the confines of a security development life-cycle? The world was a different place at the time of their creation and they seem destined to be around for some years to come. [Note: I left Adobe Flash Player out as it appears that HTML5 will eventually replace it.]
    Rabid Howler Monkey
  • The beauty of LSM

    Linux Security Modules (SELinux, AppArmor) provide a sandbox environment to police BOTH the 'kernel' and the 'app'.

    If you use Linux with LSM and have, for example, Firefox[1] running in LSM, then there is no immediate urgency (unlike zero-day with Windows) to apply security patches.

    If an 'exploit' succeeds in vectoring itself via a known vulnerability, Microsoft Windows cannot stop the process from making calls to the system kernel.

    Even if you use Chrome on Windows, the Engineers have made a specific 'disclaimer' for this, e.g., DLL injection.

    ---------------------
    [1] Or any 'app' that requires sandboxing
    Dietrich T. Schmitz *Your
  • RE: Should we be focusing on vulnerabilities or exploits?

    [redacted]
    jestersniper@...
  • RE: Should we be focusing on vulnerabilities or exploits?

    I agree wholeheartedly that it is entirely irrational to chastise security researchers for doing their best to ensure that security measures put in place to sandbox users are actual security measures and not simple band-aids that are easily bypassed. If the creators of such sandboxing measures don't like the fact that they are being audited, so to speak, they shouldn't be attempting to create a security-through-obscurity situation where they tell their consumers they are secure and hope the consumers (and the hackers) just assume it to be true.
    hacktalk