Guest editorial by Mitja Kolsek
Although the average sentiment of the comments under the "offensive security" article was, well..., offensive, one thing is true: if the only alternative to driving up the cost of writing exploits were to find and fix every security bug, and one had to choose between the two, the former would be the logical choice. After all, it is the general consensus (or, as some prefer, the excuse) that you can never find all security bugs, while demonstrable progress can be made in driving up the cost of exploitation for many types of vulnerabilities. (And Adobe, having introduced sandboxing in Reader, has undoubtedly made real progress in this area.)
If you're in charge of product security, your official job description probably reads something like "make our products secure". But in all likelihood your effective job description, as your employer sees it, is closer to "make our products perceived as secure". Don't misunderstand this: your employer won't mind if your product actually is secure, but will mind if it isn't perceived as such and that perception hurts sales. I'm sure most people would do their best - and actually do bend over backwards - to make their products as secure as possible, but what affects a company's bottom line is customers' perception, not reality. And the market's invisible hand (through superiors' and owners' not-so-invisible hands) will make it very clear that perception takes priority over reality. This, incidentally, is not only the case in infosec, but the way things work wherever reality is hard to measure.
Perception, on the other hand, is more measurable and more manageable: you can listen to your customers and prospects to see what they think of your security - and this will, in the absence of your marketing material, largely depend on their knowledge of (1) your product's vulnerabilities and (2) publicized incidents involving your product. The former frequently find their way to public vulnerability lists - and to your customers - but the latter are trickier: I'm confident that the overwhelming majority of break-ins (typically data theft) are never even detected, much less publicized. And even for those that are detected, how often is the exploited vulnerability actually determined? As a result, most publicized incidents that can be linked to vulnerable products involve self-replicating exploits (e.g., worms) that ended up in malware researchers' labs. The point is that we generally only learn about incidents involving specific remotely exploitable vulnerabilities suitable for worm-like malware. The others remain unknown.
The Hidden Danger
Developing methods for limiting exploitability is of great value. Sandboxes, ASLR, DEP and other exploit mitigation techniques do drive up the cost of exploitation, and do so across a wide range of vulnerability types. This is good.
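To make the idea concrete, here is a minimal, hypothetical C sketch of the kind of exploitation step a mitigation like DEP/NX blocks: treating attacker-controlled data as code. The scenario assumes an x86/x86-64 Linux system compiled with default gcc settings (non-executable stack); the bug itself is not shown, only the exploitation attempt.

    #include <string.h>

    int main(void) {
        /* A single RET opcode (x86/x86-64); harmless even if executed. */
        unsigned char payload[] = { 0xC3 };

        /* Stack buffer standing in for attacker-controlled data. */
        unsigned char buf[sizeof payload];
        memcpy(buf, payload, sizeof payload);

        /* Treat the data as code. With DEP/NX the stack page is not
         * executable, so this call raises SIGSEGV instead of running
         * the injected byte; compiling with -z execstack would restore
         * the classic, exploitable behavior. */
        void (*fn)(void) = (void (*)(void))buf;
        fn();

        return 0;
    }

Note what the sketch does and does not show: the mitigation blocks the exploitation step, not the bug itself. That distinction is exactly where the danger lies.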
You: "There is a vulnerability in your product." Vendor: "Yes, but it's not exploitable." You: "How do you know it's not exploitable?" Vendor: "Well, it hasn't been exploited yet." You: "How do you know it hasn't been exploited yet?" Vendor: "We're not aware of any related incidents. Are you?" You: "Uhm..., no, but..." Vendor: "Case closed."
The danger here is that replacing a determinable value (the existence of a known vulnerability) with a non-determinable one (the absence of known exploits or incidents) when deciding whether to fix a security flaw may produce a better perception of security ("We don't know of any incidents, therefore there aren't any") but a worse reality. Why? Because it opens the door to the reasoning that it doesn't make sense to fix vulnerabilities as long as a second layer of defense blocks their exploitation. And then, once someone finds a hole in that second layer, attackers will have an array of known but unfixed vulnerabilities to choose from for mounting a successful attack.
* At ACROS Security, Mitja leads a team of security researchers working for clients who consider low-hanging vulnerabilities embarrassing.