Indefinite vulnerability secrecy hurts us all


Guest editorial by Michal Zalewski

When explaining why it is not possible to meet a particular vulnerability response deadline, most software vendors inevitably fall back to a very simple and compelling argument: testing takes time.

For what it's worth, I have dealt with a fair number of vulnerabilities on both sides of the fence -- and I tend to be skeptical of such claims: while exceptions do happen, many of the disappointing response times appeared to stem from trouble allocating resources to identify and fix the problem, and had very little to do with testing the final patch. My personal experiences are necessarily limited, however -- so for the sake of this argument, let's take the claim at face value.

To get to the root of the problem, it is important to understand that software quality assurance is an imperfect tool. Faulty code is not written to intentionally cripple the product; it's a completely unintended and unanticipated consequence of one's work. The same human failings that prevent developers from immediately noticing all the potential side effects of their code also put limits on what's possible in QA: there is no way to reliably predict what will go wrong with modern, incredibly complex software. You have to guess in the dark.


Because of this, most corporations simply learn to err on the side of caution: settle on a maximum realistically acceptable delay between code freeze and a release (one that still keeps you competitive!) - and then structure the QA work to be compatible with this plan. There is nothing special about this equilibrium: given resources, there is always much more to be tested; and conversely, many of the current steps could probably be abandoned without affecting the quality of the product. It's just that going in that first direction is not commercially viable - and going in the other just intuitively feels wrong.


Once a particular organization has such a QA process in place, it is tempting to treat critical security problems much like feature enhancements: there is a clear downside to angering customers with a broken fix; on the other hand, as long as vulnerability researchers can be persuaded to engage in long-term bug secrecy, there is seemingly no benefit in trying to get this class of patches out the door more quickly than the rest.

This argument overlooks a crucial point, however: vulnerabilities are obviously not created by the researchers who spot them; they are already in the code, and tend to be rediscovered by unrelated parties, often at roughly the same time. Hard numbers are impossible to arrive at, but based on my experience, I expect a sizable fraction of currently privately reported vulnerabilities (some of them known to vendors for more than a year!) to be independently available to multiple actors - and the longer these bugs are allowed to persist, the more pronounced this problem is bound to become.


If this is true, then secret vulnerabilities pose a definite and extremely significant threat to the IT ecosystem. In many cases, this risk is far greater than the speculative (and never fully eliminated) risk of occasional patch-induced breakage - particularly when one happens to be a high-profile target.

Vendors often frame the dilemma the following way:

"Let's say there might be an unspecified vulnerability in one of our products.

Would you rather allow us to release a reliable fix for this flaw at some point in the future; or rush out something potentially broken?"

Very few large customers will vote in favor of dealing with a disruptive patch - IT departments hate uncertainty and fire drills; but I am willing to argue that a more honest way to frame the problem would be:

"A vulnerability in our code allows your machine to be compromised by others; there is no widespread exploitation, but targeted attacks are a tangible risk to some of you. Since the details are secret, your ability to detect or work around the flaw is practically zero.

Do you prefer to live with this vulnerability for half a year, or would you rather install a patch that stands an (individually low) chance of breaking something you depend on? In the latter case, the burden of testing rests with you.

Or, if you are uncomfortable with the choice, would you be inclined to pay a bit more for our products, so that we can double our QA headcount instead?"

The answer to that second set of questions is much less obvious - and more relevant to the problem at hand; depriving the majority of your customers of this choice, and then effectively working to conceal this fact, just does not feel right.


Yes, quality assurance is hard. It can also be expensive to better parallelize or improve automation in day-to-day QA work; and it is certainly disruptive to revise the way one releases and supports products (heck, some vendors still prefer to target security fixes for the next major version of their application, simply because that's what their customers are used to). It is also likely that if you make any such profound changes, something will eventually go wrong. None of these facts makes the problem go away, though.

Indefinite bug secrecy hurts us all by removing all real incentives for improvement, and giving very little real security in return.

* Michal Zalewski is a computer security researcher. He has written and released many security tools, including ratproxy, skipfish and the Browser Security Handbook. He can be found at lcamtuf's blog and on Twitter.