Guest editorial by Pete Lindstrom
The many flavors of vulnerability disclosure have a long history in the information security field. While security professionals sometimes support a moderate form of managed disclosure, the introduction of higher consequences associated with SCADA systems has caused many to rethink their perspectives. A seasoned approach to technology risk management and a closer look at the risk model can contribute to clearer thinking in this area.
The key to understanding the impact of disclosure on IT risk is to acknowledge the interaction of threats and vulnerabilities. It is easy to fall into the trap of thinking that only vulnerability levels are affected.
When researchers disclose vulnerabilities, they do so in the name of more secure software. In actuality, the vulnerability level is not affected at all by the initial disclosure; vulnerability levels change when vulnerable systems are introduced into the environment, regardless of whether we know about the flaws or not (because an attacker might). The intention of disclosure is to identify the vulnerability and then mitigate it - usually through a patch - so that the vulnerability level is reduced.
But patching a system can be a very risky, time-consuming process. As with any system change, a patch must be thoroughly evaluated and tested prior to its move to production. Even then, patches fail. So the costs associated with the patch process must be compared to the perceived benefits of protecting against compromise. Since we know that very few vulnerabilities are ever actually exploited, and since SCADA systems are highly sensitive systems that often go unchanged for long periods, the likelihood of a patch actually being applied is fairly low. What's more, since no patch is even available in this case, some other mitigation must be deployed. Ultimately, it is unlikely that vulnerability levels will be affected for many months.
Many (but not all) researchers seek out vulnerabilities in an attempt to reduce risk. However, they ignore the threat component. That means that, for risk to be reduced, any reduction in the vulnerability level must outweigh the increase in the threat level. Even though most vulnerabilities are never exploited, there are a number of examples from the past showing that more incidents occur after a disclosure event. Given the SCADA situation at hand, it is unlikely that the vulnerability level will be reduced enough to offset the increase in threat, and therefore more incidents are likely.
The second point is simply one of futility. In the face of these "more secure" programs, how has the risk changed? Is anyone out there suggesting that risk is actually going down? The problem is that there is too much software in the world's codebase (or really in any of today's large data centers) to find every bug using the techniques employed today. Not only that, but the gap is widening. It's like a bad math word problem: software development is heading east at 50 mph and vulnerability research is following at 5 mph - when will it catch up?
SCADA systems are serious business, where human lives could be at stake. Stuxnet is an excellent example of both the futility of bug-finding - we missed those vulns - and the promise of alternative techniques - we still identified the breach and recovered. It makes no sense to exacerbate a problem in order to highlight its significance. Let's get security researchers - the best talent in our field - to start thinking about the problem in different ways.
* Pete Lindstrom, principal of Spire Security, has been assessing IT risk and managing information security for over 20 years. He blogs regularly at Spire Security Viewpoint. A different version of this op-ed first appeared on the Verizon Business security blog.