How SCADA highlights the futility of finding security vulnerabilities
Pete Lindstrom argues that 'irresponsible' disclosure of security holes in SCADA systems could put human lives at risk and calls on the security research community to start thinking about the vulnerability problem in different ways.
The many flavors of vulnerability disclosure have a long history in the information security field. While security professionals sometimes support a moderate form of managed disclosure, the introduction of higher consequences associated with SCADA systems has caused many to rethink their perspectives. A seasoned approach to technology risk management and a closer look at the risk model can contribute to clearer thinking in this area.
The key to understanding the impact of disclosure on IT risk is to acknowledge the interaction of threats and vulnerabilities. It is easy to fall into the trap of thinking that only vulnerability levels are affected.
When researchers disclose vulnerabilities, they do it in the name of more secure software. In actuality, the vulnerability level is not affected at all by the initial disclosure; the level changes when vulnerabilities are introduced into the environment, whether or not we know about them (an attacker might already). The intention of disclosure is to identify the vulnerability and then mitigate it - usually through a patch - so that the vulnerability level is reduced.
But patching a system can be a very risky, time-consuming process. As with any system change, a patch must be thoroughly evaluated and tested prior to its move to production. Even then, patches fail. So the costs associated with the patch process must be compared to the perceived benefits of protecting against compromise. Since we know that very few vulnerabilities are ever actually exploited, and SCADA systems are highly sensitive systems that often go unchanged for long periods, the likelihood of a patch actually being applied is fairly low. What's more, when no patch is even available, some other mechanism must be deployed. Ultimately, it is unlikely that vulnerability levels will be affected for many months.
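The cost-benefit comparison above can be sketched as a toy expected-value check. All of the figures below are hypothetical assumptions chosen only to illustrate the shape of the argument, not real SCADA operator data:

```python
# Toy patch cost-benefit check (all figures are hypothetical assumptions).
patch_cost = 50_000            # testing, scheduled downtime, rollback risk
incident_cost = 1_000_000      # assumed cost of a successful compromise
p_exploit_unpatched = 0.02     # assumed chance the vuln is ever exploited

# Expected loss avoided by patching, under these assumptions.
expected_benefit = p_exploit_unpatched * incident_cost

patch_worthwhile = expected_benefit > patch_cost
print(expected_benefit, patch_worthwhile)
```

Under these illustrative numbers the expected benefit (20,000) falls short of the patch cost, which is one way to see why operators of rarely-changed, high-sensitivity systems so often defer patching.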
What can seem counter-intuitive is that threat is impacted significantly with disclosure details. Since intelligent adversaries have their own cost-benefit analyses, anything that drives down their costs increases their benefits. The more information provided, such as weaponized exploit code, the lower the costs. With SCADA systems, the attacker knowledge base is still relatively small and thus expert contributions from whitehats help the bad guys tremendously.
Many (but not all) researchers seek out vulnerabilities in an attempt to reduce risk, but they ignore the threat component. For risk to be reduced, any reduction in the vulnerability level must outweigh the increase in the threat level. Even though most vulnerabilities are never exploited, past examples show that more incidents occur after a disclosure event. Given the SCADA situation at hand, it is unlikely that the vulnerability level will be reduced enough to offset the increase in threat, and therefore more incidents are likely.
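The trade-off can be made concrete with a simple multiplicative risk model (risk = threat x vulnerability, on arbitrary relative scales). The model and its numbers are illustrative assumptions, not any standard scoring system:

```python
def risk(threat: float, vulnerability: float) -> float:
    """Toy multiplicative risk model on arbitrary relative scales."""
    return threat * vulnerability

# Before disclosure: the flaw exists, but few attackers know about it.
before = risk(threat=1.0, vulnerability=1.0)

# After disclosure with exploit details: threat jumps sharply because
# attacks are now cheap, while slow SCADA patch cycles trim the
# vulnerability level only slightly.
after = risk(threat=3.0, vulnerability=0.9)

print(before, after)  # 1.0 2.7 -- net risk goes up, not down
```

A modest vulnerability reduction multiplied against a large threat increase yields higher risk, which is the article's core claim about detailed SCADA disclosure.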
Researchers commonly refer to their successes by highlighting software applications that they believe have gotten more secure – usually, it's a Microsoft love-fest. This is a great example of their misaligned focus on vulnerabilities and the importance of understanding threat. While these applications may actually be more secure, it really doesn't matter unless incidents are actually being reduced. This situation highlights two things: first, something that works in contained environments doesn't necessarily work in the aggregate (that is why QA departments still make sense, for example); and second, the whole exercise is futile.
The second point is simply one of futility. In the face of these "more secure" programs, how has the risk changed? Is there anyone out there suggesting that risk is actually going down? The problem is that there is too much software in the world's codebase (or really in any of today's large data centers) to find every bug using the techniques employed today. Not only that, but the gap is widening. It's like a bad math word problem: software development is heading east at 50 mph and vulnerability research is heading in the same direction at 5 mph; when will it catch up?
Since it is impossible to find all vulnerabilities and unlikely that we can somehow find the “right” ones, we need to come up with better, more effective ways to protect ourselves. Though the “frontal assault” on vulnerabilities is futile, there are other ways to attack the problem. First, we can design better controls into the architecture. Initiatives like Microsoft’s Blue Hat Prize are more likely to lead to security breakthroughs. Second, we can work harder on threat monitoring. We have another needle-haystack problem there, but have shown success along the way and are getting better. That’s just a start.
SCADA systems are serious business where human lives could be at stake. Stuxnet is an excellent example of both the futility of bug-finding (we missed those vulns) and the promise of alternative techniques (we still identified the breach and recovered). It makes no sense to exacerbate a problem in order to highlight its significance. Let's get security researchers – the best talent in our field – to start thinking about the problem in different ways.
* Pete Lindstrom, principal of Spire Security, has been assessing IT risk and managing information security for over 20 years. He blogs regularly at Spire Security Viewpoint. A different version of this op-ed first appeared on the Verizon Business security blog.