Security engineering: broken promises

Summary: For several decades, we have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software.

Guest editorial by Michal Zalewski

On the face of it, the field of information security appears to be a mature, well-defined, and accomplished branch of computer science. Resident experts eagerly assert the importance of their area of expertise by pointing to large sets of neatly cataloged security flaws, invariably attributed to security-illiterate developers, while their fellow theoreticians note how all these problems would have been prevented by adhering to this year's hottest security methodology. A commercial industry thrives in the vicinity, offering various non-binding security assurances to everyone, from casual computer users to giant international corporations.

Yet, for several decades, we have in essence completely failed to come up with even the most rudimentary, usable frameworks for understanding and assessing the security of modern software; and save for several brilliant treatises and limited-scale experiments, we do not even have any real-world success stories to share. The focus is almost exclusively on reactive, secondary security measures: vulnerability management, malware and attack detection, sandboxing, and so forth; and perhaps on selectively pointing out flaws in somebody else's code. The frustrating, jealously guarded secret is that when it comes to actually enabling others to develop secure systems, we deliver far less value than could be expected.

So, let's have a look at some of the most alluring approaches to assuring information security - and try to figure out why they fail to make a difference to regular users and businesses alike.

Flirting with formal solutions

Perhaps the most obvious and clever tool for building secure programs would be simply to algorithmically prove they behave just the right way. This is a simple premise that, intuitively, should be within the realm of possibility - so why hasn't this approach netted us much? Well, let's start with the adjective “secure” itself: what is it supposed to convey, precisely? Security seems like a simple and intuitive concept, but in the world of computing, it escapes all attempts to usefully specify it. Sure, we can restate the problem in catchy yet largely unhelpful ways - but you know we have a problem when one of the definitions most frequently cited by practitioners is:

“A system is secure if it behaves precisely in the manner intended – and does nothing more.”

This definition (originally attributed to Ivan Arce) is neat and vaguely outlines an abstract goal - but it says very little about how to achieve it. It could be computer science - but in terms of specificity, it could just as easily be a passage in Victor Hugo's poem:

“Love is a portion of the soul itself, and it is of the same nature as the celestial breathing of the atmosphere of paradise.”

Now, one could argue that practitioners are not the ones to be asked for nuanced definitions - but ask the same question of a group of academics, and they will deliver roughly the same answer. The following common academic definition traces back to the Bell-La Padula security model, published back in the early seventies (one of about a dozen attempts to formalize the requirements for secure systems, in this particular case in terms of a finite state machine - and one of the most notable ones):

“A system is secure if and only if it starts in a secure state and cannot enter an insecure state.”
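
To give a flavor of what such a finite-state treatment looks like, here is a minimal sketch in Python of the two access rules at the heart of Bell-La Padula - “no read up” and “no write down” - with toy level names and accesses invented purely for illustration:

    # Toy Bell-La Padula check: security levels form a total order.
    LEVELS = {"public": 0, "secret": 1, "top-secret": 2}

    def may_read(subject, obj):
        # Simple security property: a subject may not read above
        # its clearance ("no read up").
        return LEVELS[subject] >= LEVELS[obj]

    def may_write(subject, obj):
        # *-property: a subject may not write below its clearance
        # ("no write down"), so data cannot leak to lower levels.
        return LEVELS[subject] <= LEVELS[obj]

    # A state is "secure" if every granted access obeys both rules;
    # the model then demands that no transition leaves this set.
    for subj, obj, mode in [("secret", "public", "read"),
                            ("secret", "top-secret", "write"),
                            ("public", "secret", "read")]:
        ok = may_read(subj, obj) if mode == "read" else may_write(subj, obj)
        print(f"{subj} {mode}s {obj}: {'allowed' if ok else 'denied'}")

Note how much even this tidy model leaves unsaid: nothing here explains where the levels come from, or whether they capture what any stakeholder actually wants.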

Definitions along these lines are fundamentally true, of course, and may serve as a basis for dissertations, perhaps a couple of government grants; but in practice, any models built on top of them are bound to be nearly useless for generalized, real-world software engineering. There are at least three reasons for this:

  • There is no way to define the desirable behavior of a sufficiently complex computer system: no single authority can spell out what the “intended manner” or “secure states” are supposed to be for an operating system or a web browser. The interests of users, system owners, data providers, business process owners, and software and hardware vendors tend to differ quite significantly and shift rapidly - if all the stakeholders are capable of and willing to clearly and honestly disclose them to begin with. To add insult to injury, sociology and game theory suggest that computing a simple sum of these particular interests may not actually result in a satisfactory outcome; this dilemma, known as “the tragedy of the commons”, is central to many disputes over the future of the Internet.
  • Wishful thinking does not automatically map to formal constraints: even if a perfect high-level agreement about how the system should behave can be reached in a subset of cases, it is nearly impossible to formalize many expectations as a set of permissible inputs, program states, and state transitions - a prerequisite for almost every type of formal analysis. Quite simply, intuitive concepts such as “I do not want my mail to be read by others” do not translate to mathematical models particularly well - and vice versa. Several exotic approaches exist that allow such vague requirements to be at least partly formalized, but they put heavy constraints on software engineering processes and often result in rulesets and models far more complicated than the validated algorithms themselves - in turn, likely needing their own correctness to be proven... yup, recursively.
  • Software behavior is very hard to conclusively analyze: static analysis of computer programs to prove that they will always behave in accordance with a detailed specification is a task that nobody has managed to convincingly demonstrate in complex, real-world scenarios (although, as usual, limited success in highly constrained settings or with very narrow goals is possible). Many cases are likely to be impossible to solve in practice (due to computational complexity) - and some may turn out to be completely undecidable due to the halting problem (a sketch of the self-referential obstacle follows this list).
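
To make that last point concrete, here is the classic diagonal argument phrased as Python. behaves_securely is a hypothetical oracle - the name and the stub are invented for illustration, and the argument shows why no correct general implementation of it can exist:

    # Hypothetical oracle: True iff program(data) never violates its
    # security specification. The stub below just guesses; the point
    # is that no guess can be right for the program that follows.
    def behaves_securely(program, data):
        return True  # any fixed answer works for the argument

    def violate_the_specification():
        print("spec violated!")  # stands in for any forbidden behavior

    def contrarian(data):
        # Ask the oracle about ourselves, then do exactly the opposite.
        if behaves_securely(contrarian, data):
            violate_the_specification()
        # ...else behave perfectly.

    contrarian("any input")  # whatever the oracle answers, it is wrong

This is the same self-reference that underlies the halting problem; it rules out a perfectly general analyzer, though not the narrowly scoped tools alluded to above.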

Perhaps more frustrating than the vagueness and uselessness of these early definitions is that, as decades fly by, little or no progress has been made toward coming up with something better; in fact, a fairly recent academic paper, released in 2001 by the Naval Research Laboratory, backtracks on some of the earlier work and arrives at a much more casual, enumerative definition of software security - one that explicitly admits to being imperfect and incomplete:

“A system is secure if it adequately protects information that it processes against unauthorized disclosure, unauthorized modification, and unauthorized withholding (also called denial of service). We say 'adequately' because no practical system can achieve these goals without qualification; security is inherently relative.”

The paper also provides a retrospective assessment of earlier efforts, and the unacceptable sacrifices made to preserve the theoretical purity of said models:

“Experience has shown that, on one hand, the axioms of the Bell-La Padula model are overly restrictive: they disallow operations that users require in practical applications. On the other hand, trusted subjects, which are the mechanism provided to overcome some of these restrictions, are not restricted enough. [...] Consequently, developers have had to develop ad hoc specifications for the desired behavior of trusted processes in each individual system.”

In the end, regardless of the number of elegant, competing models introduced, all attempts to understand and evaluate the security of real-world software using algorithmic foundations seem bound to fail. This leaves developers and security experts with no method to make authoritative statements about the quality of produced code. So, what are we left with?

Risk management

In the absence of formal assurances and provable metrics, and given the frightening prevalence of security flaws in key software relied upon by modern societies, businesses flock to another catchy concept: risk management. The idea, applied successfully in the insurance business (as of this writing, with perhaps a bit less to show for it in the financial world), simply states that system owners should learn to live with vulnerabilities that would not be cost-effective to address, and divert resources to cases where the odds are less acceptable, as indicated by the following formula:

risk = probability of an event * maximum loss

The doctrine says that if having some unimportant workstation compromised every year is not going to cost the company more than $1,000 in lost productivity, maybe they should just budget this much and move on - rather than spending $10,000 on additional security measures or contingency and monitoring plans. The money would be better allocated to isolating, securing, and monitoring the mission-critical mainframe that churns out billing records for all customers instead.
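
A minimal worked version of that arithmetic, as a sketch - the compromise probabilities are invented for illustration:

    # risk = probability of an event * maximum loss, per asset, per year
    assets = {
        # name: (assumed annual compromise probability, loss if hit)
        "unimportant workstation": (1.00, 1_000),
        "billing mainframe":       (0.05, 2_000_000),
    }

    for name, (probability, loss) in assets.items():
        print(f"{name}: expected annual loss = ${probability * loss:,.0f}")

    # -> workstation: $1,000; mainframe: $100,000. The doctrine says
    # to budget for the former and spend real money on the latter.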

Prioritization of security efforts is a prudent step, naturally. The problem is that when risk management is done strictly by the numbers, it does deceptively little to actually understand, contain, and manage real-world problems. Instead, it introduces a dangerous fallacy: that structured inadequacy is almost as good as adequacy, and that underfunded security efforts plus risk management are about as good as properly funded security work.

Guess what? No dice:

  • In interconnected systems, losses are not capped and are not tied to a single asset: strict risk management depends on the ability to estimate the typical and maximum cost associated with the compromise of a resource. Unfortunately, the only way to do so is to overlook the fact that many of the most spectacular security breaches in history started at relatively unimportant and neglected entry points, followed by complex access escalation paths, eventually resulting in the near-complete compromise of critical infrastructure (regardless of any superficial compartmentalization in place). In by-the-numbers risk management, the initial entry point would realistically be assigned a lower weight, as having little value compared to other nodes; and the internal escalation path to more sensitive resources would likewise be downplayed as having a low probability of ever being abused (a toy illustration follows this list).
  • Statistical forecasting does not tell you much about your individual risks: just because, on average, people in the city are more likely to be hit by lightning than mauled by a bear does not mean you should bolt a lightning rod to your hat but then bathe in honey. The likelihood of a compromise associated with any particular component is, on an individual scale, largely irrelevant: security incidents are nearly certain, but out of thousands of exposed non-trivial resources, any one could be used as an attack vector - and none of them is likely to see a volume of events that would make statistical analysis meaningful within the scope of the enterprise.
  • Security is not a sound insurance scheme: an insurance company can use statistical data to offset capped claims it might need to pay across a large populace with the premiums collected from every participant, and to estimate the reserves needed to deal with random events, such as sudden, localized surges in the number of claims, up to a chosen level of event probability. In such a setting, formal risk management works pretty well. In contrast, in information security, there is nothing contributed by healthy assets to directly offset the impact of a compromise; there is an insufficient number of events to model their distribution with any degree of certainty; and there is no way to reliably limit the maximum per-incident loss incurred.
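
Returning to the first bullet above, a toy calculation shows how scoring each node in isolation hides the escalation path; every number here is invented for illustration:

    # By-the-numbers view: each asset scored in isolation.
    entry_point = {"p_compromise": 0.30, "value": 1_000}       # neglected box
    mainframe   = {"p_compromise": 0.01, "value": 5_000_000}   # crown jewels

    isolated = (entry_point["p_compromise"] * entry_point["value"]
                + mainframe["p_compromise"] * mainframe["value"])
    print(f"isolated model: ${isolated:,.0f}")   # -> $50,300

    # Chained view: once an attacker owns the entry point, escalation
    # to the mainframe is assumed to succeed half of the time.
    p_escalate = 0.50
    chained = entry_point["p_compromise"] * p_escalate * mainframe["value"]
    print(f"chained model:  ${chained:,.0f}")    # -> $750,000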

Enlightenment through taxonomy

The two schools of thought discussed previously have something in common: both assume that it is possible to define security as a set of computable goals, and that the resulting unified theory of a secure system, or a model of acceptable risk, would then elegantly trickle down, resulting in an optimal set of low-level actions needed to achieve perfection in application design.

There is also the opposite approach, preached by some practitioners - owing less to philosophy and more to the natural sciences: that, much like Charles Darwin back in the day, by gathering sufficient amounts of low-level, experimental data, we would be able to observe, reconstruct, and document increasingly sophisticated laws, until some sort of a unified model of secure computing is organically arrived at.

This latter worldview brings us projects like the Department of Homeland Security-funded Common Weakness Enumeration (CWE). In the organization's own words, the goal of CWE is to develop a unified “Vulnerability Theory”; to “improve the research, modeling, and classification of software flaws”; and to “provide a common language of discourse for discussing, finding and dealing with the causes of software security vulnerabilities”. A typical, delightfully baroque example of the resulting taxonomy may be:

Improper Enforcement of Message or Data Structure → Failure to Sanitize Data into a Different Plane → Improper Control of Resource Identifiers → Insufficient Filtering of File and Other Resource Names for Executable Content.

Today, there are about 800 names in this dictionary, most of them as discourse-enabling as the one quoted here.

A slightly different school of naturalist thought is manifested in projects such as the Common Vulnerability Scoring System (CVSS), a business-backed collaboration aiming to strictly quantify known security problems in terms of a set of basic, machine-readable parameters. A real-world example of the resulting vulnerability descriptor may be:

AV:LN / AC:L / Au:M / C:C / I:N / A:P / E:F / RL:T / RC:UR / CDP:MH / TD:H / CR:M / IR:L / AR:M

Given this 14-dimensional vector, organizations and researchers are expected to transform it in a carefully chosen, use-specific manner and arrive at some sort of objective, verifiable conclusion about the significance of the underlying bug (say, “42”), precluding the need to judge the nature of security flaws more subjectively.
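
As an aside, here is a minimal sketch of one such transformation: the published CVSS v2 base-score equation applied to the base metrics of a vector along the lines of the one above (a local access vector is assumed, and the temporal and environmental terms are omitted for brevity):

    # CVSS v2 base score for the base metrics of a vector such as
    # AV:L / AC:L / Au:M / C:C / I:N / A:P (standard v2 weights).
    WEIGHTS = {
        "AV": {"L": 0.395, "A": 0.646, "N": 1.0},    # access vector
        "AC": {"H": 0.35, "M": 0.61, "L": 0.71},     # access complexity
        "Au": {"M": 0.45, "S": 0.56, "N": 0.704},    # authentication
        "C":  {"N": 0.0, "P": 0.275, "C": 0.66},     # confidentiality
        "I":  {"N": 0.0, "P": 0.275, "C": 0.66},     # integrity
        "A":  {"N": 0.0, "P": 0.275, "C": 0.66},     # availability
    }

    def base_score(vector):
        m = dict(p.split(":") for p in vector.replace(" ", "").split("/"))
        c, i, a = (WEIGHTS[k][m[k]] for k in ("C", "I", "A"))
        impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
        exploit = (20 * WEIGHTS["AV"][m["AV"]] * WEIGHTS["AC"][m["AC"]]
                   * WEIGHTS["Au"][m["Au"]])
        f = 0 if impact == 0 else 1.176
        return round((0.6 * impact + 0.4 * exploit - 1.5) * f, 1)

    print(base_score("AV:L/AC:L/Au:M/C:C/I:N/A:P"))  # -> 5.0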

I may be poking gentle fun at their expense - but rest assured, I do not mean to belittle CWE or CVSS: both projects serve worthwhile goals, most notably giving a more formal dimension to the risk management strategies implemented by large organizations (any general criticisms of certain approaches to risk management aside). Having said that, neither of them has yielded a grand theory of secure software yet - and I doubt such a framework is within sight.

* Michal Zalewski is a security researcher at Google. He has written and released many security tools, including ratproxy, skipfish, and the Browser Security Handbook. He can be found at lcamtuf’s blog and on Twitter.


Comments
  • RE: Security engineering: broken promises

    Mature? Getting a handle on software security engineering? Give me a break! The vast majority of ITSec issues are related to basic access control, policy, and poor management process. Sure, there are inherent flaws in some software. However, if configurations and wayward use were brought into line, the likelihood of exploitation would be far, far less. If practitioners would put aside all their fancy-schmancy and expensive ITSec control appliances and layers upon layers of complexity, we could deal with both broken promises and the flood of excuses and placing of blame elsewhere.
  • Defining security

    Your take is right on: security as a field has a long, long way to go.

    Regarding the issues you raise about the definition of security and the use of formal methods: in my opinion, the practical thing to do is to define "secure enough" for each system, rather than trying to define "security" or "secure" for all systems in general. As far as I have been able to determine - via experimentation with systems being built by industry, tool development, and consultation with requirements folks - this means formally stating:
    - The relevant parts of the starting state of the system when protection is active
    - The starting privileges of the attackers the system should protect against
    - The specific actions the attackers should not be able to take, at the same level of abstraction as the functional requirements
    - What the system ought to do when these attackers try to take these actions

    This does not solve the problem of getting the correct values in each of these slots (some of which is a generic requirements problem and some of which is an inherent part of formal methods), but it does begin to define a less-fuzzy interface between the humans and analysis tools, and it does provide clearer criteria for noticing when apparently low-value paths actually merit high-value protection.
  • RE: Security engineering: broken promises

    GREAT article. With the use of social media apps for business productivity, the role of IT is to safely enable Enterprise 2.0 applications through the use of smart policies. So, to block or not? I am a consultant for Palo Alto Networks. Check this whitepaper out: Let me know your thoughts. Pass it around to fellow IT folks. Would love your take on it.
    • It's not a matter of blocking or not blocking

      @kellybriefworld: There is a bigger problem when you don't have the tools to prevent certain applications from being installed inside those 2.0 applications. There is a matter of rational use; there is a matter of "insecure people" as such, who don't understand the implications of their own actions - in management you'll even find people who don't WANT to understand - and as this is a new world, you will fail miserably. It's a matter of disaster containment, which will save our asses at the end of the day.
  • RE: Security engineering: broken promises

    All you need to do is put the length of the executable into the header of Windows executable files. Ta-da! Viruses knocked out, just like on UNIX (and Mac OS X, which is UNIX).

    Everything else is chicken feathers and incense handling of security.
    • RE: Security engineering: broken promises

      @tburzio There is a place in the PE header to put the length of the executable in Windows...
  • What about the cloud?

    You completely omitted mention of the obvious cloud services that companies like Google are trying to seduce users with, which are lacking in basic privacy tools. Is this because Google et al. are completely evil, or is it because the security/privacy tools are immature? I think it's both, and that it will take a company with Google-sized resources to make progress on privacy. Fat chance that will happen.
  • Here's the beef!

    ZDNet should have more articles like this instead of the namby-pamby stuff.
    • RE: Security engineering: broken promises

      @dogbreath1 Yep! I LIKE IT A LOT!!! But I think I'm too geeky, and ZDNet aims at a wider audience... or not? I'm lost!
  • RE: Security engineering: broken promises

    "On the face of it, the field of information security appears to be a mature, well-defined, and an accomplished branch of computer science."

    As a computer science major - yeah, right. We still don't have a clue. All we really know is that if you really want security with today's threat model, you're probably best off throwing away everything - including the hardware - and starting from scratch with the lessons we've learned in mind.

    "Perhaps the most obvious and clever tool for building secure programs would be simply to algorithmically prove they behave just the right way. This is a simple premise that intuitively, should be within the realm of possibility - so why haven?t this approach netted us much?"

    Between the halting problem and the P=NP problem - we've basically proven to ourselves that it's simply impossible to trace every algorithm to its ultimate conclusion.

    In addition, what we CAN prove is only provable with a lot of work - it's not something we could easily scale to any decent size of software.

    "Wishful thinking does not automatically map to formal constraints"

    So, so true - not everything we want from software is easily formalized into a proof.

    My thought is that we're taking the wrong approach with mathematical proofs - frankly, they come from some strange obsession CS professors have with mathematics. The halting problem pretty much proved that turning computer science into a branch of mathematics was a mistake. We need to look beyond the math and look elsewhere for answers to the questions we still have concerning security and other aspects of computing.
    • RE: Security engineering: broken promises

      @CobraA1 I am a programmer and an expert in the field of security programming, and I think the mathematical approach will work when computers write that software. At that moment, the "programmers" will really be the designers of that software, with the aid of computers, and the people we call "designers" today will all be in confinement ;-).
  • It's Time to Abandon the Turing Computing Model

    Almost every major problem in computer science is the result of our infatuation with the Turing machine. The problem with the Turing computing model is that time is not an inherent part of the model. Timing is the key to solving the cyber security and reliability crises. Turing is the problem, not the solution.

    Check out this short discussion at the brand new Federal Cybersecurity R&D Forum:
  • Apple's iDevices are almost....

    bulletproof as far as viruses, worms, and other nasty software are concerned. Only those foolish users who have circumvented Apple's controls by jailbreaking have suffered any kind of malware. There are millions of iDevices, all of them connected to the Internet - an extremely tempting target for criminals. Yet none of them, not even one as far as I know, has been affected by any kind of security problem. Many users, especially techies, may not like Apple's draconian, iron-fisted control over their platform, but it has certainly resulted in a rather secure system.

    There is, and will always be, a trade-off between security on one side and liberty and ease of use on the other. A house with strong doors, three locks on each door, and iron bars in front of all windows may be more secure, but most people would not want to live in such a house. Having to carry multiple keys just to get in the front door is not something most people will put up with.

    Maybe there are other ways to implement good security, but Apple has certainly shown the world one way that is working pretty well so far.
    • There's a third option: a nice house that keeps itself perfectly secure without hindering you.

      Windows and OSX aren't the only games in town.
    • RE: Security engineering: broken promises

      @arminw These kinds of devices are FAR from being secure. It's an illusion. The main thing is this: you have to hand your security policies over to someone else - a company whose main interest and knowledge aim toward a different goal: selling more devices.