The IT dirty bomb

We pretend, as a figure of speech, that computers can catch viruses. They can't; but they can harbor parasites, and there's more than one way those can destroy your business.

In the real world the point of a dirty bomb is the resource waste created by fear of repetition and exposure, not the bomb itself - much as fear of air terrorism now cripples airline, airport, and air travel efficiency around the world, while fear of spam and network attack clogs both our networks and our computers with useless traffic and unproductive software.

In the IT context the analogies we all repeat with respect to things like computer viruses are quite wrong: the Lamarckian nature of computer viruses means they're not virus analogs at all - but I've been wondering whether dirty bomb analogs might not be possible.

Bear in mind, please, as you think about this, that you can't fight a threat you don't recognize - and so thinking about how the bad guys are likely to attack you is the first step to figuring out how to protect your organization.

In terrorism the dirty bomb, to be effective, has to be many orders of magnitude cheaper to deliver than to prevent or remediate - so in the IT context you think naturally of adapting today's virus ideas to render whole networks unusable for extended periods of time. All jokes about Vista aside, that's actually harder to do than you might think: you can, for example, use the email blacklisters to degrade email service for whole IP blocks - but the effect wears off in a month or two, and the hidden truth is that whitelisting combines with bureaucracy to guarantee that many target organizations wouldn't even notice the problem.
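The blacklisting mechanism referred to above is simple enough to sketch: DNS-based blacklists (DNSBLs) are queried over ordinary DNS, which is part of why listing - or abusing the listing of - an entire IP block is so cheap. A minimal illustration in Python, with the zone name and sample address chosen for illustration (the address is from the documentation range, and zen.spamhaus.org is one widely used list):

```python
def dnsbl_query_name(ipv4: str, zone: str = "zen.spamhaus.org") -> str:
    """Build the DNS name a mail server would query to check a DNSBL:
    reverse the IPv4 octets and append the blacklist zone."""
    octets = ipv4.split(".")
    if len(octets) != 4 or not all(o.isdigit() and 0 <= int(o) <= 255 for o in octets):
        raise ValueError(f"not an IPv4 address: {ipv4!r}")
    return ".".join(reversed(octets)) + "." + zone

# The actual listing check is then a plain A-record lookup; shown in a
# comment here because it needs network access:
#
#   import socket
#   try:
#       socket.gethostbyname(dnsbl_query_name("203.0.113.7"))
#       listed = True          # any answer at all means "listed"
#   except socket.gaierror:
#       listed = False

print(dnsbl_query_name("203.0.113.7"))
# -> 7.113.0.203.zen.spamhaus.org
```

Because the granularity of many listings is the network block rather than the single host, getting one address within a block listed can degrade mail delivery for every neighbor sharing that block - which is the attack surface the paragraph above alludes to.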

Direct intervention techniques are probably more effective: today's reliance on wireless communication suggests, for example, that a spray containing micrometer-scale tuned antennae that are triggered, and powered, by harmonic EM fields in the right frequency ranges could be cheaply made, easily delivered, and prove devastatingly hard to find and remove while blanketing every signal around them in static.

As you go down the list of candidate methods and technologies looking for opportunities for the bad guys, one thing becomes clear very quickly: there are lots of glitzy tech opportunities, but their direct effects are generally transient, and their usage predictably leads to new equilibria in which attrition works against the attackers while continuing defense costs rise, but not cripplingly so.

Have recourse to traditional social engineering, however, and opportunities seem to multiply while costs and risks arguably go down. Getting a few hundred of your bright young engineering students in as long-term residents working in major corporations and agencies might seem to mean, for example, that you could temporarily shut down the target economy at any time.

Fortunately for the defense, however, this only works up to a point, and generally only in the short term, because in the longer term mutual dependence sets in and the relationship becomes parasitic. In the legitimate commercial variant, for example, it's possible to make an entire group of people so dependent on the job value of their knowledge of your technology (and only your technology, of course) that getting one of them in place as a customer decision maker gives you control of that customer's technology spending and infrastructure. But doing this creates a parasitic relationship characterized by mutual dependence: imagine, for example, how long Microsoft or IBM would last if their stuff actually killed the customer quickly instead of just siphoning off money and competitive opportunity.

Recognize that infiltration tends toward stability and the conclusion is obvious: the Manchurian candidate approach is cheap and low risk - but it can be effective in the longer term only if the period during which the damage is done is too short, and the damage too obvious, for mutual dependence to become an issue.

Notice that this applies to deployment, not set-up. The feasibility of a plan to plant hidden functionality in PC devices by taking manufacturing control over twenty or even thirty years is not affected by this - but the ability to act broadly on the existence of that functionality is largely a one-time thing, and even very limited use of the technology to achieve finely targeted goals carries some risk of exposure despite the natural camouflage provided by the normal Wintel experience.

At least one usage pattern possible with this, however, perfectly illustrates the notion of an IT dirty bomb: imagine that a third party which has pretty completely infiltrated the attack planner decides that exposure is in its best interest, and so gives information on how to deploy the technology to a large, but unrelated, group of political terrorists.

The latter then deploy it as a true dirty bomb: bringing down IT operations at more or less randomly chosen agencies and businesses in their target countries to create world-wide panic amid the usual press gibbering, official impotence, and executive over-reaction.

That's an imaginary scenario, of course; but not, I think, an impossible one - particularly not in today's IT monoculture, where most of the technology is made by the same people and hardly any businesses or agencies have truly diversified IT hardware, software, methods, and staffing in place to protect themselves.