In zombies we trust

Written by Ryan Naraine, Contributor

* Ryan Naraine is on vacation.

Guest Editorial by Dan Geer

Dan Geer -- managing risk
When the Internet was young, the design assumption for electronic commerce was clear: The client initiated the connection from a trusted machine and needed to be assured that the server side was not an impostor. This is how we got the general design of SSL -- the browser would check that the credential of the server was valid and the transaction itself would be an encrypted conversation between a client who trusts himself and a server newly shown to be trustworthy.
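
For the curious, here is a minimal Python sketch of that one-sided handshake as a present-day client would perform it: the client verifies the server's credential and encrypts the conversation, while the server learns nothing about the health of the client's machine. The merchant hostname below is illustrative, not a real endpoint.

```python
# Minimal sketch of the classic SSL/TLS trust model: the client checks the
# server's certificate against trusted roots; the server simply assumes the
# client is a trustworthy machine. The hostname is illustrative only.
import socket
import ssl

HOST = "merchant.example.com"   # hypothetical merchant

context = ssl.create_default_context()   # system CA roots; cert and hostname checks on

with socket.create_connection((HOST, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        # At this point the server's credential has been validated and the
        # channel is encrypted -- but nothing has vouched for the client.
        print("negotiated:", tls.version())
        print("server subject:", tls.getpeercert()["subject"])
```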

Would that it were still so simple.

A little over a year ago, I wrote an editorial (.pdf) in which, in back-of-the-envelope style, I estimated that perhaps 15-30% of all privately owned computers were no longer under the sole control of their owner. In the intervening months I have received a certain amount of hate mail, but in those same months Vint Cerf guessed 20-40%, Microsoft said 2/3rds, and IDC suggested 3/4ths. It is thus a conservative risk position to assume that any random counterparty stands a fair chance of being already compromised.

We already know, through various published numbers, that amongst people whose computers have at least one infection, the average number of infections is four. We already know that amongst people who are being watched by a keylogger, at least 10% have a second keylogger. We already know a lot of things of this sort. This parallels the real world, where people who get venereal diseases tend to get more than one. The reason is simple: the infections -- computer or cellular -- are side effects of behavior, and consistent behavior tends toward consistent results.

So, going back to the design framework of early electronic commerce, one assumed that the initiating client and the responding merchant server were safe; only the Big Bad Internet was not. However, if you are a merchant and I am right, a large portion of the clients calling you up today are infected. If you are the sort of merchant who really, really likes your clients to establish login names, passwords, and all the accoutrements of a formal relationship, your merchant software will be quite regularly kissing some infected machines on the lips.

So here is a theory: there are two kinds of people out there, those who always say "Yes" and those who always say "No." Those who always say "Yes" are eventually infected. Those who always say "No" may escape infection. As a merchant, what are you to do, especially as the point of most client infections is to use the infected clients to get to you? If you do nothing, then you end up choosing which of these two phone conversations you have:

Option A -- Customer: I did not buy that stock. Merchant: You are an idiot.

-or-

Option B -- Customer: I did not buy that stock. Merchant: We'll make it up to you.

Now the merchants, being big boys and girls, can do whatever they like, though they generally choose Option B. However, as the cost of "We'll make it up to you" mounts, we security people inherit a design choice that is the point of this essay.

So, without further buildup, here's the punchline: Contingent on the premises that (1) there are the always-Yes sorts and the always-No sorts, and (2) that as a merchant your opponent is after you, then purely as a matter of rationality we should design merchant systems to do this:

When the user connects, ask whether they would like to use your extra special secure connection. If they say "Yes," then you presume that they always say Yes and thus they are so likely to be infected that you must not shake hands with them without some latex between you and them. In other words, you should immediately 0wn their machine for the duration of the transaction -- by, say, stealing their keyboard away from their OS and attaching it to a special encrypting network stack all of which you make possible by sending a small, use-once rootkit down the wire at login time, just after they say "Yes."

If they say "No," then you presume that they always say No and thus they are likely to be sharp enough that they are not infected and you can proceed with a transaction in the normal way -- a secure way, of course, but one that does not  involve 0wning them up.

So, Dear Reader, if you want to comment on all this, comment on whether the two kinds of people are what you actually see, and on whether, when doing business with an already compromised host, it is or is not wise to take further advantage of it -- asking permission, of course. Done right, there is little doubt that this is net risk reducing...

* Dan Geer is VP and chief scientist at Verdasys and principal of Geer Risk Services. He published a 2003 paper arguing that Microsoft was a monoculture; he was fired the day the report was made public.
