He has held senior security roles at a variety of high-profile companies, and once administered the White House email system. He has consulted for many Fortune 500 organisations, and has been a key presenter at countless security events around the world. Ranum resides on a remote farm in Pennsylvania, far from the cities and fast internet. He'd welcome an end to the battle for IT security, even if it meant the end of the industry.
Marcus J. Ranum (Credit: Munir Kotadia/ZDNet Australia)
Name: Marcus J. Ranum
Position: chief security officer at Tenable Network Security
Born: Guy Fawkes' Day, 1962, New York City
Education: BA Psychology, Johns Hopkins University, 1985
Career: Ranum left DEC to work for Trusted Information Systems as chief scientist, where he developed the TIS Internet Firewall Toolkit, funded by the US Defense Advanced Research Projects Agency. He was also chief scientist at V-One, later forming security company Network Flight Recorder and taking on the role of senior scientist at TruSecure Corp. Ranum now serves as chief security officer at Tenable Network Security, where he advises staff and clients.
ZDNet Australia: Why did you enter the information security industry? What do you find most interesting about it?
Marcus J. Ranum: I got dragged in quite by accident when my boss at DEC, Fred Avolio, put me in charge of one of the company's internet gateways and told me to "build a firewall like Brian Reid and Bill Cheswick's" — 20 years later I suppose you could say I'm still working on that assignment. And, to be honest, I didn't find anything particularly interesting about computer security; once you understand the strategic problem then it's all just a lot of attention to detail.
What I do find most interesting about security is how people react to it: they want to do something dangerous safely and are generally resentful when you tell them that's not going to work. So I see the whole industry as a vast dialectic between hope and concrete effort on one side, and cynical marketing and wilful ignorance on the other.
What do you find is the most pressing issue in the information security industry and what can be done to fix it?
The most pressing issue in information security is one we're never likely to do anything about, and that's achieving reliable software (security is a subset of reliability) on end-point systems. That means operating system design and reliable coding, two areas where the trend lines are currently moving in the wrong direction. Consequently, the current trend is "cloud computing", which, in effect, is virtualising the mainframe: acknowledging that end-points are badly managed and unreliable, and putting data and processes in the hands of professionals who are expected to do a better job maintaining them and making them reliable — and cheap — than departmental IT.
Of course, that's a pipe dream, because the same practices that brought us unreliable code-mass on the end points are being used to build the aggregated services. The backlash when it's all revealed to be a pipe dream is going to be expensive and interesting, in that order.
What can be done to fix it? Again, the trend lines are all going the wrong direction — the fix requires technically sophisticated management with healthy scepticism toward marketing claims, good software engineering and a focus on getting the job done right, not getting something that you can't understand from the lowest bidder. It will correct itself. The industry will re-aggregate into competence centres, which will become more expensive when they realise they have the upper hand, and that will re-trigger the fragmentation to the desktop and department cycle.
To fix things, we'd need to all focus ruthlessly on reliability, which means also quality, and not … "ooo! Shiny thing!"
You're no fan of blacklisting, yet much of the industry is built on it and it's the source of a lot of cash. Can you explain your opposition to blacklisting and whether you think change to a dominant whitelisting model is inevitable? What would happen to revenues in the security industry if such a shift happened?
I'm a huge fan of blacklisting! It's a crucial technology! It just doesn't answer the question that many people are expecting it to, which is "is this software good?" Blacklisting is the best technique for identifying something, because it can answer not only the question "is this thing bad?" but "what is it?" It seems to be human nature to want to know what was thrown at us, and that's why people are so intellectually comfortable with signature-based intrusion detection/prevention and signature-based antivirus. It's easy to implement and it's easy to understand — and it's easy to keep selling signature update subscriptions.
When you've got companies like Symantec saying that blacklists don't work, I think it's an important acknowledgement that a lot of the security industry is just happy to keep churning the money-pump as long as it's not sucking air. The trend there seems to be reputation — [meaning] "continue to trust someone else's opinion" — it's a more flexible approach to building a cloudy and hype-ful dynamic blacklist, but in the long run it's not going to work any better than static blacklists. By work I mean "solve the malware problem for customers". If by work you mean "solve the relevance and financial problems for antivirus vendors", I think it will "work" just fine for a long enough [time] to keep them happy.
Meanwhile, I keep asking IT managers "do you have any idea why you gave a user a computer?" and "if you know why they have a computer, why not configure that computer so that what it can do is what it's supposed to do and not much else" — where much else means things like "participate in botnets". I'm constantly baffled by how many IT managers say it'd be hard to enumerate all the software they run. It's bizarre because knowing the answer to that question is what IT's job is. If my company gave me a computer so I can do email and edit company documents, it seems pretty simple to imagine that it ought to run some office apps and an email client configured to talk to our IMAP server and maybe nothing else. For a while I was hopeful that the app-store model on increasingly powerful handheld devices would let us do away with the current "bucket of fish guts" approach to desktop security, but it looks like the app stores are going to be a big target and eventually a distribution vehicle for badware.
So, you need blacklists so that you can tell someone "that piece of weird stuff you just tried to run is called Stuxnet" and that's interesting and useful, but you need the whitelists more, because that's how you define your notion of what you think your computer should be doing. If you cast the problem in terms of a firewall policy it's the old default-permit versus default-deny all over again. Default-deny is what the survivors do, and default-permit is for the guys who want to spend all their time doing incident response and forensics. None of this is anything less than completely obvious.
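To make the blacklist-versus-whitelist distinction concrete, here's a toy Python sketch of the default-deny policy Ranum describes. The file names and the "Stuxnet" label are purely illustrative, not real detection logic: the point is that the allow/deny decision comes only from the whitelist, while the blacklist is used solely to name what was blocked.

```python
# Default-deny in miniature: the machine runs only what IT approved.
# All names below are hypothetical examples for illustration.

# The whitelist: the only executables this (imaginary) office desktop needs.
ALLOWED = {
    "outlook.exe",
    "winword.exe",
    "excel.exe",
}

def policy_decision(executable: str) -> str:
    """Default-deny: anything not explicitly allowed is blocked."""
    return "allow" if executable in ALLOWED else "deny"

# A blacklist can still identify what was blocked -- useful for reporting --
# but the *decision* above never depends on it.
KNOWN_BAD = {"stuxnet.exe": "Stuxnet"}

def describe(executable: str) -> str:
    verdict = policy_decision(executable)
    label = KNOWN_BAD.get(executable, "unknown")
    return f"{executable}: {verdict} (identified as: {label})"

print(describe("winword.exe"))    # allowed: it's on the whitelist
print(describe("stuxnet.exe"))    # denied by default *and* identified
print(describe("solitaire.exe"))  # denied simply because nobody approved it
```

Note that `solitaire.exe` is blocked even though no blacklist has ever heard of it — that is the survivor's posture Ranum contrasts with default-permit, where anything not yet on a blacklist runs freely.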
As far as security industry revenues — who cares? Nobody is worrying about the impact that the internal combustion industry has had on the steam-power boilermakers' industry, are they? In fact, I think it'd be awesome if we could someday dry our hands, put away our tools and say "There, fixed it, now let's write something fun!" Believe it or not there was a time early in the firewall industry when I thought we'd built all the tools that security would need; it was just a matter of fielding policy-based access control, offline authentication, point-to-point cryptography and then levelling up software quality. But in the late '90s the lunatics took over the asylum and — well, the results speak for themselves.
You said once that businesses lack the willpower to brand devices as corporate, rather than personal, assets. Must this happen? Are platforms to "secure" bring-your-own devices not enough?
Let me throw that back at you, OK? How would you feel if the US announced that we were putting our ballistic missile systems control into an iPad application and we were going to let the guys in the silos use their personal iPads so we could save a whole bunch of money?
It always depends: it depends what's at stake, how replaceable it is, how easy it is to clean up an "oopsie" and whether you are really willing to be part of that "oopsie". Every single journalist who has ever complained that some agency or company leaked a zillion credit cards or patient data or secrets should never ask the question you just asked me.
You should be asking why they tolerate systems and software that are so bad, so shoddy, so mismanaged that they've got no idea what they are doing, yet allow them to be used to access my bank account. Are you insane?! These problems are inevitable side-effects of poor configuration management, which is poor system management, which means "don't know how to do IT".
Yes, I do realise that I am arguing against today's prevailing trends in IT management.
Do you still equate "penetrate and patch" with turd polishing? How prevalent is this, and is it realistic to expect software vendors to change their attitude to security?
Yes, I do. It's one thing for a sculptor to say they start with a block of marble and then chip away everything that doesn't look like an angel, but that doesn't work for software. You can't start with the idea that a buggy mass of stuff [will] eventually turn into enterprise-class, failure-proof software by fixing bugs until there aren't any more. No matter how much polish you put on a turd, it's still a turd.
The software industry almost understands this — you'll occasionally see some piece of software get completely re-architected because its original framework became limiting. As pieces of software get more complex and powerful, developers usually resort to things like source-code revision control, unit testing, regression testing, et cetera. Why doesn't the idea that a security bug is just another bug sink in? If a manager can comprehend that there's a major cost to an out-of-cycle patch because of some reliability failure, they ought to be able to understand that a security flaw is just a particularly painful out-of-cycle patch with bad publicity attached to it.
The problem is that the software industry is target-locked on time-to-market because that is where the big rewards are — asking them to do anything that might affect time-to-market is asking them to risk being an also-ran. Some of that can be managed by adopting a model of "write a toy version, throw it over the fence, and if it succeeds take the lessons learned and write a real version shortly after", but I'm afraid that sometimes the toy version becomes the production codebase for a decade. We've seen the results of that and they're not very pretty.
We're about six years into the 10 by which you predicted hackers would no longer be portrayed as cool and educating neo-luddite users on security would become a moot point. What's your take on the current climate?
I think that, at least partly, thanks to the spread of malware and botnets, and the professionalisation of cybercrime, a lot more "normal people" are less impressed with hacker culture. The "grey hat" community's commercial interest is pretty clear to just about everyone now, so I think the hacking community has some reputation damage to deal with.
As far as educating neo-luddites, I think I was pretty much completely wrong there. Not wrong that education won't help — but wrong that the newer generation of executives will have a better grasp of security. From where I sit it looks like it's actually getting worse.
Which mobile platform will (or do you hope will) win out — the open Android, walled Apple or locked down Blackberry?
I wish they would all go away. Which they inevitably will. The song "Every OS Sucks" sums up my views very nicely. A disclosure: I bought an iPad because it plays movies nicely and doesn't pretend to be a telephone. I do like the delivery model of "app store" systems for fielding software — it's much better than letting users install things themselves or worse yet when the system comes bundled with 10,000 pieces of shovel-ware. I'm concerned about code quality, of course: it's not going to be possible for the app stores to vet code for malware, and I'm not convinced the "walls" in the "walled garden" aren't made of Swiss cheese.
You once told me privacy is a myth and something held by the privileged few. What is your take on privacy now, where do you think it is heading and what significance will this have?
I think that what I might have said is more that privacy has only ever been for the wealthy and powerful. What we've seen lately is the veneer coming off — the US Government is consistently and cheerfully trampling on privacy and has pardoned itself and its lackeys for all transgressions. Meanwhile, we see that if you read Sarah Palin's email you get in trouble, but if you read Joe Average's email you're the FBI. Privacy is a privilege of power — because the powerful need it so they can enjoy the fruits of their power without everyone realising how good they've got it.
Meanwhile, the entire population of the planet seems to want to join social-networking websites that exist to collect and re-sell marketing information and push ads in their users' faces — then they complain when they discover that the sites are doing exactly what they were created to do. What else did they expect? I never really cared about privacy, but a few years ago I adopted a strategy of leading a fairly open life. It's easy to get my phone number and address and email address and to find out where I've been and who I'm sleeping with and what and how much I drink or what music I listen to. There are only a few things about my lack of privacy that annoy me and it's mostly the stupidity of commercial marketing — I get a credit card offer from the same big bank every month. I've gotten one from them every month for 15 years. I periodically wonder why it hasn't sunk in to them that I'm not interested, but I have a big garbage can and it's their money they're wasting.
I'm a subscriber to your six dumbest ideas — are there some that you would update?
The piece was originally going to have a few more dumb ideas than it did, but the next one to write about was "ignoring transitive trust". I wrote that piece while I was stuck in Frankfurt Airport and I was pretty tired and trying to explain why transitive trust makes a mockery out of most of what we see as "internet security" was just too much for me to attempt. If I'd had more courage I'd have also tackled "cost savings achieved now will continue forever" for the outsourcing and cloud computing fans.
Could you briefly explain why you think cyberwar is BS?
There are several reasons cyberwar is BS: technological, strategic and logistical. The people who are promoting it are either running a snow-job (there's a lot of money at stake!) or simply don't understand that warfare is the domain of practicality and cyberwar is just a shiny, impractical toy. Unfortunately, there's so much money involved that the people who are pushing it simply dismiss rational objections and incite knee-jerk fear responses by painting pictures of burning buildings and national collapse and whatnot.
[See a longer explanation of the cyberwar phenomenon on Ranum's Rearguard podcast]
Probably the shortest rebuttal of cyberwar is to point out that it's only practical if you're the power that would already expect to win a conventional war — because a lesser power that uses cyberwar against a superpower is going to invite a real-world response, whereas it's attractive if you already have overwhelming real-world force — but then it's redundant. Cyberwar proponents often argue by conflating cyber crime, cyber espionage, cyber terror and cyberwar under the rubric of "cyberwar" but they ignore the obvious truth that those activities have different and sometimes competing agendas.
A short cyberwar: "be glad we jacked you up with Stuxnet because otherwise we'd have bombed you". A shorter cyberwar: "be afraid. give me money".