
Security is hard, accept it

Written by Ryan Naraine, Contributor

* Ryan Naraine is on vacation.

Guest editorial by Dr Jose Nazario

The past 10 or 15 years have been about the same things, largely, over and over again: input problems into single system applications or kernels.

Buffer overflows (splitvt! NCSA httpd!), heap overflows (with much respect to Shock), format string exploits, integer overflows, etc. Basic input validation failures, stack manipulation and arbitrary code execution -- or execution path alterations. For the most part, we're still dealing with such issues; just look at all of the ActiveX control bugs, which are repeats of the classics.
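As a purely illustrative sketch (my own example, not code from any of the programs named above), the classic pattern behind a stack buffer overflow is an unchecked copy of input into a fixed-size buffer:

    /* Hypothetical example of the classic input-validation failure:
     * attacker-controlled input copied into a fixed-size stack buffer
     * with no length check. */
    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char buf[32];
        strcpy(buf, name);          /* no bound: input longer than 32 bytes
                                       overwrites the stack, including the
                                       saved return address */
        printf("hello, %s\n", buf);
    }

    void greet_fixed(const char *name) {
        char buf[32];
        strncpy(buf, name, sizeof(buf) - 1);  /* bounded copy */
        buf[sizeof(buf) - 1] = '\0';
        printf("hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet_fixed(argv[1]);   /* calling greet(argv[1]) instead would
                                       be the exploitable variant */
        return 0;
    }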

The past few years of RFI, SQL injection, and XSS have been, I think, a transition period to where we're headed now. As things move to the Web, and as the Web browser becomes our number one interface for computing, our threat landscape changes. We'll always have the basics, but when we get into the sorts of interactions we see on the Web we have increased challenges to worry about.
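As a hedged sketch (hypothetical code, not drawn from any real application), the same unchecked-input pattern shows up at the Web tier when a SQL query is assembled directly from user input:

    /* Hypothetical CGI-style fragment: the query text is built straight
     * from untrusted input, so a value such as  alice' OR '1'='1  changes
     * the meaning of the statement (SQL injection). */
    #include <stdio.h>
    #include <string.h>

    /* Vulnerable: user input is spliced directly into the SQL text. */
    void build_query_unsafe(char *out, size_t outlen, const char *user) {
        snprintf(out, outlen,
                 "SELECT * FROM users WHERE name = '%s'", user);
    }

    /* Safer sketch: refuse quote characters before splicing.  Real code
     * would use parameterized queries in the database API instead. */
    int build_query_checked(char *out, size_t outlen, const char *user) {
        if (strchr(user, '\'') != NULL)
            return -1;              /* reject suspicious input */
        snprintf(out, outlen,
                 "SELECT * FROM users WHERE name = '%s'", user);
        return 0;
    }

    int main(void) {
        char q[256];
        build_query_unsafe(q, sizeof(q), "alice' OR '1'='1");
        printf("%s\n", q);          /* prints a query that matches every row */
        return 0;
    }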

The next big open area of research is right in front of us: Web 2.0. Mash-ups. Communities. AJAX. All of these technologies and paradigms bring with them new challenges which we haven't begun to really address.

I got to thinking about this a lot in the past year or so, and it came up last fall in Malaysia. The OpenBSD project is probably responsible for me thinking about this question for so long. Theo and team have focused on basic coding errors for over 10 years now, and they've demonstrated that simply applying good coding practices can be done on a large system and that it makes a difference. But they've also shied away from complex designs and implementations, saying that they're too complex to secure. They're simply not tackling new and interesting problems any longer.

A number of us were at a post-conference party and I got to talking about long-term research with a friend. He correctly pointed out that most of what we're seeing can be classified into two different areas: the same problems being researched in new technologies, usually the low-hanging fruit; or the same problems being found in new products, i.e., ActiveX input-validation bugs turned up in dozens of products by fuzz testing.

He doesn't consider this to be great long-term security research, and I think he's right. It's just running around, finding the low-hanging fruit. It's janitorial work, stamp collecting: important but insufficient when technologies keep changing.

The basic questions come down to these:

  • What kinds of security threats do we have when all the pieces work as designed and are coded securely, but interact insecurely? (A small sketch of this follows the list.)
  • What constitutes an insecure interaction between components?
  • How can one component of a "mash-up" trust another component?
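To make the first question concrete, here is a hypothetical illustration (my own, not from the editorial) of two components that each behave exactly as specified yet compose insecurely: a path filter that correctly rejects "..", and a URL decoder that correctly decodes "%2e". Applied in the wrong order, an encoded traversal slips through; the flaw lives in the interaction, not in either piece.

    /* Hypothetical illustration: each component is "correct" in isolation,
     * but validate-then-decode lets an encoded "../" through, while
     * decode-then-validate catches it. */
    #include <stdio.h>
    #include <string.h>
    #include <stdlib.h>
    #include <ctype.h>

    /* Component A: reject obvious directory traversal.  Correct as specified. */
    int path_is_safe(const char *path) {
        return strstr(path, "..") == NULL;
    }

    /* Component B: percent-decode a URL path.  Correct as specified. */
    void url_decode(const char *in, char *out, size_t outlen) {
        size_t j = 0;
        for (size_t i = 0; in[i] != '\0' && j + 1 < outlen; i++) {
            if (in[i] == '%' && isxdigit((unsigned char)in[i + 1])
                             && isxdigit((unsigned char)in[i + 2])) {
                char hex[3] = { in[i + 1], in[i + 2], '\0' };
                out[j++] = (char)strtol(hex, NULL, 16);
                i += 2;
            } else {
                out[j++] = in[i];
            }
        }
        out[j] = '\0';
    }

    int main(void) {
        const char *request = "%2e%2e/%2e%2e/etc/passwd";
        char decoded[256];

        /* Insecure composition: validate the raw input, then decode. */
        if (path_is_safe(request)) {
            url_decode(request, decoded, sizeof(decoded));
            printf("validate-then-decode serves: %s\n", decoded);
        }

        /* Secure composition: decode first, then validate the result. */
        url_decode(request, decoded, sizeof(decoded));
        if (!path_is_safe(decoded))
            printf("decode-then-validate rejects: %s\n", decoded);
        return 0;
    }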

To be fair, there are a number of basic security issues that have been examined: authorization, code safety, mobile code risks, etc. But we haven't begun, as a community, to look at the really complex problems. Groups like OWASP and a few others have looked at some of these factors but, as far as I know, no one in the open security research community has begun to look at these topics in any sustained effort.

This is hard stuff, and it will undoubtedly prove too challenging for many people. I expect it will take the better part of the next five years for some of these things to get explored. But these sorts of technologies aren't going away; they're the future of computing. They're also horribly insecure, and we don't really know how to address that.

Secure coding of a single component can only take you so far. It's time to brush up on formal methods and start applying them to this space.

* Dr Jose Nazario is a senior security researcher at Arbor Networks.
