We've been told that in order to innovate, we've got to move fast and break things, be more agile, experiment and fail, and generally just throw things out there and see what sticks. To a lesser extent, that is what we see companies do.
But how often do we actually see a final, static technology product? Most of the time, we don't. Everything is in a constant state of change, with patches, new features, new models. The vast majority of us don't even keep mobile phones for more than a few years before moving on.
But when it comes to security and, to a greater extent, privacy, none of that works. Whenever we hear about improved security, it's because someone forgot to implement best practice. Rather than introducing something new, everyone seems to be travelling towards some sort of static gold standard of security.
And the reason we rarely see emerging security features is that it's too dangerous. Facebook would face immense backlash if it took your personal information, experimented with a feature, and failed. Facebook might say it learned a rather large and important lesson, but after the fact, it can't undo the damage that has been done. As much as Zuckerberg says that it moves fast and breaks things, I'm sure that when it comes to security, he moves very slowly and surely, because failing, or throwing things out there to potentially fail, isn't an option.
Due to this fear of failure, we go back to what has worked, what has been tested, or, more specifically, what is safe. And safety is what security is built on: the idea that if X is done, risk will be limited to a certain degree. To put it another way, security is all about playing it safe and not taking risks.
Yet, the most dramatic innovations and innovators of our time were built on some element of risk, whether that was floating the outlandish proposition of indexing the entire web, recreating a social network that others had already built, or suddenly deciding to create a smartphone rather than a PC competitor.
Innovation is what makes things better, which challenges the competition to do things in a different way, which provides more options — but we rarely hear about it in security. Instead, we use old solutions to current problems, like using passwords for authentication, because most of the time, we're too scared to think of another way of doing it.
How old is old? Robert Morris and Ken Thompson, two researchers from Bell Laboratories, once wrote a paper on password security, noting many of the issues that we still hear about. The importance of salting passwords and enforcing password complexity are covered in the short paper, and they even touch on the idea of a second factor of authentication. This was in 1979.
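The salting idea that Morris and Thompson described is still the backbone of password storage today. As a minimal sketch (not their original scheme, which used a weaker DES-based hash; here I'm assuming a modern PBKDF2 construction from Python's standard library), the point is that a random per-user salt makes identical passwords hash to different values, defeating precomputed lookup tables:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Hash a password with a random per-user salt using PBKDF2-HMAC-SHA256.

    Returns (salt, digest). The salt is stored alongside the digest,
    not kept secret -- its job is uniqueness, not secrecy.
    """
    if salt is None:
        salt = os.urandom(16)  # 128-bit random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)
    return salt, digest


def verify_password(password, salt, expected_digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, expected_digest)
```

Because each user gets a fresh salt, two accounts with the same password still produce different digests, so an attacker who steals the database can't crack them all with one table.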
In fact, few modern security measures are anything but a rehash of old technology. Facial recognition? Built and tested in the 60s. Two-factor authentication? Morris and Thompson mentioned it, and RSA may have brought about greater awareness when it introduced cryptographic tokens in 1995, but we're still seeing Google struggle to convince people to use it. Contextual authentication? It's newer, but it was introduced around 10 to 15 years ago, and we still have little to show for it.
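The one-time codes behind those token-based second factors aren't exotic, either. A hedged sketch, assuming the open HOTP algorithm standardised in RFC 4226 (a counter-based scheme in the same family as hardware tokens and authenticator apps, not RSA's proprietary SecurID algorithm), fits in a dozen lines:

```python
import hashlib
import hmac
import struct


def hotp(key, counter, digits=6):
    """Generate an RFC 4226 HOTP code from a shared secret and a counter.

    The server and the token both keep the secret and the counter; a
    matching 6-digit code proves possession of the token.
    """
    msg = struct.pack(">Q", counter)                       # counter as 8-byte big-endian
    mac = hmac.new(key, msg, hashlib.sha1).digest()        # HMAC-SHA1 per the RFC
    offset = mac[-1] & 0x0F                                # dynamic truncation offset
    binary = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(binary % 10 ** digits).zfill(digits)
```

The time-based variant (TOTP, RFC 6238) simply replaces the counter with the current Unix time divided by a 30-second step, which is how most phone authenticator apps work.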
Compare that to processor speeds, the weight and size of computers, and the huge effect of social media on the way we communicate, and suddenly, our so-called advances in security seem pathetic.
I don't have a magic solution to the problem — if I did, I'd probably be a millionaire — but I think part of the issue stems from a collective attitude that the challenging thing to do is to break systems, find flaws, or point out how dumb others are.
What makes us pay attention are the giant breaches, Anonymous and LulzSec pointing out how lame our security is, or the biggest, baddest zero-day to hit a system. Throw "nuclear facility" and "state-sponsored" in there, and watch out; we've got a badass here.
If you don't believe that the focus is on breaking things, take a look at the events and competitions held in the information security space. The majority fall into two broad categories: those about pointing out flaws or breaking systems, like the DefCon- and Black Hat-style events; and those highlighting how much we need protection, which is practically any analyst- or vendor-held event.
There are very few events that have securing a system as the sole focus, but try pitching that to hackers: "Here's your chance to apply the latest patches, look through logs, or audit payment systems for compliance!" They'd much rather go back to the fun of breaking something, which is much more challenging.
Or is it?
What most have yet to realise is that breaking things isn't the challenge any more — just grab a fuzzer, use Metasploit, or any number of automated tools, and you're bound to find something — it's keeping everything from being hacked that's the real deal.
Those selling protection say it's simple; even Australia's Department of Defence thinks that 85 percent of all attacks could be mitigated through four measures. But make no mistake: implementing them all is a challenge. After all, if it were so simple, everyone would be tightly secured.
The question is, when will we realise that the real challenge, the one worthy of undertaking and the one that will, in the end, provide us with greater innovation, isn't breaking an insecure system, but building a secure one?