Prevention is better than cure, and all that, but what about when an attack isn't preventable? What do organisations do to ensure that they know who hit them? Judging by how long it takes to get answers, I'd say they don't do enough.
It's become almost universally accepted that in the aftermath of an attack, an organisation is going to take a while to recover. You may even sympathise with them. There's a whole heap of issues to address, such as working with their hosting provider, checking what patches they actually had and searching for any evidence that the hackers left behind, all while trying to bring their server back online from back-ups and ensuring that it's no longer vulnerable. I think that how an organisation responds in the aftermath of an attack is the real test of its security. It separates those who understand their network from those who simply put up "security installed here" signs — the digital equivalent of dummy security cameras.
If organisations are really on top of their security, why is it that most take so long to complete their investigations, or never find out how they were attacked?
Is it a lack of technology that prevents speedy answers? I don't think so. We live in pretty interesting times, when consumers can back up to cloud services like Dropbox, or use their mobile phones to check in on their home computer. Despite this, forensic analysts are consistently appalled by large organisations that fail to enable even the most basic logging measures. Is it too difficult to ensure that these logs are turned on and periodically backed up somewhere safe? Something as simple as that would give organisations a better chance of catching some digital evidence of a hacker's rampage.
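To be clear about how low the bar is: the kind of basic measure I'm describing can be a few lines of shell run from cron. This is a minimal sketch, not a recommendation for any particular product or stack — the paths and file names here are hypothetical, and a real deployment would ship logs to a separate, write-protected host rather than a local directory.

```shell
#!/bin/sh
# Hypothetical example: take a dated, compressed copy of a server's auth log
# and stash it in a backup location, so evidence survives a compromise.
# In practice, ARCHIVE_DIR would be a remote or write-once destination.
LOG=/tmp/demo_auth.log
ARCHIVE_DIR=/tmp/demo_log_backup

mkdir -p "$ARCHIVE_DIR"

# Stand-in log entry so the sketch is self-contained.
echo "Jan  1 00:00:00 host sshd[123]: Accepted password for admin" > "$LOG"

# Compress today's copy; a cron entry such as
#   0 1 * * * /usr/local/bin/backup-logs.sh
# would run this daily.
gzip -c "$LOG" > "$ARCHIVE_DIR/auth-$(date +%Y%m%d).log.gz"
```

The point isn't this particular script; it's that an off-box copy of even one log gives investigators a trail that an intruder can't simply delete on the compromised machine.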
But instead of tracking unauthorised system access, organisations are left to find out about their hacked infrastructure from the hackers themselves, who post their spoils on sites like Pastebin.
I'm talking about hackers like Evil — an unemployed truck driver who taught himself how to hack, defaced the University of Sydney's website, signed off with his hacker alias, spent six weeks undetected on Platform Networks' systems and broke into Distribute.IT. This was the same hacker who apparently didn't have the skills to work in the IT industry.
Evil wasn't covert about what he was doing. He was like a burglar who kicked in the front door, ransacked the place, joined the family at the table for breakfast and then set the house on fire as he left. By the time the fire brigade arrived, everyone else was wondering how he had gone unnoticed. And yet, despite Evil's reckless behaviour, several organisations failed to clue in that something was amiss until they were well and truly burnt.
What about the governor-general's website recently? In two separate incidents, one a few days ago and one as far back as April, hackers found a way to break into the site and upload their calling cards. We're fortunate only that the two hackers who broke in decided not to do anything more malicious.
How about Stratfor? Logs leaked by the hackers show them laughing at Stratfor as they read their emails and ridiculed their initial inability to recognise that anything was wrong.
Perimeter security remains important in defending a network, but the reputational damage and embarrassment that these organisations have faced makes a fairly clear case for something more: the need to understand the activity on their own systems.
Slip-ups are expected at times, but with proper monitoring, companies should be able to tell their customers that they knew exactly where the problem was and took action immediately — rather than giving the impression that they have no idea how they were compromised, and that any investigation could take months.