Spearphishing and how to stop it: Some lessons from AusCERT

Targeted phishing attacks are how the vast majority of network intrusions start. Teach your people to protect themselves, and use data to improve your training.
Written by Stilgherrian, Contributor

"We know that the phishing people, and the malicious community, are measuring their return on investment, because they're getting better at this. We can see the progression in their emails. We can see it getting more sophisticated," said Laura Bell, founder of SafeStack.

"So why aren't we doing the same?" she asked.

"We spend hundreds of thousands of dollars on all sorts of training for our organisations, but we very rarely look at [questions like]: 'Actually, did it do the job?'"

Bell was speaking at the AusCERT Information Security Conference on Australia's Gold Coast last week, and she was scathing about the quality and effectiveness of most organisations' security awareness training. The need to tick the compliance box for training has us "racing to the bottom", she said.

"How many of you make your new starters sit through the security awareness video with the hacker and the gloves and the hood? Yeah? ... These videos are dangerous generic monsters, where we basically spend 25 minutes of our life wishing we could get 25 minutes back," Bell said.

"If you want a bit of fun, please go and Google 'information security awareness posters', and go and see what the world has to offer us. There is treasure," Bell said. "These do not work. This is not how humans learn. We've forgotten about the entire world of education, but we've found clip art."

Security awareness is about learning, and different people learn in different ways. Tactile learners need to play around with examples and try things for themselves, not watch a video. Verbal learners will want to read something instead.

"You have to tailor your tuition to these things. When we fail to do that, we fail to teach," Bell said.

Many organisations also get the timing wrong, running a once-off security awareness session on an employee's first day on the job, when they're already trying to process a massive amount of information. Organisations need to repeat the lessons at appropriate times, and they need to reward good behaviour.

Defending the organisation from intrusion requires attention to technology, processes, and people, but Bell said we're spending all of our time and money on the first two. Penetration testers can evaluate the effectiveness of technical defences. Auditors can evaluate the processes. But what about evaluating the people?

"You can pay people vast sums of money to come and test your organisation... You can get a red team in. They are phenomenally expensive, and they will do a wonderful job of destroying your organisation for the scope you've allowed them to test. You can get a social engineering pentest. That's fantastic, too. [But] these things might not be helping as much as you think," Bell said.

"There are very few people who've actually measured the return on investment for how much they're spending on security awareness."

Many organisations aren't even getting that far. According to a YouGov survey of UK workers released last week, nearly one in five said they'd never had any IT security training whatsoever. A mere 6 percent of UK employees surveyed had received training and guidance on phishing attacks.

The survey, sponsored by Blue Coat Systems, showed that even though social engineering attacks are becoming more complex, 54 percent of respondents said they would connect with strangers on social media, and 56 percent said they hadn't set up privacy controls on their social media accounts.

In New Zealand, the University of Otago has been analysing the impact of spearphishing attacks directed against it, starting with a series of attacks in June 2013, and trying to understand why people fell for them.

According to information security manager Mark Borrie, the university is an attractive target because it's a large organisation with good infrastructure and a good email reputation. This makes it a good staging point for attacking targets that trust the university, such as other universities.

When Borrie analysed the June 2013 attacks, he found several instances where the university's anti-spam defences had intercepted every phishing email it received, and yet accounts had still been compromised. It turned out that the emails sent to those staff members had never passed through the university's mail systems at all.

"In our online phone book, we had a large number of staff who list non-university email addresses. I had been fighting a battle for many years to not allow that sort of thing, but it's out of my hands," Borrie told the AusCERT conference.

"The reality was that we had an awful large number of staff who work in our medical schools, who actually are employed by health boards of the hospitals in the cities that our schools are in. And so they put down their primary email address, which in many cases is the hospital address."

So while the university's email systems could detect and block these phishing emails, some of the hospitals' systems could not.
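As an illustration only (not the university's actual process), the gap Borrie describes could be surfaced with a quick audit like the Python sketch below, which scans a hypothetical staff directory export for published addresses outside the organisation's own domains -- mail sent to those addresses never touches the organisation's filters. The file name, column names, and domain list are all assumptions.

import csv

# Hypothetical audit: flag directory entries whose published email address
# sits outside the organisation's own domains. Mail sent to those addresses
# bypasses the organisation's anti-phishing defences entirely.
TRUSTED_DOMAINS = {"otago.ac.nz"}  # illustrative value only

def external_entries(directory_csv):
    """Yield (name, email) pairs whose email domain is not a trusted domain."""
    with open(directory_csv, newline="") as f:
        for row in csv.DictReader(f):  # assumes 'name' and 'email' columns
            email = row.get("email", "").strip().lower()
            domain = email.rpartition("@")[2]
            if domain and not any(domain == d or domain.endswith("." + d)
                                  for d in TRUSTED_DOMAINS):
                yield row.get("name", ""), email

if __name__ == "__main__":
    for name, email in external_entries("staff_directory.csv"):
        print(f"External address listed: {name} <{email}>")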

Borrie found that the employees who fell for the phish were, in most cases, away from their desk, and using mobile devices -- which didn't necessarily display the email in full. It was usually outside business hours, too, either 10 or 11 o'clock at night, or first thing in the morning.

Compromised accounts started being used by the attackers, or their customers, in as little as five hours.

"That's a real take-home point for everybody... This expectation that we're going to ask people to work long hours, be on call to answer emails and queries at any time, has a huge downside, and that's about managing expectations," Borrie said.

As Bell told the conference, we want to be good at our jobs. We want to be helpful. And that's part of the reason we fail to spot the obvious signs of a potential phishing email.

"Most of the time, we don't even read what's there. One of our biggest human vulnerabilities is the assumption we know what's in front of us before we've actually read it," she said.

Organisations implicitly train users to respond to bad emails, said Borrie, by allowing inconsistent-looking email systems to be used. One legitimate type of University of Otago email, for example, came from the enrolment system, telling staff about a student timetable clash -- however, it was sent not from an address in the usual otago.ac.nz domain, but the username otago-m at an external .com domain, and contained a clickable link in a second, different external .com domain.

"Again, it's very, very difficult to get these guys here, who are running the system, to actually take basic steps to make the emails look much more accountable. So I just tell people to delete them, and then I ring up the departments and say, 'Do you know that half your emails are being deleted without being read?' It's their workflow," Borrie said.

Given the continued success of spearphishing attacks and the low rate of training, there's an obvious market here -- and companies are entering it. The South African-developed service Phish5 launched in 2013, and Virginia-based startup PhishMe recently raised $13 million in Series B funding for its enterprise anti-phishing training.

Yet all of this may soon be obsolete. Bell works with some fast-moving organisations that are actively trying to rid themselves of email.

"They now use chat platforms. They use Twitter as a business tool. We are living in a world where communication is interconnected in ways we do not understand. We are writing a set of tools for technologies that, realistically -- big government, sorry, you're probably about 10 years away -- but the rest of us, we're kinda starting to move away from this," she said.

The answer, Bell said, is to test humans with the same mindset we use to test technology.

"What if we can have that same cold-hearted killer instinct, where we don't care if things get hurt or upset, and apply it to people? That sounds great, right?"

Bell and SafeStack have developed an open-source software framework to automate this testing: The AVA Human Vulnerability Scanner.

In the same way that the Qualys vulnerability management tools or Tenable's Nessus map computer networks and seek out their vulnerabilities, AVA maps out an organisation's social networks -- the real network, not the one on the formal organisation chart -- and tests the communication pathways through them.

"At the moment, we treat all of our individuals, when we test them, as individuals. But we don't treat our computers in that way," Bell said. "When we penetrate test a network of computers, we look for the links between them, because linked computers are interesting. They give us a path through a network. So why don't we do this with people?"

AVA gathers information from sources including Active Directory, LinkedIn, Facebook, Twitter, external email providers, and soon Google Apps for Business. And it tests attack vectors ranging from removable hard drives to SMS.
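The article doesn't go into AVA's internals, but the core idea -- people as linked nodes, with paths between them -- can be shown with a toy graph. The Python sketch below is not AVA code; the names and relationships are invented. It uses a breadth-first search to find the shortest chain of people connecting an externally exposed role to a high-value target.

from collections import deque

# Toy illustration: treat people as nodes and observed communication
# channels as edges, then look for a path from someone who is externally
# exposed to a high-value target. All names and edges are invented.
social_graph = {
    "helpdesk":       {"it_admin", "receptionist"},
    "receptionist":   {"helpdesk", "exec_assistant"},
    "exec_assistant": {"receptionist", "cfo"},
    "it_admin":       {"helpdesk", "cfo"},
    "cfo":            {"exec_assistant", "it_admin"},
}

def attack_path(graph, start, target):
    """Breadth-first search: shortest chain of people linking start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbour in graph.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# e.g. a phish that lands on the help desk is only a couple of hops from the CFO
print(attack_path(social_graph, "helpdesk", "cfo"))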

Bell says that training against phishing needs to go well beyond the classic Nigerian pre-payment scam, or malware attached to what purports to be a topical news story.

"What if you've got someone on your first-line help desk who is desperate to please people, and stay in their job, and get a good job done? And what if a C-level emails them and asks them for their password? How many front-line people do you know who would be brave enough to tell the C-level to go forth and multiply?" she said.

"It's really hard for us to know if we will behave well in the face of threat if we are never tested. How many of you have ever seen someone walk into your building unannounced, who's not supposed to be there?... How many of you have ever consciously gone up to them and challenged them being there? It's hard, right?

"We need to create an environment where it's actually safe for us to go: 'This feels funny. I do not like this. Hello, people, look at this. Does this look strange to you?'"

Disclosure: Stilgherrian travelled to the Gold Coast as AusCERT's guest.
