RSA: Hack was like 'a spy novel'

The hack that stole RSA SecurID data was perpetrated by a nation state, according to the security company. RSA executive chairman Art Coviello talks to ZDNet UK about the attack
Written by Tom Espiner, Contributor

The breach of authentication data from security company RSA caused ripples across the globe in March. A number of governmental organisations, defence contractors, and corporations use RSA SecurID authentication tokens to allow employees to access sensitive data.

The attack involved the theft of SecurID data. According to the company, two teams of hackers working for a nation state carried out the theft.

Following the attack, RSA liaised with a number of concerned organisations. Many companies felt that RSA had not given enough timely information, RSA president Tom Heiser said in a keynote speech at the RSA Conference in October.

RSA executive chairman Art Coviello talked to ZDNet UK and described the attack as "the stuff of a spy novel". He added that RSA felt it had given its customers enough information to prevent use of the stolen data.

Q: Can you talk me through the attack on RSA?
A: Let's start with the following premise. A company was attacked to get at us. That's where the phishing emails came from.

Is that a premise, or is it what actually happened?
This is true. Why would someone want to steal security information?

To compromise customers who are using SecurID?
Right. Why would they want to do that? Because SecurID is keeping them from impersonating employees or partners or contractors, and it's a very effective technology.

If I can steal information from RSA to make it easier to attack others, then I might attack RSA. But the information the attackers got from RSA wasn't sufficient, in and of itself, to be useful. They still needed information that only the customer would have.

Let's say they could get the information the customer has, and combine that with the information from RSA. Then they could potentially impersonate one of the employees of the company being attacked, right?
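The principle Coviello describes, a one-time code derived from a secret seed plus the current time, can be illustrated with the standard TOTP algorithm (RFC 6238). SecurID's actual token-code algorithm is proprietary; the function below is an analogous public sketch, not RSA's implementation, and the parameter values are illustrative.

```python
import hashlib
import hmac
import struct
import time


def totp(seed: bytes, timestep: int = 60, digits: int = 6, now=None) -> str:
    """Time-based one-time code per RFC 6238 (HMAC-SHA1 variant)."""
    # Number of whole time steps elapsed since the Unix epoch.
    counter = int((time.time() if now is None else now) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(seed, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): read 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

The sketch makes Coviello's point concrete: the seed alone produces codes, but an attacker still needs the customer-held mapping of which seed belongs to which user's token, plus typically the user's PIN, before a code is useful against a real account.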

So they steal some SecurID information from RSA, but it only becomes useful when they get some information from, say, Lockheed Martin?
Right, as a hypothetical. Now, did they need that to get into another company? Couldn't they just get into that other company the same way they got into RSA? With a phishing email?

The answer, of course, is yes. But what they have in mind is to get in without being seen. What they have in mind is to get in, and steal whatever it is they want to steal, and erase any evidence that they've stolen it. That's the ultimate goal.

Wouldn't it become obvious that they had managed to steal the data?
Perhaps, but perhaps long after they'd used whatever information they'd taken.

If our customers adopted our best practices, which included hardening their back-end servers, it would now become next to impossible to take advantage of any of the SecurID information that was stolen.

And is that what happened?
We gave our customers best practices and remediation steps. We told our customers what to do. And we did it quickly and publicly. If the attackers had wanted to use SecurID, they would want to have done it quietly, effectively and under the covers. The fact that we announced the attack immediately, and the fact that we gave our customers these remediation steps, significantly disadvantaged the attackers from effectively using SecurID information.

So you're saying you blew their cover?
Exactly. We think because we blew their cover we haven't seen more evidence [of successful attacks].

I don't know that for a fact, but what I do know is this — SecurID information in and of itself couldn't have been used in an attack.

The attacker would need other information from a customer. We told the customer how to protect that other information. To date, there have been no losses as a result of the attack, and only one indication the information was even used in an attack, and that attack was unsuccessful.

It's not like we haven't chased down every single instance where a customer even suspected the information might have been used. Believe me, we are staying vigilant. We are keeping our eye on this. If we see any indication, we're all over it. But to date, seven months later, nothing.

Is that because nothing's happened, or is that because you believe that nothing has happened?
It's impossible to prove a negative, but law enforcement is looking into this, all kinds of people are looking into this. We're asking people to come forward if there's any evidence, and we're being very vigilant.

But we still maintain the fundamental fact that the information stolen from us in and of itself could not be used in a direct attack, and we think we gave people the right remediation steps.

Why couldn't the information be used in a direct attack?
Because it's incomplete. You needed several pieces of information. We were very humbled by this. I think we've learned a hell of a lot from it.

RSA has said there were two groups of hackers in the SecurID attack, and that one of the groups was 'less visible' than the other. Does that mean you found fewer traces of one of the groups?
Exactly. The first group was far more active.

I have to say, this was the stuff of a spy novel. We think it was a nation state. We think government agencies and defence contractors were primary targets.

When you found out about the breach, what was the immediate reaction, and what were the consequences of that reaction?
We worked the issue. We monitored the attackers. We were disappointed when we realised they'd exfiltrated information related to SecurID, and then we totally went into customer-focus mode. [We asked] how are we going to communicate this to customers, how are we going to make sure that we mitigate any potential risk, what exactly is the risk. Those were all discussions we had internally.

How did you decide the best way to mitigate the risk?
We understood that the information could not be used in a direct attack. We knew there was certain information held only by the customer, so we developed strategies to mitigate the risk to that information.

What was the information that was only held by the customer?
That concerned the front-end, end-user portion of the customer's deployment. Keep in mind, man-in-the-middle and man-in-the-browser attacks on one-time passcodes already existed. We wanted to make sure that, as a result of our breach, the attackers couldn't take more advantage of those kinds of situations.

We gave advice about that, and about protecting infrastructure generally. Specifically we gave advice on hardening the access to the [Microsoft System Center] Ops Manager where a lot of that additional information resided. We felt that was pretty important from a remediation standpoint.

The attackers don't have a complete piece of the puzzle.

Could the complete piece of the puzzle be gained through social engineering? A social-engineering attack was successful against you, why not against other companies?
A social-engineering attack doesn't necessarily have anything to do with SecurID. Social-engineering attacks have been designed to get around strong user authentication, and there have been social-engineering attacks designed to attack one-time passcodes, but beyond that there's been nothing else.

Could the attackers not do to the customer exactly what they did to you, to gain that piece of the puzzle?
Well, yes, but they might not necessarily be able to get all that information from the user. As I said, there are social-engineering attacks to get round one-time passcodes irrespective of what was taken from us.

The issue is the attackers would only be able to attack one employee at a time. We believe that would be a more obvious attack, and would be more visible, and so there would be less risk.

There have been a number of reports about attacks aside from Lockheed Martin — notably, L-3 Communications and Northrop Grumman.
I'm not going to respond to any specific attacks rumoured in the press. In every instance when a customer came to us, or when we proactively went to a customer, we never found any evidence related to us.

How many customers have you followed up with?
There haven't been that many instances. We spent time, and we continue to spend time, monitoring our customer base, making inquiries and making inquiries to law enforcement, because we want to understand if there's an issue. If there is an issue, there might be more information or additional protections we need to give to a customer.

How have you managed to reassure customers that they haven't been attacked using SecurID data?
We've been able to go in and look forensically for evidence, and we haven't found any.

Why have you decided to talk about this now? Do you feel that you're out of the woods?
No, we remain very vigilant. We've now done two APT [advanced persistent threat] events, one in the US and one in the UK. We've had numerous conversations with customers about the same content; the APT summits brought that content to a broader audience.

Which sectors came to the APT summits?
We focused primarily on government, critical infrastructure, financial services and companies with really important intellectual property.

Such as defence contractors?
Yes. I would say there was a pretty good cross-section of companies with some of the biggest security issues that exist today. The topic was a lot broader than just our particular situation; it was about sharing the experiences of many.

Who did RSA link up with to organise the summits?
In the US it was RSA and Tech America, and in the UK it was RSA and Intellect.

What were the general viewpoints on APTs? Do different sectors have different focuses? Or is everybody a bit jittery?
I think what people are worrying about is that we're seeing more compound attacks — attacking one company to get at another. A lot of the discussion was about information-sharing, and about how we could speed information-sharing.
