
Here's how the NSA decides to tell you about a zero day - or not

The White House has offered an insight into how the NSA and others make decisions about when to reveal software bugs – and which to keep secret.
Written by Steve Ranger, Global News Director

The White House has provided some detail on how the NSA and other US government agencies make decisions around whether to publicise tech security flaws they have discovered — or whether to keep them under wraps for intelligence purposes.

The recent Heartbleed bug has put the spotlight back on zero day flaws — hitherto unknown and unfixed security flaws — and how they are used by the US government as part of secret surveillance projects.

In a blogpost, White House cybersecurity coordinator Michael Daniel reiterated that the US government had no prior knowledge of the existence of Heartbleed, one of the most high-profile IT security flaws of recent times, but he acknowledged that the case had reignited debate about whether the government should ever withhold knowledge of a computer vulnerability from the public — that is, whether the intelligence or military benefits of a vulnerability outweigh the benefit to the broader internet of making the problem public and getting it fixed.

"In the majority of cases, responsibly disclosing a newly discovered vulnerability is clearly in the national interest," he said, but warned the downside of disclosure is that the US might "forego an opportunity to collect crucial intelligence that could thwart a terrorist attack stop the theft of our nation's intellectual property, or even discover more dangerous vulnerabilities that are being used by hackers or other adversaries to exploit our networks".

"Building up a huge stockpile of undisclosed vulnerabilities while leaving the internet vulnerable and the American people unprotected would not be in our national security interest," Daniel said.

"But that is not the same as arguing that we should completely forgo this tool as a way to conduct intelligence collection, and better protect our country in the long-run. Weighing these tradeoffs is not easy, and so we have established principles to guide agency decision-making in this area."

Daniel highlighted some of the issues he considers when an agency (the NSA or FBI, for example) wants to keep a vulnerability secret (an illustrative sketch of how these questions might be weighed follows the list):

  • How much is the vulnerable system used in the core internet infrastructure, in other critical infrastructure systems, in the US economy, and/or in national security systems?
  • Does the vulnerability, if left unpatched, impose significant risk?
  • How much harm could an adversary nation or criminal group do with knowledge of this vulnerability?
  • How likely is it that the US would know if someone else was exploiting it?
  • How badly does the US need the intelligence we think we can get from exploiting the vulnerability?
  • Are there other ways the US can get it?
  • Could the US utilise the vulnerability for a short period of time before we disclose it?
  • How likely is it that someone else will discover the vulnerability?
  • Can the vulnerability be patched or otherwise mitigated?
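
To make those questions concrete, here is a minimal, purely illustrative sketch in Python of how such a checklist might be weighed. Only the questions themselves come from Daniel's blogpost; the field names, 0-3 scores, weights and tally are invented here for illustration and do not describe the government's actual process.

  from dataclasses import dataclass

  # Illustrative only: the questions mirror Daniel's list above, but the field
  # names, 0-3 scores and the simple tally are assumptions made for this sketch.
  @dataclass
  class VulnerabilityAssessment:
      core_infrastructure_use: int   # how widely the vulnerable system is used (0-3)
      unpatched_risk: int            # risk if left unpatched (0-3)
      adversary_harm: int            # harm an adversary could do with it (0-3)
      would_detect_others: int       # likelihood the US would know others were exploiting it (0-3)
      intelligence_value: int        # how badly the intelligence is needed (0-3)
      other_ways_to_get_it: bool     # can the intelligence be obtained another way?
      short_term_use_possible: bool  # could it be used briefly and then disclosed?
      rediscovery_likelihood: int    # chance someone else finds it (0-3)
      mitigable: bool                # can it be patched or otherwise mitigated?

  def recommend(a: VulnerabilityAssessment) -> str:
      # Crude tally: factors favouring disclosure versus factors favouring retention.
      disclose = (a.core_infrastructure_use + a.unpatched_risk + a.adversary_harm
                  + a.rediscovery_likelihood
                  + (2 if a.other_ways_to_get_it else 0)
                  + (1 if a.mitigable else 0))
      retain = (a.intelligence_value + a.would_detect_others
                + (1 if a.short_term_use_possible else 0))
      return "disclose" if disclose >= retain else "retain"

  # Example: a flaw in widely used core infrastructure weighs heavily towards disclosure.
  print(recommend(VulnerabilityAssessment(3, 3, 3, 1, 2, True, False, 3, True)))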

The US government has until now provided little detail about its use of previously unknown vulnerabilities as part of surveillance, but following a number of revelations from former NSA contractor Edward Snowden it has been forced to respond.

Late last year President Obama's Review Group on Intelligence and Communications Technologies recommended that the National Security Council should manage a regular review of US government usage of zero-day attacks and said: "In rare instances, US policy may briefly authorize using a zero day for high priority intelligence collection, following senior, inter-agency review involving all appropriate departments."


While Daniel put the focus on intelligence and surveillance, it's also important to note that these undisclosed vulnerabilities aren't just used for surveillance — they're also the raw material that can be built into the new breed of cyber-weapons. For example, the Stuxnet attack on the Iranian nuclear programme (generally considered to be the work of the US) only worked because it used a number of zero-day exploits.

These vulnerabilities are thus also part of a secret arms race as many countries build up their cyberwarfare capabilities. And as 'cyber' — as the military and politicians like to call it — becomes a standard part of the military armoury, the demand for such unknown vulnerabilities will go up, not down. That's especially the case because each of these vulnerabilities tends to have a short shelf-life: once Stuxnet spread so widely after the initial attack, it became useless as a weapon.

"It's like dropping the bomb, but also [saying] here's the blueprint of how to build the bomb," Peter Singer, author of the recent book Cybersecurity and Cyberwar, said recently.

For the US it's a tricky balancing act; the intelligence community would argue that its rivals around the globe will use similar flaws and that it can't do its job without them. At the same time, undermining trust in internet technologies would be an economic disaster for the US.

As it stands, Daniel's list of criteria still leaves plenty of opportunity for the US to use zero-day flaws for its own secret missions, but the existence of what Daniel describes as "a disciplined, rigorous and high-level decision-making process for vulnerability disclosure" suggests that the agencies are at least aware of the risks of keeping these flaws to themselves. Indeed, no other government has discussed its use of software flaws so candidly.

However, it's also hard to see how the interests of the US intelligence agencies and those of the wider internet community can be easily aligned.
