Supply chain security is actually worse than we think

Most enterprises have no clue they're sitting ducks for average attackers of moderate skill, much less nation state-backed adversaries with unlimited resources.

Guest editorial by Haroon Meer. Meer is the founder of Thinkst, the company behind the well regarded Thinkst Canary. Haroon has contributed to several books on information security and has published a number of papers and tools on various topics related to the field. Over the past decade (or two) he has delivered research, talks, and keynotes at conferences around the world.

The recent SolarWinds mega-hack has managed to grab mainstream media headlines around the world, but the more I read, the more I think the press coverage has buried the lede.

The incident gets called a "supply chain" attack, which hints at wartime tactics and, I'm willing to bet, will launch a dozen VC-backed startups. People are (rightfully) worried about the knock-on effects, since the SolarWinds attackers had access to several other development houses and could have poisoned those wells too.

This is definitely scary but there's a hard, sobering truth below that actually makes this a bit worse than you might think.

An abstracted, low-resolution summary for those (very few) who haven't paid attention to the incident:

  • SolarWinds makes a network management product called Orion that is deployed on hundreds of thousands of networks worldwide;
  • Attackers broke into SolarWinds and made their way to the SolarWinds build environment;
  • They compromised the build pipeline to inject malicious code into the SolarWinds update process;
  • Networks all over the world updated themselves with this poisoned update;
  • (Now-compromised) SolarWinds servers worldwide attacked internal networks of selected organizations;
  • Almost nobody noticed any of this for months, until a security company discovered its own compromise.

Here are the four main reasons why it's actually worse than we think.

The state of enterprise security: While we've made progress in some areas of information security (e.g. the degree of knowledge and skill now required to exploit memory corruption bugs in modern operating systems), enterprise security is still stuck pretty firmly in the early 2000s. An enterprise network consists of an untold number of disparate products, duct-taped together through poorly documented interfaces, where the standard for product integration is often "this config works, don't touch it!". Any moderately skilled attacker will decimate an internal corporate network long before they are discovered, and the average time it takes to gain Domain Admin is measured in hours and days, not weeks or months.

Most organizations, sadly, don't know this. They know they spend money on security and they know they see charts with red and green boxes and arrows tracking progress. Most have no clue they're sitting ducks for average attackers of moderate skill, much less nation state-backed adversaries with unlimited resources.

Enterprise products: Even ignoring the weakness that comes from cobbling together many products (security at the joints), most enterprise products won't hold up very well to serious security testing. Heavyweight vendors like Adobe and Microsoft were publicly spanked into upping their game years ago, but it drops off pretty steeply after them. There's an interesting carveout for online SaaS companies, who have to build security competency since they run their own infrastructure, and compromising their products is the same as compromising them. But for products installed into an enterprise network, the incentives are horribly misaligned. Owning, say, Symantec's antivirus agent doesn't compromise Symantec; it compromises you (the one running it), and this separation makes all the difference.

Enterprise networks have too many moving parts: The past few years have seen creative hackers exploit software in places we never knew were running software. The Thunderstrike crew ran code on Apple VGA adaptors. Ang Cui has written exploits for monitors and office phones. Bunnie and xobs ran code on SD cards, and a number of people have now run Linux on hard drive controllers. This makes it clear that the average office network is connected to dozens of types of devices that won't ever make it into a regular audit, but that are nonetheless capable of hiding attackers and injecting badness into your network.

Third-party risk evaluations: The joke going around after the incident was that SolarWinds had negatively impacted hundreds of enterprises, but definitely passed their third-party risk evaluations. It's slightly unfair, but also true. We simply do not have a good way for most organizations to test software like this, and third-party questionnaires have always been a weak substitute. Even if we could tell whether a product meets a minimum security bar (using safe patterns, avoiding unsafe calls, using compile-time safety nets, etc.), automatic updates mean that tomorrow's version of the product might not be the product you tested today. And if the vendor doesn't know when they are compromised, then they probably won't know when their update mechanism is used to convert their product into an attacker's proxy.

I'm not saying auto-updates are bad; they solve important problems. But they do introduce a new set of variables that need to be considered.
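To make the "tomorrow's version might not be the product you tested" problem concrete, here's a minimal sketch of the kind of check an update pipeline can do: pin the cryptographic digest of the artifact that was actually reviewed, and refuse anything that doesn't match. The names and payloads are entirely illustrative (this is not how any particular vendor's updater works), and a real pipeline would verify signatures and provenance, not just a hash.

```python
import hashlib

# Hypothetical digest, recorded at the time the build was vetted.
# In practice this would come from a signed manifest, not a constant.
PINNED_SHA256 = hashlib.sha256(b"vendor-update-v1.0").hexdigest()

def verify_update(payload: bytes, pinned_digest: str) -> bool:
    """Return True only if the update bytes match the digest of the
    artifact that was actually reviewed; anything else is refused."""
    return hashlib.sha256(payload).hexdigest() == pinned_digest

if __name__ == "__main__":
    vetted = b"vendor-update-v1.0"
    tampered = b"vendor-update-v1.0 + implant"
    print(verify_update(vetted, PINNED_SHA256))    # True: same bytes we reviewed
    print(verify_update(tampered, PINNED_SHA256))  # False: silently changed update
```

The limitation is exactly the article's point: a pinned hash only tells you the artifact hasn't changed since you looked at it. If the implant was injected in the vendor's build pipeline before signing (as with SolarWinds), the "vetted" artifact is already poisoned, and this check passes.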

The current focus on "supply chain" security will no doubt see the VC-backed creation of next-gen startups claiming to solve the problem, but this part of the problem seems intractable. There's the "easy" suite of software you know about: applications installed on your infrastructure and their dependencies. But, for one, this ignores your vendors' own vendors. And what product is going to provide guidance on the provenance of the code running in your monitors (on processors we didn't even know were there)? Will we examine the firmware on the microphones people are now using for their Zoom calls? Will we re-examine them after every automatic update? There are far too many connected pieces of code to tackle the problem from this angle.

If it takes just hours or days to successfully compromise an internal network, and if the average network has enough hiding places for skilled attackers to burrow deep, what do you think happens when attackers are allowed to move around undetected for months? 

A bunch of analysts looking at the SolarWinds incident point out (correctly) that compromised SolarWinds servers were installed on so many networks that the ripples of this attack could spread exponentially. What this analysis misses is that the average enterprise runs dozens of SolarWinds look-alikes everywhere.

Ransomware didn't spring up overnight. Networks hit by ransomware were typically vulnerable for years and ran along blissfully unaware until attackers figured out a way to monetize those compromises. Most enterprises have been completely vulnerable to their vendors' horrible insecurity too; the SolarWinds incident just published a blueprint for abusing it.

The situation is dire not because we are fighting some fundamental laws of physics, but because we've deluded ourselves for a long time. If there's a silver lining, it's that customers will hopefully demand more from their vendors: proof that they've done more than work through compliance checklists, and proof that they'd have a shot at knowing when they were compromised. That more enterprises will ask, "How would we fare if those boxes in the corner turned evil? Would we even know?"
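"Would we even know?" is a detection question, and one cheap answer is the decoy approach the author's own company is known for: plant something on the network that no legitimate user ever touches, so any interaction with it is a high-signal alert. Below is a toy sketch of that idea, a decoy TCP listener that records whoever connects. It's an illustration of the concept only, not production code and not how any commercial canary product is implemented.

```python
import socket
import threading

class Canary:
    """A minimal network canary: a decoy TCP listener with no real
    purpose, so any connection to it is treated as an alert."""

    def __init__(self, host: str = "127.0.0.1", port: int = 0):
        self.alerts = []  # (ip, port) tuples of whoever touched the decoy
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self._sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self._sock.bind((host, port))  # port 0 = pick any free port
        self._sock.listen(5)
        self.port = self._sock.getsockname()[1]

    def serve_once(self):
        # Block until something touches the decoy, then record who.
        conn, addr = self._sock.accept()
        self.alerts.append(addr)  # in practice: page a human instead
        conn.close()

if __name__ == "__main__":
    canary = Canary()
    watcher = threading.Thread(target=canary.serve_once)
    watcher.start()

    # Simulate an attacker sweeping the network and probing the decoy.
    probe = socket.create_connection(("127.0.0.1", canary.port))
    probe.close()
    watcher.join()

    print(f"ALERT: decoy port {canary.port} touched by {canary.alerts[0][0]}")
```

The appeal of this pattern is that it sidesteps the intractable audit problem described above: you don't need to enumerate every piece of code on the network to notice when something on it starts behaving like an attacker.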
