There is a fortress atop a hill overseeing two rivers: one natural, the other a man-made canal. The first building made there was wooden, but by 1683, the Cossacks who had conquered Siberia had grown tired of soft perimeters. They sent for Russians from the south to begin the tradition of building impenetrable stone structures from the finest masonry, and for centuries they never stopped building. The fortress of Tobolsk -- prized by Peter the Great and revered for its stark and simple beauty -- became the center of its own self-sustaining security industry, around which the town and later all of Siberia would revolve. The word for the style of walled stone citadel that marks the center of the fortress is kremlin.
Anyone standing inside this kremlin's walls in 1982 would have witnessed one of history's greatest natural gas explosions just a few miles away. Its catalyst, wrote former US Air Force Secretary Thomas C. Reed in 2004's At the Abyss: An Insider's History of the Cold War, was a Trojan horse program planted by American agents in code they knew would be stolen by Soviet agents.
Though Reed's account remains disputed by folks who like certain things to be red, but not their own faces, US officials have long feared the retribution of some red-starred ghost.
In 2005, the Department of Homeland Security commissioned Livermore National Labs to produce a kind of pre-emptive post-mortem report [PDF]. Rather than wait for a vengeful ex-KGB hacker agent to ignite an American pipeline until it could be seen from space, the report issued recommendations for preventing an incursion that had never yet happened from ever happening again.
Recommendation No. 1 was this: Know your perimeter.
"What is the boundary of your network perimeter?" the report reads. "Is it simply the border gateway that separates your control system from other external networks? Is it at the firewall? What about a modem that connects directly to the SCADA [Supervisory Control and Data Acquisition] system or the field technician's laptop that gets connected to both the control network and untrusted networks (e.g., at home, hotel, or airport)?"
Once you mapped your network's access points, the report explained, you could essentially connect the dots to reveal your perimeter. From there, it advised that this perimeter must be defended, tested, and hardened.
It was the correct recommendation for defending a gas pipeline's SCADA system, circa 1982. But its principal presumptions -- that everything such a system should protect was on the inside, and everything that would threaten it was on the outside -- had already been rendered obsolete.
"The perimeter model is dead," pronounced Bruce Schneier, author of the New York Times best seller Data and Goliath and CTO of IBM Resilient. "But there are personal perimeters. It doesn't mean there exists no perimeters. It just means it's not your underlying metaphor any more. So I wouldn't say to anyone running a corporate network, 'There are no perimeters, zero.'"
In this second leg of our journey for ZDNet in search of security for the modern, distributed data center, we consider this most bizarre of possibilities: A security perimeter may be established around the people who use systems, with a more immediate effect than building more walls around the "kremlins," if you will, of those systems themselves.
Imagine a fantasy story where a kingdom's last castle is about to fall. Its mightiest magician casts a spell that produces individual castles around every person in the kingdom, both citizens and would-be attackers. Wherever they travel, they take their castles with them. No one could amass an army to penetrate any single barricade without breaking through his own first. Each fortress would be the entire universe and a complete prison. It would be a blessing and a curse.
It is the absolute inverse of a software fortress. And it is actually being tried now.
The perimeter is dead; long live the perimeter
"As long as you have internal, flat networks that can be accessible, then the users who connect to them are always over-entitled," explained Randy Rowland, chief product officer of Cyxtera Technologies, speaking with ZDNet. "They always have network access to more than what they need to do their job."
Cyxtera is a firm launched just last May after a pair of private equity firms spent $2.8 billion to acquire CenturyLink's North American data center portfolio. One of those firms was headed by Manny Medina, the former CEO of Terremark -- itself a major data center provider acquired by Verizon in 2011, only to have Verizon sell those assets to Equinix for $3.6 billion last May. As major telcos exit the data center market, colocation firms are taking over, but not without also acquiring the security technologies they need to ensure access to resources hosted in their new facilities.
In the deal that created Cyxtera, BC Partners and Medina's firm also acquired a security company called Cryptzone. It was not an accident or an afterthought, but a concerted strategy to collect a braintrust of engineers responsible for creating an evolving concept called the Software-Defined Perimeter (SDP).
"We absolutely believe the cloud technologies -- even the public cloud -- are starting to completely eat or eliminate the perimeter as it exists," remarked Rowland. "And we feel a heavy burden at Cyxtera to get the message out about the ability to make dynamic, one-to-one connections from user to service, versus the perimeter model. Because as long as the perimeter model exists, we're going to continue to read news articles where people have been compromised."
It's a model backed by the Cloud Security Alliance (CSA), the non-profit research organization representing the security interests of cloud service providers. At the RSA Security conference in San Francisco in early 2017, Cryptzone's Jason Garbis -- who leads the CSA's IaaS initiative, and is now a VP at Cyxtera -- presented the latest version of the inspiration behind SDP.
"One of the foundational network security components is a firewall," said Garbis. "And if we think about what a firewall rule looks like, it's very simple. It says, 'Packets from this IP address are allowed to go to that IP address.' And that isn't meaningful to anybody. There's no indication in that of why that rule exists... Under what conditions should those packets be able to flow? What we really need is a way to align our security approach with what the business and compliance teams really want, which is a shift to something that's identity-centric."
TechRepublic: How to become a cybersecurity pro
As Cyxtera's Rowland explained to us, once the identity and permissions of a user (typically called a "security principal") are verified, the SDP system creates a "hardened" firewall rule. What makes it hardened is that it generates network addresses only for the resources to which this user has access; no addresses exist for anything that isn't supposed to be accessible. Every network path represents an already verified entitlement, and that path exists only between the resource and the user entitled to access it, or between the service and the user entitled to issue an API call to that service.
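To make the contrast between Garbis' IP-to-IP rule and an identity-centric entitlement concrete, here is a minimal Python sketch. All class and field names are hypothetical, invented for illustration; this is a sketch of the concept, not Cyxtera's actual implementation or API.

```python
# Sketch: a raw firewall rule versus the verified entitlement it is
# compiled from. Hypothetical names throughout -- illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class IpRule:
    # The traditional rule: packets from src may reach dst.
    # Nothing records *why* the rule exists, or for whom.
    src_ip: str
    dst_ip: str

@dataclass(frozen=True)
class Entitlement:
    # The identity-centric version: who may reach what, and why.
    principal: str   # the verified user ("security principal")
    resource: str    # the service the user is entitled to reach
    reason: str      # the business/compliance justification

def harden(ent: Entitlement, addr_for: dict) -> IpRule:
    """Compile a verified entitlement down to a one-to-one network rule.

    Addresses exist only for resources the principal is entitled to;
    anything else simply has no path."""
    if ent.resource not in addr_for:
        raise PermissionError(f"no path to {ent.resource}")
    return IpRule(src_ip=addr_for[ent.principal],
                  dst_ip=addr_for[ent.resource])

# Only 'payroll-db' is addressable for alice; 'hr-db' has no path at all.
addrs = {"alice": "10.0.0.7", "payroll-db": "10.1.0.3"}
rule = harden(Entitlement("alice", "payroll-db", "monthly payroll run"), addrs)
print(rule)  # IpRule(src_ip='10.0.0.7', dst_ip='10.1.0.3')
```

The point of the sketch: the one-to-one `IpRule` still exists at the bottom, but it is now derived from, and traceable back to, a named principal and a stated reason.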
"It's actually implementing a traditional, perimeter-based idea," he said. "It's just doing it in a more precise manner, so it doesn't create over-entitlement."
We took some liberties with a map of old Fort Pitt to demonstrate the SDP principle at work, and how it would remap the working environment for IT security. We begin with a map of the old software fortress in its 2003 incarnation, back when it was the "New Fort."
This is the ideal of the hardened perimeter, where every asset and every user protected by the outer wall can be presumed trusted. If you've made it "in," obviously, you passed the test. For this map, "in" is represented by A, the "Zone of Trust."
The guards of Roger Sessions' version of the fortress model guard their stations at B, overseeing the perimeter P. While the internally installed applications K and internal data L are well protected within the center by Sessions' wall F, trusted users are allowed to produce their own documents and other data structures which will eventually populate the "City of Protected Users," represented by U.
The typical remote entry would take place from an outside client H. In order to gain admission, that client must pass messages back and forth over the bridge with what Sessions called an envoy E. That envoy would then present credentials through the gateway G, to be evaluated by a set of rules processed at the firewall C. Once those credentials are validated and cleared, the cryptographic keys are generated to produce and protect the Virtual Private Network (VPN) V.
In this re-imagining of the Fort Pitt map circa 2017, the modern counterparts of the old software fortress' components retain their letters from before, but most of them have now been moved outside the fortress. Here you see the new perimeter idea as it would truly be modeled: as a kind of micro-fortress around each remote client H. If there are such things as zones of trust A, they would exist within the virtual space inside each perimeter. The Internet, in this case, provides the highway over which clients are linked to the corporate data center's downgraded fortress (Pittsburgh residents will please forgive me for borrowing I-279). The old fort still retains some of its old components, including internal applications K and an internal firewall C, but now it also has a monitor component R that oversees the transactions in this space, evaluating their behavior for regularity.
Notice now that each client has its own internal resources K and L, which, from its vantage point, appear to be local. Even if the sources of those components are inside the fort or in the public cloud, SDP would enable them to be perceived as local in the context of everything else inside each zone of trust A. Picture a virtual PC for everyone, although the internal connections are actually continually monitored, encrypted sessions between resources in the network.
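A short Python sketch of that per-client view: each client's zone of trust is computed from its entitlements, so entitled resources appear local, and everything else is not merely blocked but absent from the map entirely. The names here are illustrative, not any vendor's API.

```python
# Sketch: each client sees only its entitled resources, resolved to
# local-looking endpoints. Non-entitled resources do not appear at all.
# Hypothetical names -- illustration of the SDP idea, not a product.
entitlements = {
    "client-h1": ["app-K", "data-L"],
    "client-h2": ["app-K"],
}

def zone_of_trust(client: str) -> dict:
    """Return the client's private view of the network: entitled
    resources mapped to endpoints that look local, wherever the
    resources actually run (in the fort, or in the public cloud)."""
    return {res: f"local://{res}" for res in entitlements.get(client, [])}

print(zone_of_trust("client-h1"))
# {'app-K': 'local://app-K', 'data-L': 'local://data-L'}
print("data-L" in zone_of_trust("client-h2"))  # False: not blocked -- invisible
```

This is the "blessing and a curse" of the magician's spell made literal: the client's entire addressable universe is its own entitlement set.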
One realization you may come to as you study the revised model closely is this: There are no "remote" users any more. Once the distinctions between "inside" and "outside" have been effectively erased, a user across the river, if you will, would be treated exactly the same as one inside the office's home peninsula.
As Cyxtera's Randy Rowland told us, his firm's implementation of SDP would be governed by a controller that serves as the main arbiter and enforcer of the policies with which networks are created. (Not all SDP models refer to such a component.) Each client's view of the network is substantiated by a set of policies enforced at each client's gateway G. Cyxtera's name for these policies is live entitlements.
"It's not a static thing; it's something that is active and living," described Rowland. "Once entitlement has gone through its approved policy and has been handed down from the controller, the client takes that entitlement and the gateway creates a micro-firewall instance, where the only rule set in that micro-firewall is that entitlement. That's how we get cloud scale. Instead of having these huge, monolithic, perimeter-based firewall devices, if I can break it into tokens or into entitlements that I can distribute across multiple gateways, now we can scale as large as the cloud itself, and still give that micro-firewall, and the entitlement that's required to access a system, complete autonomy."
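Rowland's controller-to-gateway flow can be sketched in a few lines of Python: a controller approves an entitlement against policy, hands it down, and a gateway spins up a micro-firewall instance whose only rule is that entitlement. All names here are hypothetical; this is a sketch of the described architecture under those assumptions, not Cyxtera's implementation.

```python
# Sketch of "live entitlements": controller approves, gateway creates a
# micro-firewall with exactly one rule per entitlement. Because each
# instance is tiny and autonomous, entitlements can be distributed
# across many gateways instead of one monolithic firewall.
import uuid

class Controller:
    def __init__(self, policies):
        self.policies = policies  # {principal: set of allowed resources}

    def issue(self, principal, resource):
        """Hand down an entitlement only if policy approves it."""
        if resource not in self.policies.get(principal, set()):
            return None  # no entitlement means no network path is created
        return {"id": str(uuid.uuid4()),
                "principal": principal,
                "resource": resource}

class Gateway:
    def __init__(self):
        self.micro_firewalls = {}  # one instance per live entitlement

    def admit(self, entitlement):
        # The micro-firewall's entire rule set IS this one entitlement.
        self.micro_firewalls[entitlement["id"]] = (
            entitlement["principal"], entitlement["resource"])

    def allows(self, principal, resource):
        return (principal, resource) in self.micro_firewalls.values()

controller = Controller({"alice": {"billing-api"}})
gateway = Gateway()
ent = controller.issue("alice", "billing-api")
gateway.admit(ent)
print(gateway.allows("alice", "billing-api"))  # True
print(controller.issue("bob", "billing-api"))  # None: never reaches a gateway
```

Scaling out, in this picture, means adding gateways and spreading entitlements across them; no single device ever has to hold the whole rule set.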
It's an explanation that seems to adopt some of the motif, if not the meaning, of the microservices model to which Adrian Cockcroft introduced us at Waypoint #1. There, services can be scaled up or down because they've been decoupled from the underlying body of code, and from the infrastructure. The ideal of autonomy meshes well with the idea that a service orchestrator should not be a micro-manager. And there's a hint of elegance in the notion that a policy component should serve as the fundamental building block for a virtual network in itself. This bodes well for any hope we may have that something inspired by the old security model of the data center can be applied to the new operations model.
Bang, bang: Maslow's silver hammer
"I think it's Maslow's Hammer," declared Chet Wisniewski, principal research scientist with IT security provider Sophos. "When all you have is a hammer, everything looks like a nail. All I have is a network; I have to create a perimeter so I can control the network for security reasons, because I can't do it any other way."
One could easily come to the conclusion that anything so frequently and vociferously declared dead for so long by so many cannot possibly be dead. In Wisniewski's world, security engineers, researchers, and advocates continue to describe the threats to networks as encroaching upon the perimeter -- usually before joining the chorus singing a requiem for the perimeter.
"The perimeter is dead. Don't create new perimeters, don't create ten thousand perimeters," he warned us. "There hasn't been a perimeter already for ten years, which is why everybody's breached every other day -- because they still think there's a perimeter. I don't think we'll ever have a perimeter again, because it's impractical and it's not really the right way to solve the problem."
Cyxtera's Randy Rowland cautioned that SDP should not be confused with a "soft perimeter."
"If you think about 'software-defined anything,' sometimes it sounds like it is not as rigid as a physical device," Rowland remarked. Cyxtera's SDP implementation, he said, represents a firewall as software, spinning up each instance when a one-to-one connection is needed.
That may be the case today. But four years ago, when CSA CEO Jim Reavis unveiled the concept of SDP for the first time at his annual group summit, a soft perimeter was exactly how it was presented.
"The traditional fixed perimeter model is rapidly becoming obsolete," stated the CSA's December 2013 white paper [PDF], "because of BYOD and phishing attacks providing untrusted access inside the perimeter and SaaS and IaaS changing the location of the perimeter. Software defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the traditional model's value of invisibility and inaccessibility to 'outsiders,' but can be deployed anywhere -- on the internet, in the cloud, at a hosting center, on the private corporate network, or across some or all of these locations."
For the SDP model to be completely successful, it would need to protect the newest and most highly distributed hybrid cloud data centers -- the ones with all the microservices. Perhaps SDP erodes our time-tested notions of perimeters, forged from both the fire and the fantasies of the 17th through 20th centuries. But once we begin tackling the very real problem of protecting data centers both for and from all those millions of users that are in fact inanimate -- i.e., not people at all -- that corrosive agent may actually show signs of weakening.
On the next leg of our journey through perimeters old and new, we ponder the not-so-metaphysical problem of attributing identity to everything and everyone in a rapidly shifting network. Next, we'll take a hard look at whether we will need artificial intelligence to help us attain the control over our network behaviors that we may not achieve with identity alone. Until then, hold fast.
Journey Further -- From the CBS Interactive Network:
- BT unveils bandwidth on demand under Dynamic Network Services by Corinne Richert, Mobility
- Citrix looks to cloud, security, analytics to power 'future of work' at Synergy by Conner Forrest, TechRepublic
- Cloud Security Alliance lays out security guidelines for IoT development by Asha McLean, Internet of Things
- We need to turn our security model inside out by Lori MacVittie, F5 Networks
- RSA 2016: There Is No Cloud Security Stack Yet by Scott M. Fulton, III, The New Stack
- The Cloud is Evolving Faster than Cloud Security by Scott M. Fulton, III, CMSWire
The race to the edge:
The race to the edge, part 1: Where we discover the form factor for a portable, potentially hyperscale data center, small enough to fit in the service shed beside a cell phone tower, multiplied by tens of thousands.
The race to the edge, part 2: Where we come across drones that swarm around tanker trucks like bees, and discover why they need their own content delivery network.
The race to the edge, part 4: Where we are introduced to chunks of data centers bolted onto the walls of control sheds at a wind farm, and we study the problem of how all those turbines are collected into one cloud.
Our whirlwind tour of the emerging edge in data centers makes this much clear: As distributed computing evolves, there's less and less for us to comfortably ignore.