
Machine learning and the spectre of the wrong solution

Where a beautiful relic of history stares us in the face to remind us that if we take all the time in the world to render our most perfect ideas, reality will leave us in the dust.
Written by Scott Fulton III, Contributor
[Photo: a scale model of old Fort Pitt in the 1760s, taken at the Fort Pitt Block House Museum in central Pittsburgh, Pennsylvania. Credit: Scott Fulton]

Technically, it should have been a brilliant idea: building up a fortress by building down the earth around it. The British built Fort Pitt at the convergence of three great rivers -- the Ohio, Allegheny, and Monongahela -- to stand as a monument to their empire's control over a main route of transportation and shipping. It was touted as impregnable.

But during the mere eleven years of British control, that claim would only be tested once, in 1763, by a coalition of Native American tribes led by the Ottawa chief Pontiac. Though the fort sustained itself admirably, it didn't exactly serve as a bastion of firepower. Unable to push back against the siege, the fort's commander, according to records, received a truce delegation, and then tried to inject a virus into their system. Specifically, the British offered gifts of blankets and silk handkerchiefs that had belonged to victims of smallpox.

Amazingly, just months before the first sparks of the American Revolution, Britain sold the fort to private landowners. It would become an American base, and even then wouldn't be tested by fire. Something about there being three rivers, coupled with the perpetual presence of gravity, led to an unwelcome abundance of water.

Fort Pitt -- which exists today only in scale models and restored maps -- is the most superb example we may find of people applying their collective brilliance to create a solution to a problem they would rarely ever face, and not quite the right solution at that.


A day at the races

We wind up this journey in search of the new perimeter for IT security at a location suspiciously reminiscent of a place we visited last month: the hyperscale data center, which in our model has set up shop at the edge closest to the customer. It's in these boundless environments, where containerization and microservices have already taken root, that we can begin demanding some brutal honesty from ourselves on this issue: Does identity, the way we have utilized it in modern computing, actually give us the tools we need to manage the chaos of distributed systems effectively? Or have we pulled off what Groucho Marx would call a "quadfecta": looking for trouble, finding it everywhere, diagnosing it incorrectly, and applying the wrong remedies?

CoreOS CTO Brandon Philips perhaps has the best vantage point to give us clues. His firm produces the rkt (pronounced "rocket") containerization system, as well as a commercial version of Google's Kubernetes orchestrator platform, called Tectonic. As Philips explained, Kubernetes maintains a comprehensive audit log of all activities that lead to a change in the system. At the top level, each activity is attributed to an identity and, for now, that identity is personal. Tectonic federates with whatever identity management system an enterprise is already using, such as an LDAP directory. And each personal account that LDAP identifies is assigned a role, whose profile establishes the account's permissions.
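To make that flow concrete, here is a minimal sketch -- in Python, with hypothetical names and a deliberately simplified permission model, not Kubernetes' actual audit machinery -- of how an audit log can attribute every state-changing action to a federated identity and its role:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical illustration: a role maps an identity (federated from,
# say, LDAP) to a set of permitted actions, and every change attempted
# against the cluster is recorded against that identity.

@dataclass
class Role:
    name: str
    permissions: frozenset  # e.g. {"pods:create", "secrets:read"}

@dataclass
class AuditEntry:
    timestamp: datetime
    identity: str      # the personal account, e.g. "jsmith@corp.example"
    action: str        # e.g. "pods:create"
    allowed: bool

class AuditLog:
    def __init__(self):
        self.entries: list[AuditEntry] = []

    def record(self, identity: str, role: Role, action: str) -> bool:
        # Check the role's profile, then log the attempt either way.
        allowed = action in role.permissions
        self.entries.append(AuditEntry(datetime.now(timezone.utc),
                                       identity, action, allowed))
        return allowed

# Usage: every state-changing activity passes through the log.
ops = Role("operator", frozenset({"pods:create", "pods:delete"}))
log = AuditLog()
log.record("jsmith@corp.example", ops, "pods:create")   # True, and logged
log.record("jsmith@corp.example", ops, "secrets:read")  # False, still logged
```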

TechRepublic: Kubernetes: The smart person's guide

At a deeper level, Philips believes that the physical machines (as opposed to virtual machines) that support Tectonic and the other software platforms on the cluster should provide incontrovertible identities to the environment, ideally by way of a Trusted Platform Module (TPM). This would enable the sources of problems within the environment to be traced to particular servers. Much of the work on machine identity for containers remains in progress, he said.
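Philips didn't describe an implementation, but the attribution flow might look something like the following sketch. A real TPM signs with a private key that never leaves the chip; here a per-machine HMAC key stands in for it so the example stays self-contained, and all names are invented:

```python
import hashlib
import hmac

# Conceptual stand-in for TPM-backed machine identity: each machine
# holds a key the cluster trusts, every event it emits is signed with
# that key, and a signature can be traced back to exactly one machine.

MACHINE_KEYS = {                      # registry the cluster trusts
    "node-a1": b"tpm-sealed-key-a1",  # hypothetical sealed keys
    "node-b2": b"tpm-sealed-key-b2",
}

def sign_event(machine_id: str, event: str) -> str:
    key = MACHINE_KEYS[machine_id]    # on real hardware, the TPM signs
    return hmac.new(key, event.encode(), hashlib.sha256).hexdigest()

def attribute_event(event: str, signature: str) -> str | None:
    """Return the machine whose identity vouches for this event, if any."""
    for machine_id, key in MACHINE_KEYS.items():
        expected = hmac.new(key, event.encode(), hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            return machine_id
    return None

sig = sign_event("node-a1", "oom-kill pod=checkout-7f9")
print(attribute_event("oom-kill pod=checkout-7f9", sig))  # "node-a1"
```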

No TPM (Trusted Platform Module) in your PC? No problem, here's how to fit one

As for a container itself, he told us, Kubernetes (and by extension, Tectonic) requires it to be identifiable. The orchestrator actually provides it with something Philips describes as an identity. Part of it is generated at the time the container is instantiated, but the rest is sourced directly from the user. The roles granted to the attributed user (and in the future, its personas or permission profiles) are inherited by the containers spun up under the user's account. Certain pieces of data that are necessary for containers to work together are, naturally enough, called secrets. Without something serving as the identity provider in the system, there would be no way for containers to exchange these secrets.
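As a rough illustration -- hypothetical names, not CoreOS's actual mechanism -- that inheritance chain might look like this: the container's identity pairs a generated instance ID with the launching user's account, and the inherited roles gate access to shared secrets:

```python
import uuid
from dataclasses import dataclass, field

# Hypothetical sketch: a container identity is part generated (the
# instance ID) and part sourced from the user (account and roles), and
# the inherited roles decide which shared secrets the container may read.

@dataclass
class ContainerIdentity:
    user: str                       # sourced from the launching account
    roles: frozenset                # inherited from that account
    instance_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class SecretStore:
    def __init__(self):
        self._secrets = {}          # name -> (required_role, value)

    def put(self, name, required_role, value):
        self._secrets[name] = (required_role, value)

    def get(self, name, ident: ContainerIdentity):
        required_role, value = self._secrets[name]
        if required_role not in ident.roles:
            raise PermissionError(f"{ident.user} lacks role {required_role}")
        return value

store = SecretStore()
store.put("db-password", "db-reader", "s3cr3t")
web = ContainerIdentity(user="jsmith", roles=frozenset({"db-reader"}))
print(store.get("db-password", web))  # containers with the role can share it
```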

Here is where things could get sticky: The Software-Defined Perimeter (SDP) would create little pocket universes for every user account, each of which has an exclusive and restricted view of the network based on the account's role and on the rules that apply specifically to it. Containerization works by limiting each container to addressing all the others through the network, using a common Web API protocol that can be monitored and audited. Confining a user's view of a network makes perfect sense if all that user is doing is logging onto services. But if each container's network is confined to only the other containers the user is explicitly permitted to see, the universe won't be large enough for distributed services to happen.
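A toy example makes the conflict visible. Suppose the SDP grants an account a view of only the services its role names, while the orchestrator wires up the links the application actually needs (all names below are hypothetical):

```python
# Hypothetical sketch of the tension described above: the SDP grants
# each account a restricted view of the network, while the orchestrator
# expects every container in an application to reach its collaborators.

sdp_view = {                         # what the account is permitted to see
    "frontend": {"api"},             # frontend may address only the API
}

orchestrator_links = {               # what the application actually needs
    "frontend": {"api", "session-cache"},
    "api": {"database", "session-cache"},
}

for source, needed in orchestrator_links.items():
    visible = sdp_view.get(source, set())
    blocked = needed - visible
    if blocked:
        print(f"{source}: SDP view blocks required peers {sorted(blocked)}")

# frontend: SDP view blocks required peers ['session-cache']
# api: SDP view blocks required peers ['database', 'session-cache']
```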

Theoretically, every container should at least be able to address all the others which the orchestrator has created for the express purpose of communicating with one another. Widening the scope of SDP to permit this to happen could conceivably render its own confinement abilities rather pointless.

But CoreOS' Philips is actually hopeful. He foresees a merger of SDP's confinement concept with the notion of a failure domain -- a way of paring any container's working environment down to as few components as possible that could cause its pod to fail.

TechRepublic: Containers: The smart person's guide

"The way Kubernetes is built, we give least privilege -- as much as possible -- to the components of the system," said Philips. "That's what we're building for in Kubernetes as well." As an example, he cites how Kubernetes grants each of the nodes (units of virtual compute power) within a cluster a unique identifier. In the future, when the orchestrator traces the cause of poor behavior to any specific node, it can order that node shut off.

All of this good work, however, holds up only so long as the critical flaw in such an identity-centric system of people and things is somehow overlooked. And Philips can't overlook it. Presently, he noted, Kubernetes' API server has administrative access to everything in the cluster. Conceivably, workloads within those clusters could be partitioned, but the means for doing so are a) somewhat artificial, and b) under construction.

"The challenge is, at the end of the day, some human has to have some view of the entire system in order to be useful at all," he told us. In the context of an automated container operator that needs to see what it's doing when it's creating a cluster, that makes perfect sense. But if that operator's identity is tied to a person, then the restrictions for that person must be as open to the entire network as it would be to the orchestrator as a whole.

This, perturbingly, is the perennial problem with what Cyxtera's Jason Garbis described as "user-centric" security: Arguably, every human being's account should be restricted. For practicality, a software service that oversees a network should oversee the entire network. When that service's account is irrevocably bound to a user account, the needs of the two classes come into conflict. It's what has made PC operating systems so vulnerable for so long, and it threatens to open a similarly exploitable hole here and now.


Patterns of un-learning

Could the behavior of any entity with broad visibility -- be they administrators or microservices -- be monitored by an automated service effectively? More to the point, could such a service learn what normal behavior in such a system should be, and thwart the aims of any entity -- identified or not -- that could negatively impact the function of a system? And armed with that knowledge, could the system signal a firewall or a gateway in time for it to enforce a rule stopping the impactful behavior?

Machine learning algorithms generally work by being presented with multiple samples of data, in particular states or over certain periods of time, so that "weights" tracking the internal variances among those samples can be used as sensors of a sort, to spot patterns. Once a large number of samples have been scanned, an ML system is said to have "learned" the underlying pattern, even without a comprehensive mathematical or geometric formula to reproduce that pattern in a graph.
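Here is the idea in miniature. A running mean and variance -- the humblest possible set of "weights" -- learned from samples can flag a new sample that falls far outside the learned pattern. Production ML systems are vastly richer than this, but the shape of the computation is the same:

```python
import math

# Minimal sketch: learn a pattern from samples (Welford's online mean
# and variance), then use it as a sensor that flags outliers.

class PatternSensor:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def learn(self, x: float) -> None:
        # Welford's online update of mean and sum of squared deviations.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x: float, threshold: float = 3.0) -> bool:
        if self.n < 2:
            return False                         # not enough samples yet
        std = math.sqrt(self.m2 / (self.n - 1))
        return std > 0 and abs(x - self.mean) / std > threshold

sensor = PatternSensor()
for latency_ms in [12, 14, 11, 13, 12, 15, 13]:  # "normal" samples
    sensor.learn(latency_ms)
print(sensor.is_anomalous(90))                   # True: far outside pattern
```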

TechRepublic: Machine learning: The smart person's guide

The patterns of a human user's behavior may be extremely difficult to normalize. But as MapR Chief Application Architect Ted Dunning tells us, the behavior of other programs -- especially very small ones -- may be easier to learn. And the more replicas of them there are, the sooner such a system can craft firewall-like rules to keep them in line.
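A hypothetical sketch of why replicas help: pooling samples from identical copies of a small service yields a usable baseline quickly, and that baseline can be turned directly into a firewall-like rule (the service names, figures, and rule format here are invented):

```python
from statistics import mean, stdev

# Requests-per-minute observed across replicas of one small service.
# Identical replicas multiply the available samples, so a baseline
# arrives sooner than it would for a single, variable human user.
replica_samples = {
    "checkout-1": [42, 45, 44],
    "checkout-2": [43, 41, 46],
    "checkout-3": [44, 42, 45],
}

pooled = [s for samples in replica_samples.values() for s in samples]
baseline, spread = mean(pooled), stdev(pooled)

# Emit a rule a gateway could enforce: block rates far above the pool.
limit = baseline + 4 * spread
print(f"RULE: service=checkout max_requests_per_minute={limit:.0f}")
```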

"In a situation where simple rules fail, machine learning can provide a very key boost," said Dunning. "The problem there is that the domain over which you have to define the roles, is too big for humans -- too big to even define categories of Web pages, and too big to define the scope of human activities." Some businesses may provide exceptions -- for example, call centers where operators are restricted in which buttons they can click on and when -- but even those are extreme cases.

"Patterns exist even more strongly with automated systems," he continued, offering some real-world experiences as examples. With fraud detection systems that wade through financial transactions, inconsistencies with payroll deposits (which may or may not be attributable to fraud) turn up almost instantly. Compulsive gamblers, Dunning noted, tend to frequently draw cash from the ATMs at casinos. As time progressed, more variable human-triggered patterns would eventually show up. But in situations like this, he said, "the mechanized things show stronger and more reliable patterns than the humans did."

F5 Networks' principal technical evangelist, Lori MacVittie, remains skeptical. "I don't know that machine learning is the answer," she told us. "I'm not convinced that what we call machine learning is really anything other than a really fast decision tree -- which is not really A.I. but an expert system, which is a completely different class of construct than true machine learning and A.I., which is very complex and not something that most people can actually manage.

"That said, what are they learning?" she continued. "Are they going to learn about the behavioral patterns of every user? Of every thing? And if so, are we then going to have to force that restriction on those things, to say you can only operate that way? With users, you can't do that. Users will do the darndest things; they're going to do things outside of the prescribed steps in a process."

But my conversation with Dunning led us to the conclusion that this very fact could work out to the advantage of IT security in the end, in this way: Machine learning may actually not be required in the case of microservices specifically, whose functions are limited and whose impact may be chaotic in scope, yet easily quantified in type. Simple metadata may be enough identifying information for an ML algorithm to know what to start looking for.
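A sketch of what that might look like -- with invented service types and action labels -- needs no learning phase at all: a metadata profile per service type, and a set difference to spot actions outside it:

```python
# Hypothetical sketch of the conclusion above: because a microservice's
# functions are limited, plain metadata checks may suffice -- no
# learning phase required -- to notice an instance acting outside its type.

EXPECTED_BY_TYPE = {          # metadata-derived profiles, assumed known
    "image-resizer": {"reads:object-store", "writes:object-store"},
    "mailer":        {"reads:queue", "writes:smtp"},
}

def check(service_type: str, observed_actions: set) -> set:
    """Return observed actions outside the service type's profile."""
    return observed_actions - EXPECTED_BY_TYPE.get(service_type, set())

unexpected = check("image-resizer",
                   {"reads:object-store", "writes:smtp"})
print(unexpected)             # {'writes:smtp'} -- worth an alert
```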

But then there's the problem of applying remediation in a system that is already chaotic -- as AWS' Adrian Cockcroft would assert, chaotic by design. And as with a physicist trying to cheat the Uncertainty Principle, simply monitoring a behavior could end up changing it into something else.

"I think it's inevitable," said Oracle VP for Identity and Cloud Rohit Gupta. "We're already seeing the beginnings of machine learning taking in much more critical form and function, if you will, in today's enterprises, particularly as it relates to anomaly detection. But I think if you talk about the projected scale, when it comes to hundreds of thousands, if not millions, of objects, workloads, microservices, that essentially are deployed as part of an organization's distributed fabric, the reality is that applying deterministic approaches where you are writing rules dynamically. . . that approach just doesn't scale."


Arrival

I chose old Fort Pitt as our Scale model, if you will, for this adventure into the depths of information technology because, like the software fortress, it's an example of a brilliant and beautiful concept that was actually several years behind its time. What made the fortress model viable for the brief time it was viable at all was the presumption that an enterprise's IT assets would always be possessions that required being locked up behind protective barriers. Yet the internet had become a critical business communications medium by the turn of the century, and Web services had already begun to prove themselves practical and useful for client/server applications. Still, the idea of "hardening the perimeter" was pitched to security professionals as their very purpose in life.

When enterprises first began outsourcing their applications to service providers, the early interfaces between servers and clients were blamed for the widespread attacks that followed. But a greater community of security engineers and developers could apply themselves to strengthening the controls and better business practices that eventually alleviated these issues -- much greater than any one organization could assemble on its own. As a result, the practice of security outside the old perimeter is stronger than it ever was inside.

Yet this progress has only taken us so far. The radical shift in the dynamics of computing platforms, which has catapulted us from the first generation of virtual machines straight to microservices, took place on a separate track from the evolution of IT security. Security is still wrestling with the integration between SaaS platforms such as Salesforce and Workday and enterprise identity servers. And here we've heaped on the shoulders of security engineers the weighty philosophical problem of managing identity policy in a highly dynamic realm of ephemeral (i.e., extremely temporary) services launched by an automated orchestrator.

It would be like forcing the British builders of Fort Pitt to contemplate how to resist an attack by amphibious vehicles bearing armored tanks. It's not an unsolvable issue. It just requires us to get wiser, faster.
