Internet inside out: Kubernetes becomes the service delivery engine of the data center

The Internet has always relied upon DNS as its “phone book” to locate Web sites, corporate domains, and Web services. In a world dominated by microservices, all that might actually have to change.
Written by Scott Fulton III, Contributor

When a smartphone manufacturer reveals a new model to a captivated audience, what it's trying to do is leverage the tools of fashion marketing to make subtle, and often semi-relevant, changes to its product line exciting and motivating. Look, the manufacturer beckons, we changed the way our corners are rounded, we relocated the button you don't want to a place you won't notice it, and we removed the one button you do want because, hey, it's fashion!

Smartphones succeed or fail not because of the placement of their buttons or the smoothness of their corners, but as a result of how their operating platforms deliver the services their users want. Windows Phone failed not because it was a bad phone (it wasn't), but because it could not deliver the services users wanted, in the way they wanted them.

Service delivery is the make-or-break issue in the technology business. If your service fails to be both innovative and efficient — a pairing that's much harder to achieve than it seems — it will fail in the market. Every successful technology product was built on a successful technology platform. The product that fails is the one whose platform was left behind when the service delivery model moved beyond it. Just ask BlackBerry.

Kubernetes is a service delivery engine. It takes a workload that produces a service engineered for people to use simply and methodically, and distributes it to the locations where that workload may be used most efficiently. And now, just as importantly, Kubernetes supports methods that make those workloads more discoverable, both to the people whose applications are looking for them and to other workloads that may cooperate with them. The way these methods are being implemented, DNS — the system that resolves names to addresses on the Internet — may be rendered redundant, or even unnecessary.


The new definition of network automation

Brendan Burns, distinguished engineer at Microsoft and Kubernetes' co-creator, believes that developers of software and services will now begin paying attention to an ideal: the services everyone builds will need to play nicely with one another.

Microsoft Distinguished Engineer Brendan Burns.

"I think a lot of what people are going to start automating is the ways in which services work together," Burns told ZDNet.  "Even just mundane things like access control — if you think about how you authenticate one service to another service, there's a lot of very mechanistic stuff today in order to make that work, be it issuing and rolling certificates, or using an identity system. Those things are conceptually very easy. You say, 'I want to have a new user named Scott, and I want him to be able to call this service.'  Actually putting that into an operable, managed system is not simple.

"That's an example of the kind of stuff that's required," he continued, "but doesn't get done because it's too hard. Developers say, 'Well, they're all my systems, and we're all friends, so we're going to have one token and I'm not going to differentiate.'  And then somebody spins up a development mode test, they send it to production, and they take down production because they doubled the traffic on the production endpoint. Whereas if they'd had access controls to differentiate between production traffic and developer traffic, they could very easily shunt off that developer traffic."

Burns' example points to a problem with most network automation today, especially with first-generation virtualization platforms. Software developers need the means to test the efficacy of their services before making them available to general customers ("sending them to production"). Most organizations don't have the resources to give developers their own fully isolated, scale-model networks with which to test their works in progress. So test traffic has to share the same network with production traffic.
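To make Burns' scenario concrete, here is a minimal sketch in Python of traffic differentiation by identity token. Everything in it (the token names, tiers, and endpoint addresses) is invented for illustration; a real system would draw identities from an identity provider and rotate credentials.

```python
# Hypothetical sketch: shunting developer traffic away from production
# endpoints by inspecting an identity token attached to each request.
# The tokens, tiers, and addresses below are invented for illustration.

PRODUCTION_POOL = ["10.0.1.5:8080", "10.0.1.6:8080"]
STAGING_POOL = ["10.0.9.2:8080"]

# In a real deployment this mapping would come from an identity system,
# not a hard-coded dictionary.
TOKEN_TIERS = {
    "token-prod-scott": "production",
    "token-dev-scott": "developer",
}

def route_request(token: str) -> str:
    """Return a backend address based on the caller's identity tier."""
    tier = TOKEN_TIERS.get(token)
    if tier is None:
        raise PermissionError("unknown identity; request refused")
    # Developer traffic never reaches the production pool, so a runaway
    # load test cannot double the load on production endpoints.
    pool = PRODUCTION_POOL if tier == "production" else STAGING_POOL
    return pool[hash(token) % len(pool)]

print(route_request("token-dev-scott"))   # -> 10.0.9.2:8080
print(route_request("token-prod-scott"))  # -> a production address
```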

VMware has tried to implement a way to segregate network traffic by workload class, using a methodology it introduced called microsegmentation. Think of it as a system of software-based firewalls on the server side, applying access control policies and behavior management rules to specifically identified services. Firewalls enforce behavioral policies on communications systems that may not have "good behavior," however that may be defined, built in — but they typically do so after the fact, once the services they marshal have already been deployed.
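Microsegmentation is easier to picture as a policy table than as hardware. Below is a toy sketch, assuming a made-up rule format (this is not VMware's actual policy syntax): each identified service declares which peers may call it, and the check is enforced server-side before traffic arrives.

```python
# Toy model of microsegmentation: software-enforced, per-service access
# rules applied on the server side. The policy table is hypothetical.

ALLOW_RULES = {
    # destination service: set of source services permitted to call it
    "orders":    {"storefront", "billing"},
    "billing":   {"orders"},
    "inventory": {"orders", "storefront"},
}

def permit(source: str, destination: str) -> bool:
    """Enforce the segment policy before traffic reaches the service."""
    return source in ALLOW_RULES.get(destination, set())

assert permit("storefront", "orders")
assert not permit("storefront", "billing")  # blocked: outside billing's segment
```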

The more evolved system Burns envisions is one where rules of a sort can specify how the orchestrator should respond to these requests, fulfilling a role not unlike VMware's microsegmentation. He points to Microsoft's Azure Functions mechanism as a way of developing orchestrated responses to certain events, such as an increase in the size of an online storage bucket, or an incoming request for data. But he envisions less code, not more. The result would be an orchestration platform capable of moving a service, even while it's running, to a part of the platform suited to the quantity and priority of the work it's performing.
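Azure Functions has its own programming model, so the following is only a generic sketch of the event-to-rule pattern Burns describes, with invented event names and payloads:

```python
# Generic sketch of rule-driven orchestration: handlers registered against
# named events, in the spirit of (but not the actual API of) Azure Functions.

from typing import Callable

HANDLERS: dict[str, list[Callable[[dict], None]]] = {}

def on(event: str):
    """Register a handler to run whenever the named event fires."""
    def register(fn: Callable[[dict], None]):
        HANDLERS.setdefault(event, []).append(fn)
        return fn
    return register

@on("storage.bucket.grew")  # hypothetical event name
def rebalance(payload: dict) -> None:
    # A real orchestrator might migrate the workload to roomier nodes here.
    print(f"bucket {payload['bucket']} hit {payload['size_gb']} GB; rebalancing")

def emit(event: str, payload: dict) -> None:
    for handler in HANDLERS.get(event, []):
        handler(payload)

emit("storage.bucket.grew", {"bucket": "media-assets", "size_gb": 512})
```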

The culmination of Burns' ideal system includes this concept of the service mesh. If you're familiar with the idea of software-defined networking (SDN), you know that inside a data center, addresses can be applied to services and other workloads, not just servers and hardware. This is perhaps the catalyst for the entire containerization movement: the fact that a workload has its own address.

When the Internet first became the backbone of a commercial market, servers were given domains, and those domains were mapped to IP addresses. Those domain names typically identified the corporate owners of the servers, and subdomains identified the departments in charge of those servers. So addresses reflected the budgets of their corporate owners, not the work they did.

Until recently, the destination point for a request from a service over an enterprise network happened to be the address of the virtual machine (VM) where that service was hosted. Containerization changed that relationship. In enterprises where Kubernetes oversees this level of infrastructure, the orchestrator can direct that request toward the service itself. There may actually be many copies of that service running simultaneously, so this re-routing process now incorporates what older architectures still call load balancing.
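A minimal sketch of that re-routing step, with hypothetical service names and replica addresses: the caller asks for a service by name, and the resolver answers with whichever live replica is next in rotation.

```python
# Minimal round-robin load balancing over a service's live replicas.
# The service name and replica addresses are invented.

from itertools import cycle

REPLICAS = {
    "checkout": cycle(["10.244.1.7:8080", "10.244.2.3:8080", "10.244.3.9:8080"]),
}

def resolve(service: str) -> str:
    """Return the next replica for a service, not a fixed VM address."""
    return next(REPLICAS[service])

for _ in range(4):
    print(resolve("checkout"))  # cycles through the three replicas
```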

What could replace DNS

The Domain Name System (DNS) of the Internet translates domain names — the names for the owners of network space — into the numeric addresses to which data packets are routed. Enterprises that conduct business and commerce online use these addresses as gateways: transfer points between the outside Internet and the inside of the data center. There, machines still have IP addresses, but they follow a different logic than the system that supports the Web. In fact, many enterprise networks use overlays, which map one set of addresses onto another. The overlay map can be changed pretty much as necessary, enabling a system where a service may be reliably called at one address, with the request relayed to wherever the workload happens to be today. This is one of the methods required to enable workloads to be relocated from one server to another, physical or virtual.
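A toy model of an overlay, with invented addresses: callers dial a stable logical address, and the map relays the request to wherever the workload lives today.

```python
# Toy overlay network: callers use a stable logical address, and the
# overlay map relays each request to the workload's current location.

overlay = {
    "10.96.0.12": "192.168.4.21",  # logical -> current physical address
}

def relay(logical: str) -> str:
    return overlay[logical]

print(relay("10.96.0.12"))      # -> 192.168.4.21

# When the workload migrates to another host, only the map changes;
# callers keep dialing the same logical address.
overlay["10.96.0.12"] = "192.168.7.5"
print(relay("10.96.0.12"))      # -> 192.168.7.5
```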

Using DNS to resolve which function belongs to which domain has always been a performance bottleneck. Containerization takes the first step in breaking that bottleneck. Service mesh takes a giant leap further. Because microservices are both highly portable and highly volatile, a service mesh employs active agents to locate where workloads have moved. Think of how the wireless telephone network must use logic to resolve where a customer's device is located — logic the wireline network could never have employed — and you'll get the basic idea.

Here's where the revolution begins to do real damage to the old system. The way services on the Internet have traditionally worked required a sophisticated method of location called service discovery. (I'd compare it to a kind of telephone directory that had pages that were yellow, but I can't just say "yellow pages" without potentially getting into a trademark dispute.)  It was a way of leveraging DNS to resolve the issue of which IP address represents what service. In 2015, when containerization first caught fire and before Kubernetes rose to prominence, it seemed service discovery could be containerization's ultimate, unresolvable bottleneck, the point where connecting the new world to the old world would prove impractical or even impossible.
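To see how DNS got pressed into service-discovery duty, consider in-cluster names of the form service.namespace.svc.cluster.local, which is Kubernetes' own naming convention. The sketch below uses Python's standard resolver; the lookup only succeeds inside a cluster whose DNS actually serves such names.

```python
# Conventional DNS-based service discovery: encode the service's identity
# in a hostname and ask the resolver which IP currently answers for it.

import socket

def discover(service: str, namespace: str = "default") -> str:
    name = f"{service}.{namespace}.svc.cluster.local"
    return socket.gethostbyname(name)  # one DNS round trip per lookup

# discover("checkout")  # e.g. "10.96.0.12" when run inside such a cluster
```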

As happens surprisingly frequently in the history of technology, service mesh architecture was created by a handful of different engineers simultaneously. At its outset, service mesh was a way for services distributed within a network to find each other and make use of one another, especially so applications that essentially use the same library functions wouldn't have to maintain duplicates of the same code. When a function inside a container has a dependency linking it to library code, that code need not be contained within the same unit at the same address — the service mesh can resolve dependencies such as this in real time. With Istio and other service mesh platforms, each service's identity and access policies are maintained in an exclusive service registry, which is used instead of the conventional DNS lookup function. This way, in a perfectly meshed data center, all functions can be interoperable with one another. And if each service can find a way of declaring its own purpose, the service discovery problem could be solved — at least within the enterprise network's boundaries.
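Istio's registry is far more elaborate than this, but a minimal sketch of the idea (all names invented) shows what changes: resolution happens by service identity plus policy, not by a DNS lookup.

```python
# Minimal in-mesh service registry: identity, endpoints, and access policy
# live together, replacing the conventional DNS lookup. All names invented.

REGISTRY = {
    "render-pdf": {
        "endpoints": ["10.244.5.2:9000", "10.244.6.8:9000"],
        "allowed_callers": {"reports", "invoicing"},
    },
}

def mesh_resolve(caller: str, service: str) -> str:
    entry = REGISTRY[service]
    if caller not in entry["allowed_callers"]:
        raise PermissionError(f"{caller} may not call {service}")
    # Dependencies are resolved in real time, wherever the code now runs.
    return entry["endpoints"][0]

print(mesh_resolve("reports", "render-pdf"))  # -> 10.244.5.2:9000
```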

Avi Networks service mesh architecture. (Image: Avi Networks)

Originally, the service mesh's purpose was to help workloads inside a network make contact with one another. But communications networks throughout history rarely stay bottled up for long. Late last year, SDN tools provider Avi Networks began promoting the idea of leveraging its existing service platform, called Vantage, as a mechanism for extending service meshes such as Istio beyond customer premises and into multiple public cloud spaces. This architecture could enable cross-platform service discovery, which would arguably preclude the need for DNS — one of the defining services of the Internet — in many cases if not all.

If you recall the name "Avi Networks," you're a regular ZDNet reader. VMware acquired Avi last June, and announced the following August it had already integrated a good chunk of Avi's engineering into its NSX network virtualization platform.

Like all technologies born from software-defined networking (SDN), a service mesh has a control plane kept separate from the data plane. This way, the controlling functions of the mesh are bound tightly together, giving applications their own address space and their own traffic flow. Think of service mesh as the evolved form of a network overlay: a system where the routes are developed organically, and the policies for using those routes are determined and enforced along the way.
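In code-shaped terms, and as a sketch rather than any particular mesh's API, the split looks like this: the control plane computes routes centrally, then pushes snapshots down to data-plane proxies that merely forward.

```python
# Sketch of the control-plane / data-plane split. The control plane owns
# decisions; data-plane proxies only apply the tables pushed to them.

class ControlPlane:
    def __init__(self):
        self.routes: dict[str, str] = {}

    def set_route(self, service: str, endpoint: str) -> None:
        self.routes[service] = endpoint

    def push(self, proxy: "DataPlaneProxy") -> None:
        proxy.table = dict(self.routes)  # snapshot, not a live reference

class DataPlaneProxy:
    def __init__(self):
        self.table: dict[str, str] = {}

    def forward(self, service: str) -> str:
        return self.table[service]  # no decision-making here

cp = ControlPlane()
cp.set_route("search", "10.244.2.14:8080")  # invented route
proxy = DataPlaneProxy()
cp.push(proxy)
print(proxy.forward("search"))
```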

The Service Mesh Interface

VMware's work follows up on innovations completed just weeks earlier at Microsoft. Last May, Microsoft's Burns, along with colleague Gabe Monroy, introduced to the community a concept called the Service Mesh Interface (SMI): a way for different mesh platforms built around Kubernetes (there are quite a few) to connect with one another and share accessibility.

Monroy's explanation at that time speaks to the tremendous implications for the evolution not only of data center networks, but network security: 

"Today with the explosion of micro-services, containers, and orchestration systems like Kubernetes, engineering teams are faced with securing, managing, and monitoring an increasing number of network endpoints," he wrote.  "Service mesh technology provides a solution to this problem by making the network smarter, much smarter. Instead of teaching all your services to encrypt sessions, authorize clients, emit reasonable telemetry, and seamlessly shift traffic between application versions, service mesh technology pushes this logic into the network, controlled by a separate set of management APIs."

It would turn the Internet inside out, at least insofar as its job as a provider of services is concerned.
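One item on Monroy's list, shifting traffic between application versions, reduces to weighted routing in the proxy layer. A sketch, with invented version labels and weights:

```python
# Weighted traffic shifting between two application versions, the kind of
# logic a mesh pushes into its proxies. Versions and weights are invented.

import random

WEIGHTS = {"checkout-v1": 0.9, "checkout-v2": 0.1}  # 10% canary

def pick_version() -> str:
    versions = list(WEIGHTS)
    return random.choices(versions, weights=[WEIGHTS[v] for v in versions])[0]

sample = [pick_version() for _ in range(1000)]
print(sample.count("checkout-v2"))  # roughly 100 of 1,000 requests
```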

Brendan Burns explained it this way: "The Service Mesh Interface is really more about interoperability and building an ecosystem than anything else. There's two different personas in any ecosystem: tool vendors or utility vendors, and end users. In both cases, having an abstraction between those two makes sense. You see this all over computing: We have standards so that multiple vendors can sell the same thing, and they work with the user. A good example would be USB. Every single person who makes a Bluetooth headset or keyboard can build a USB connector, and know it will work for the user. For service mesh, that's really important, because if you're building a tool that, say, knows how to do canary releases, if you have to tightly bind it to a specific service mesh implementation, then you're limiting your available customers to only those people using that service mesh. If you write a really great tool, but it only works with Linkerd, then everybody who uses Istio can't use your tool, even if they love it.

"If I'm a user, especially in a new technology world, and buying something new, it's scary if I'm wedding myself deeply to the implementation," continued Burns.

So in the near term, SMI will enable service mesh implementations to be interchangeable, making services independent of their implementations. In the longer term, it could pave a route for a universal service mesh concept to bridge the gaps between these implementations, producing a kind of network of networks... which is coincidentally the image Vint Cerf had in his mind when he first tried to explain his idea of Internet Protocol.
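The shape of the SMI idea can be sketched as an abstract interface; to be clear, SMI's actual specification is defined as Kubernetes resources, not Python classes, and the adapters below are hypothetical. A tool written once against the abstraction works with any conforming mesh.

```python
# Sketch of the SMI idea: tools program against an abstract mesh interface,
# so any conforming implementation can sit behind it. Not SMI's actual API.

from abc import ABC, abstractmethod

class ServiceMesh(ABC):
    @abstractmethod
    def split_traffic(self, service: str, weights: dict[str, float]) -> None: ...

class IstioBackend(ServiceMesh):      # hypothetical adapter
    def split_traffic(self, service, weights):
        print(f"[istio] splitting {service}: {weights}")

class LinkerdBackend(ServiceMesh):    # hypothetical adapter
    def split_traffic(self, service, weights):
        print(f"[linkerd] splitting {service}: {weights}")

def canary_release(mesh: ServiceMesh, service: str) -> None:
    """A tool written once against the interface works with either mesh."""
    mesh.split_traffic(service, {f"{service}-v1": 0.9, f"{service}-v2": 0.1})

canary_release(IstioBackend(), "checkout")
canary_release(LinkerdBackend(), "checkout")
```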

With this, a big chunk of the 20th century Web could find itself escorted out the back door.
