Service mesh: What it is and why it matters so much now

It's an emerging class of service manager for distributed computing -- one that can work hand-in-hand with orchestrators such as Kubernetes. The service mesh would build a "Layer 7 network" exclusively for applications. And the changes it would make could reignite old debates about network architecture.
Written by Scott Fulton III, Contributor

A service mesh is an emerging architecture for dynamically linking the chunks of server-side applications -- most notably, microservices -- that collectively form an application. These can be components that were intentionally composed as parts of the same application, as well as components from different sources altogether that may benefit from sharing workloads with one another.

Real-world service meshes you can use now

Perhaps the oldest effort in this field -- one which, through its development, revealed the need for a service mesh in the first place -- is an open source project called Linkerd (pronounced "linker-dee"), now maintained by the Cloud Native Computing Foundation. Born as an offshoot of a Twitter project, Linkerd popularized the notion of devising a proxy for each service capable of communicating with similar proxies, over a purpose-built network. Its commercial steward, Buoyant, has recently merged a similar effort called Conduit into the project, to form Linkerd 2.0.
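To make that proxy-per-service idea concrete, here is a minimal sketch in Go of what one such sidecar could look like: it listens on a local port next to a single service, forwards that service's outbound HTTP calls to a destination in the mesh, and records basic per-call telemetry. The address, port, and service name here are hypothetical, and a real proxy such as Linkerd's or Envoy does far more (mutual TLS, retries, load balancing).

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"time"
)

func main() {
	// Hypothetical destination: another service's address inside the mesh.
	target, err := url.Parse("http://orders-service.mesh.local:8080")
	if err != nil {
		log.Fatal(err)
	}

	// A reverse proxy that forwards everything it receives to that destination.
	proxy := httputil.NewSingleHostReverseProxy(target)

	// Wrap the proxy so every request is timed and logged, the kind of
	// per-call telemetry a real sidecar exports to a monitoring system.
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		proxy.ServeHTTP(w, r)
		log.Printf("%s %s forwarded to %s in %v",
			r.Method, r.URL.Path, target.Host, time.Since(start))
	})

	// The application only ever talks to this local port; the proxy owns the network.
	log.Fatal(http.ListenAndServe("127.0.0.1:15001", handler))
}
```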

Meanwhile at ride-sharing service Lyft, an engineer named Matt Klein devised a method for building a network that represented existing code -- even when it was bound to a legacy "monolith" -- as microservices with APIs. This became Envoy, which now serves as the data plane for Istio, a framework produced jointly with IBM and Google.

Also: Open source SDN project could let network admins duplicate production environments TechRepublic


A portion of "Dancer in a Cafe" [1912] by Jean Metzinger, part of the Albright-Knox Art Gallery collection, in the public domain.

Historical precedent

When it's doing its job the way it was intended, a service mesh enables potentially thousands of microservices sharing a distributed data center platform to communicate with one another, and to participate together as parts of an application, even if they weren't originally constructed as components of that application.

Its counterpart in the server/client and Web applications world is something you may be familiar with: Middleware. After the turn of the century, components of Web applications were being processed asynchronously (not in time with one another), so they often needed some method of inter-process communication, if only for coordination. The enterprise service bus (ESB) was one type of middleware that could conduct these conversations under the hood, making it possible for the first time for many classes of server-side applications to be integrated with one another.

A microservices application is structured very differently from a classic server/client model. Although its components utilize APIs at their endpoints, one of the hallmarks of its behavior is the ability for services to replicate themselves throughout the system as necessary -- to scale out. Because the application structure is constantly changing, it becomes more difficult over time for an orchestrator like Kubernetes to pinpoint each service's location on a map. It can orchestrate a complex containerized application, but as scale rises linearly, the effort required rises exponentially.

Suddenly, servers really need a service mesh to serve as their communications hub, especially when a multitude of simultaneous instances (replicas) of a service are propagated throughout the system and a calling component only needs to contact one of them.
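In miniature, that hub's job is to resolve a service name to exactly one healthy replica at the moment of the call. The Go sketch below shows the idea: a registry of replica addresses, a health filter, and a round-robin pick. The names and addresses are hypothetical; a real mesh feeds this logic from live endpoint data supplied by the orchestrator.

```go
package main

import (
	"errors"
	"fmt"
	"sync/atomic"
)

// replica is one running instance of a service, as the mesh sees it.
type replica struct {
	addr    string
	healthy bool
}

// resolver picks one healthy replica per call, round-robin.
type resolver struct {
	replicas []replica
	next     uint64
}

func (r *resolver) pick() (string, error) {
	// Collect only the replicas that currently pass health checks.
	var live []string
	for _, rep := range r.replicas {
		if rep.healthy {
			live = append(live, rep.addr)
		}
	}
	if len(live) == 0 {
		return "", errors.New("no healthy replicas")
	}
	// Rotate through the healthy set so load spreads evenly.
	n := atomic.AddUint64(&r.next, 1)
	return live[int(n)%len(live)], nil
}

func main() {
	// Hypothetical replicas of a single "inventory" service.
	inventory := &resolver{replicas: []replica{
		{addr: "10.0.1.4:8080", healthy: true},
		{addr: "10.0.1.5:8080", healthy: false}, // failed its last health check
		{addr: "10.0.1.6:8080", healthy: true},
	}}

	for i := 0; i < 4; i++ {
		addr, err := inventory.pick()
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("routing call to", addr)
	}
}
```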

Also: How the Linkerd service mesh can help businesses TechRepublic

From unknown entity to vital necessity

Most modern applications, with fewer and fewer exceptions, are hosted in a data center or on a cloud platform, and communicate with you via the Internet. For decades, some portion of the server-side logic -- often large chunks -- has been provided by reusable code, through components called libraries. The C programming language pioneered the linking of common libraries; more recently, operating systems such as Microsoft Windows provided dynamic link libraries (DLLs), which are patched into applications at run time.

So obviously you've seen services at work, and they're nothing new in themselves. Yet there is something relatively new called microservices, which, as we've explained here in some depth, are code components designed not only to be patched into multiple applications on demand, but also to scale out. This is how an application supports multiple users simultaneously without replicating itself in its entirety -- or, even less efficiently, replicating the virtual server in which it may be installed, which is how load balancing worked during the first era of virtualization.

A service mesh is an effort to keep microservices in touch with one another, as well as the broader application, as all this scaling up and down is going on. It is the most liberal, spare-no-effort, pull-out-all-the-stops approach to enabling a microservices architecture for a server-side application, with the aim of guaranteeing connectivity, availability, and low latency.

Also: To be a microservice: How smaller parts of bigger applications could remake IT

Also: Why it's time to open source the service mesh TechRepublic

SDN for the very top layer

Think of a service mesh as software-defined networking (SDN) at the level of executable code. In an environment where all microservices are addressable by way of a network, a service mesh redefines the rules of the network. It takes the application's control plane -- its network of contact points, like its nerve center -- and reroutes its connections through a kind of dynamic traffic management complex. This hub is made up of several components that monitor the nature of traffic in the network, and adapt the connections in the control plane to best suit it.
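What "adapting the connections" can mean in practice is easier to see in a sketch. The Go snippet below imagines one small piece of such a control plane: it takes the latency each proxy reports for its endpoint and converts it into routing weights, shifting traffic away from slow replicas. The numbers and structure are invented for illustration; meshes such as Istio and Linkerd implement this far more elaborately.

```go
package main

import (
	"fmt"
	"time"
)

// endpointStats is what the proxies report back to the control plane.
type endpointStats struct {
	addr       string
	avgLatency time.Duration
}

// rebalance converts observed latency into routing weights: the slower an
// endpoint responds, the smaller its share of new connections.
func rebalance(stats []endpointStats) map[string]float64 {
	weights := make(map[string]float64)
	var total float64
	for _, s := range stats {
		// Inverse latency: a 10ms endpoint gets twice the score of a 20ms one.
		score := 1.0 / float64(s.avgLatency.Milliseconds())
		weights[s.addr] = score
		total += score
	}
	for addr := range weights {
		weights[addr] /= total // normalize so the weights sum to 1
	}
	return weights
}

func main() {
	// Hypothetical telemetry gathered from three replicas' sidecar proxies.
	observed := []endpointStats{
		{addr: "10.0.2.7:9080", avgLatency: 12 * time.Millisecond},
		{addr: "10.0.2.8:9080", avgLatency: 48 * time.Millisecond},
		{addr: "10.0.2.9:9080", avgLatency: 15 * time.Millisecond},
	}
	for addr, w := range rebalance(observed) {
		fmt.Printf("%s gets %.0f%% of new traffic\n", addr, w*100)
	}
}
```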

SDN separates the control plane from the data plane of a network, so that the control plane can be rebuilt as necessary. This brings components that need each other closer together, without impacting the data plane that carries the payload. In the case of network servers that address each other using Layers 3 and 4 of the OSI network model, SDN routes packets along simplified paths to increase efficiency and reduce latency.

Borrowing that same idea, a service mesh such as Istio produces a kind of network overlay for Layer 7 of OSI, decoupling the architecture of the service network from that of the infrastructure. This way, the underlying network can be changed with far fewer chances of impacting service operations and microservices connectivity.
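Because the mesh operates at Layer 7, its routing rules are written in terms of application concepts rather than addresses. The sketch below, again in Go and not in any mesh's actual configuration format, shows a routing table that matches on an HTTP path and a header and steers matching requests to a canary version of a service; nothing in it would change if the underlying network were rebuilt.

```go
package main

import (
	"fmt"
	"strings"
)

// route is one Layer 7 rule: it matches on application-level attributes
// (paths, headers) and names a destination; nothing here refers to IP
// subnets, VLANs, or anything else in the underlying infrastructure.
type route struct {
	pathPrefix  string
	headers     map[string]string // header values that must be present
	destination string
}

// routeRequest walks the rules in order and returns the first match.
func routeRequest(rules []route, path string, reqHeaders map[string]string) string {
	for _, r := range rules {
		if !strings.HasPrefix(path, r.pathPrefix) {
			continue
		}
		matched := true
		for k, v := range r.headers {
			if reqHeaders[k] != v {
				matched = false
				break
			}
		}
		if matched {
			return r.destination
		}
	}
	return "default-backend"
}

func main() {
	// Hypothetical rules: send beta users to a canary of the checkout
	// service, and everyone else to the stable version.
	rules := []route{
		{pathPrefix: "/checkout", headers: map[string]string{"x-beta-user": "true"}, destination: "checkout-v2"},
		{pathPrefix: "/checkout", destination: "checkout-v1"},
	}

	fmt.Println(routeRequest(rules, "/checkout/cart", map[string]string{"x-beta-user": "true"})) // checkout-v2
	fmt.Println(routeRequest(rules, "/checkout/cart", nil))                                      // checkout-v1
}
```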

Also: What is SDN? How software-defined networking changed everything

[Photo by Scott Fulton]

"As soon as you install it, the beauty of Istio and all its components," remarked Bahubali Shetti, director of public cloud solutions for VMware during a recent public demonstration, "is that it automatically loads up components around monitoring and logging for you. So you don't have to load up Prometheus or Jaeger [respectively]; it comes with them already. And it gives you a couple of additional visibility tools.

"This is a service-to-service intercommunications mechanism," Shetti continued. "You can have services on GKE [Google Kubernetes Engine], PKS [Pivotal Kubernetes Service] and VKE [VMware Kubernetes Engine], all interconnected and running. It helps manage all of that."

Complementing, not overlapping, Kubernetes

Now, if you're thinking, "Isn't network management at the application layer the job of the orchestrator (Kubernetes)?" then think of it like this: Kubernetes doesn't really want to manage the network. It has a very plain, unfettered view of the application space as multiple clusters for hosting pods, and would prefer things stay that way, whether it's running on-premises, in a hybrid cloud, or on a "cloud-native" service platform such as Azure AKS or Pivotal PKS. When a service mesh is employed, it takes care of all the complexity of connections on the back end, ensuring that the orchestrator can concentrate on the application rather than its infrastructure.

Also: What Kubernetes really is, and how orchestration redefines the data center

Key benefits

The very sudden rise of the service mesh, and particularly of the Istio framework, is important for the following reasons:

  • It helps standardize the profile of microservices-based applications. The behavior of a highly distributed application can be very dependent on the network that supports it. When those behaviors differ drastically, it can be a challenge for a configuration management system to maintain availability for an application on one network when that same application faces far fewer challenges on another. A service mesh does all the folding, spindling, and mutilating -- it makes a unique data center look plainer and more unencumbered to the orchestrator.
  • It opens up greater opportunities for monitoring, and then potentially improving, the behavior of distributed applications. A good service mesh is designed to place highly requested components in a location on the application control plane where they can be most easily accessible -- not unlike a very versatile "speed dial." So it's already looking for components that fail health checks or that utilize resources less efficiently. This data can be charted and shared, revealing behavioral traits that developers can take note of when they're improving their builds with each new iteration.
  • It creates the potential for a new type of dynamic, policy-based security mechanism. As we explored last December in ZDNet Scale, microservices pose a unique challenge in that each one may have a very brief lifespan, making the assignment of an unimpeachable identity to each almost pointless. A service mesh has an awareness of microservice instances that transcends identity -- its job is to know what's running and where. It can enforce policies on microservices based on their type and their behavior, without resorting to the rigmarole of assigning them unique identities (a rough sketch of the idea follows this list).
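That last point is the least intuitive, so here is a rough sketch, in Go, of what policy by attribute rather than policy by identity can look like: a rule that says only one kind of caller, over a mutually authenticated connection, may reach a given service. The service names, labels, and rule structure are invented for illustration; Istio and its peers express the real thing declaratively through their own policy resources.

```go
package main

import "fmt"

// workload is what the mesh knows about a running instance: not a fixed
// identity, but attributes -- what it is, where it runs, how it connects.
type workload struct {
	service   string
	namespace string
	mtls      bool // is the connection mutually authenticated by the mesh?
}

// policy allows a call when the caller's attributes satisfy the rule.
type policy struct {
	targetService  string
	allowedCallers map[string]bool // caller services permitted to connect
	requireMTLS    bool
}

func (p policy) allows(caller, target workload) bool {
	if target.service != p.targetService {
		return true // this policy doesn't apply to the target
	}
	if p.requireMTLS && !caller.mtls {
		return false
	}
	return p.allowedCallers[caller.service]
}

func main() {
	// Hypothetical rule: only the checkout service may call payments,
	// and only over a mutually authenticated connection.
	payments := policy{
		targetService:  "payments",
		allowedCallers: map[string]bool{"checkout": true},
		requireMTLS:    true,
	}

	checkout := workload{service: "checkout", namespace: "shop", mtls: true}
	reports := workload{service: "reports", namespace: "analytics", mtls: true}
	target := workload{service: "payments", namespace: "shop", mtls: true}

	fmt.Println(payments.allows(checkout, target)) // true
	fmt.Println(payments.allows(reports, target))  // false
}
```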

Previous and related coverage:

Microservices and containers in service meshes mean less chaos, more agility

For enterprises, it's full speed ahead with microservices. This may speed up the development of chaos-proof service meshes.

To be a microservice: How smaller parts of bigger applications could remake IT

If your organization could deploy its applications in the cloud the way Netflix does, could it reap the same kinds of benefits that Netflix does? Perhaps, but its business model and maybe even its philosophy might have to be completely reformed -- not unlike jumping the chasm from movies-by-mail to streaming content.

Micro-fortresses everywhere: The cloud security model and the software-defined perimeter

A months-old security firm has become the braintrust of engineers working to build the Software-Defined Perimeter -- a mechanism for enforcing firewall and access rules on a per-user level. How would SDP remake the ancient plan of the software fortress?

More from ZDNet Scale:

https://www.zdnet.com/article/purpose-built-5g-and-the-machines-that-would-move-the-edge/
