In cloud-native environments, where virtualization is implemented via containers and apps are triggered via microservices, where is the optimal level to enforce security? A few weeks back, ZDNet colleague Steven J. Vaughan-Nichols reported on a provider that delivers it through an agent at the OS level.
API gateways have traditionally been centralized points for managing connections at the application layer. They manage traffic between clients and microservices, often referred to as north-south traffic because it typically originates from clients outside the data center.
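To make the north-south case concrete, here is a rough sketch of how an Istio ingress gateway (itself built on Envoy, both of which come up later in this piece) exposes an internal microservice to outside clients; the hostnames, service names, and ports are hypothetical:

```yaml
# Hypothetical Istio configuration exposing an "orders" microservice
# to clients outside the cluster (north-south traffic).
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: orders-gateway
spec:
  selector:
    istio: ingressgateway    # bind to Istio's Envoy-based edge proxy
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: orders-cert    # TLS terminated at the edge
    hosts:
    - "orders.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders-routes
spec:
  hosts:
  - "orders.example.com"
  gateways:
  - orders-gateway
  http:
  - route:
    - destination:
        host: orders         # in-cluster Kubernetes service
        port:
          number: 8080
```

The gateway handles the edge concerns (TLS, exposed hosts and ports), while routing to the backing service is declared separately, which is what lets the gateway act as a single centralized control point for client traffic.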
By contrast, service meshes, with their reliance on proxies, have found traction in distributed environments for managing connections at the app and transport layers. They typically handle what is known as east-west traffic within a Kubernetes cluster or data center. Properly designed, each proxy encodes rules on which microservices can talk to each other.
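Such an east-west rule might look like the following Istio AuthorizationPolicy, a sketch in which the namespace, workloads, and paths are invented for illustration; the sidecar proxy in front of each workload enforces it:

```yaml
# Hypothetical policy: only the "orders" service account may call
# the "payments" workload (east-west traffic inside the mesh).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: payments-allow-orders
  namespace: shop
spec:
  selector:
    matchLabels:
      app: payments          # enforced by the payments sidecar proxy
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/shop/sa/orders"]
    to:
    - operation:
        methods: ["POST"]
        paths: ["/charge"]
```

Note that this policy governs a single workload in a single cluster; writing and maintaining one of these per proxy is exactly the scaling problem discussed below.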
In some cases, there has been a commingling of the two, where service meshes delegate some app connection management to API gateways that act as a sublayer of connectivity. And yes, e-books have been published on the topic.
The founders of Solo.io saw a looming issue as service meshes began gaining traction, thanks to Kubernetes platforms such as Anthos and OpenShift. The challenge they saw coming was one of scalability of management: policies or rules for each proxy had to be developed one by one. There was no way to manage across multiple proxies. For governance or security, these platforms typically require external gateways or OS-level alternatives for enforcing policies, with the same being true for the respective K8s services on each of the major clouds.
The labor-intensiveness of such efforts could undercut what is supposed to be a prime advantage of cloud-native deployment: a simplified control plane, built on de facto standards such as K8s, that makes scaling cloud-native clusters up or down second nature.
Solo.io initially developed a product based on the Envoy proxy that acts as an API gateway within an Istio service mesh for managing traffic between clients and Kubernetes clusters. It has recently expanded with a more encompassing enterprise product that can manage multiple service meshes across one or more Kubernetes clusters. The notion is that, while enforcement of policies may be distributed across multiple proxies, it can be managed centrally and consistently without having to code each proxy individually.
Gloo Mesh Enterprise adds to the control plane for Istio by enhancing observability, for monitoring and troubleshooting behaviors over time; integrating external certificate providers with existing PKI infrastructure; and supporting discovery of the ingress pathways for each managed service mesh.
There is some activity in the open source world for tackling this issue. The Network Service Mesh project is intended to provide common APIs for addressing connectivity, security, and observability. It would allow individual K8s pods to network securely across clusters or clouds, but the project is still at sandbox status with the Cloud Native Computing Foundation.
Our take on the emergence of K8s-based cloud platforms is that they are built on standards, meaning organizations do not have to reinvent the wheel when it comes to building in the dynamic glue that allows clusters to scale up and down. But we also believe that for most organizations, which lack the sophisticated resources to build their own private clouds, K8s development should not be done at home. The good news is that startups like Solo.io are beginning to address some of the management gaps.