Best practices for securing microservices

With microservices, IT should abandon the old ways of securing applications and focus on the three guiding principles of traceability, visibility, and compartmentalization.
Written by Matt Asay, Contributor

The great promise of microservices is freedom. Freedom to break up an application into distinct services that are independently deployable. Freedom to build these disparate services with different teams using their preferred programming language, tooling, database, etc. In short, freedom for development teams to get stuff done with minimal bureaucracy.

Sounds great, right? A euphoric take on, "Developers of the world unite! You have nothing to lose but your monolithic app chains!"

Except for the security part. It's awesome that a microservices architecture breaks applications down into independent services, but this also means more complexity: more services to secure. If you have a database per service (MongoDB here! Amazon Aurora for MySQL there! Redis over there!), you still have to manage them all. There are technology strategies for all of this but, really, effective microservices security starts (and ends) with people.

Yep. Those jerks again.

Microservice security starts with people: Developers and security professionals

The people issue starts with jobs. Or, rather, the fact that different people have different jobs, with different priorities. A developer may be tasked with building an application as quickly as possible in order to catch up to or leapfrog a competitor. Meanwhile, the security and operations teams are just trying to keep that application from turning into a dumpster fire. In a world where developers build and everyone else is tasked with cleaning up after them, security is always going to be a struggle, whether we're talking about microservices or monolithic applications.

However, as developers take on more responsibility for operating their code (DevOps) and securing it (DevSecOps), it drives different behavior relative to "design, what kind of monitoring you have, what kind of tooling, how you interface with that system," according to Jason Chan, vice president of Information Security at Netflix, in his Future Stack 2015 presentation. As Chan spelled out, strong microservices security is a function of three guiding principles, all of which are ultimately tuned to making life easy for developers and security professionals: traceability in development, continuous security visibility, and compartmentalization.

Traceability

The best security happens without a lot of extra effort, which is why the first key to security is to build it into your continuous delivery workflow. Many companies try to manually catalog their service components with an application risk assessment, usually populated via survey questions sent to developers. Sound familiar? It should. This is how most companies have handled security for eons.

But it doesn't work. First, there's an entire science to answering these questionnaires that enables the developer to sidestep the questions without improving security. Even if they aren't trying to get past the security team, it's not always clear what the questions mean. In other words, these risk assessments may end up giving a "false sense of security," Chan pointed out, which is exactly the wrong approach to securing microservices (or anything else).

Instead, he continued, "[D]on't...create a bunch of new tools and new dashboards that people have to go to. [D]on't...interrupt their workflow." Rather, integrate with the workflows and tooling those developers are already using. Netflix, for example, has open sourced its continuous delivery (CD) tool, Spinnaker, to give developers and operations/security folks a holistic, common view of an application and its component parts, but there are plenty of other options (Ansible, Jenkins, etc.).

It's less important which tool you use, and more important how you use it.

It's within the CD workflow that you insert tests and restrictions on which regions to deploy to (and when). Doing so not only automates delivery of services, but also ensures auditability. A common technique for ensuring a new service won't introduce security (or other) problems is to launch a "canary workload" - a new production workload released to a limited set of users - and if it looks good (e.g., it doesn't trigger any of the automated security alerts), the CD tool automatically rolls it into full production.
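To make that concrete, here is a minimal sketch of a canary gate in Python, assuming a hypothetical CD pipeline and monitoring API; the function names, regions, and thresholds are placeholders for illustration, not Spinnaker's (or any vendor's) actual interface:

```python
# Illustrative canary gate inside a CD pipeline.
# All names and values here are invented placeholders for whatever
# your pipeline and monitoring stack actually expose.

import time

CANARY_TRAFFIC_PERCENT = 5      # small slice of real users
OBSERVATION_WINDOW_SECS = 900   # watch the canary for 15 minutes
ALLOWED_REGIONS = {"us-east-1", "eu-west-1"}  # deployment restriction


def deploy_canary(service: str, version: str, region: str) -> None:
    """Stand-in for the CD tool's deployment call."""
    if region not in ALLOWED_REGIONS:
        raise ValueError(f"{region} is not approved for {service}")
    print(f"Deploying {service}:{version} canary to {region} "
          f"at {CANARY_TRAFFIC_PERCENT}% traffic")


def security_alerts(service: str, version: str) -> int:
    """Stand-in: ask monitoring how many alerts the canary triggered."""
    return 0  # stubbed for the sketch


def promote_or_rollback(service: str, version: str, region: str) -> bool:
    deploy_canary(service, version, region)
    time.sleep(OBSERVATION_WINDOW_SECS)  # let real traffic exercise the canary
    if security_alerts(service, version) == 0:
        print(f"Promoting {service}:{version} to full production in {region}")
        return True
    print(f"Rolling back {service}:{version} in {region}")
    return False
```

The point isn't the specific code; it's that the security decision (promote or roll back) lives inside the same pipeline developers already use, so it happens on every deploy without anyone filling out a form.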

By working within the developers' chosen workflow, and automating as much as possible, security teams can keep the speed/agility benefits of microservices while still ensuring security checks are in place. Which leads to a related principle, continuous security visibility.

Continuous security visibility

A few years back, Michael Nygard said something that should resonate with anyone who has tried to secure microservices: "An individual microservice fits in your head, but the interrelationships among them exceeds any human's ability. Automate your awareness." If it were just one service, in other words, it would be fairly easy to keep track of changes to it and secure the application. But the whole promise of microservices is that, while each service may be self-contained, its value arises from its connections to other services.

No matter how big your organization, you're never going to have enough security resources to penetration test every application, or look at every line of code. Again, one of the cardinal virtues of microservices is that they accelerate change within and between applications. Going back to those developer surveys, companies have historically tried to build spreadsheets with a score attached to each application or service, but this has the problems mentioned above, as well as the difficulty (read: impossibility) of keeping it up-to-date. If you have hundreds (or even dozens) of applications that change all the time, it's unwise to rely on people volunteering information in a timely or accurate fashion.

You need automation.

By employing service discovery tools (Netflix has its Penguin Shortbread application, but there are many other options), you can gauge the riskiness of a particular service based on how many other services depend upon it (the more services that depend on it, the higher its risk score), whether it's exposed to external Internet traffic (edge services get a higher risk score), etc. This automated security risk assessment thus yields a stratification of risk, allowing relatively scarce security people to focus their attention on the microservices with the most exposure.
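As a rough illustration of that kind of scoring - the services, weights, and fields below are invented for the example, not Netflix's actual model - a dependency graph pulled from service discovery can be reduced to a ranked list:

```python
# Toy risk stratification over a service dependency graph.
# The weighting scheme is made up for illustration; real scores
# would come from your own discovery and asset data.

services = {
    # name: (services that depend on it, internet-facing?)
    "api-gateway":     ({"web", "mobile"}, True),
    "billing":         ({"api-gateway"}, False),
    "user-profile":    ({"api-gateway", "billing", "recommendations"}, False),
    "recommendations": (set(), False),
}

def risk_score(dependents: set, internet_facing: bool) -> int:
    score = len(dependents) * 10   # more dependents, bigger blast radius
    if internet_facing:
        score += 50                # edge exposure weighs heavily
    return score

ranked = sorted(
    ((name, risk_score(deps, edge)) for name, (deps, edge) in services.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, score in ranked:
    print(f"{name:16} risk={score}")
```

Because the graph comes from discovery rather than surveys, the ranking stays current as services are added, split, or retired.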

Which brings us to the final principle, compartmentalization.

Compartmentalization

This principle is critical for distributed systems, and the microservices that comprise them, for two reasons. The first is that given that all security is likely to fail at some point, you want to limit the blast radius of that failure. The second is about confidentiality: keeping data reserved only for those who need to know.

With a monolithic application, everything is lumped together. Developers struggle to iterate on monoliths without breaking things in the process, and every break point introduces new security issues. According to Chan, with monolithic applications "you have very sensitive systems that have a lot of inputs and outputs, [i.e.,] attack surface. [Applications with a large] attack surface [can] be secured, [but] it's more difficult to do it." It's also harder to verify that things are working as they should.

In a microservices context, however, security can be better, because the "blast radius" isn't the entire application: it's just the individual services. Many security-forward companies choose to further improve on this security model with a token vault, where sensitive data is stored. Tokens map to this sensitive data, and those tokens can be passed around the application without the data itself moving. The token service calls crypto infrastructure that sits between the sensitive data and the token, protecting the data. This approach allows you to implement much finer grained access control, Chan said, and it's much easier to trace what's going on.
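Here is a stripped-down sketch of that tokenization pattern in Python; it is not Netflix's token service, just the general shape of the idea, with the third-party 'cryptography' library standing in for whatever crypto infrastructure sits behind a real vault:

```python
# Minimal tokenization sketch: services pass opaque tokens around,
# and only the vault can resolve a token back to the sensitive value.

import secrets
from cryptography.fernet import Fernet


class TokenVault:
    def __init__(self) -> None:
        self._crypto = Fernet(Fernet.generate_key())  # stand-in crypto layer
        self._store: dict[str, bytes] = {}            # token -> ciphertext

    def tokenize(self, sensitive_value: str) -> str:
        token = secrets.token_urlsafe(16)             # opaque handle for the data
        self._store[token] = self._crypto.encrypt(sensitive_value.encode())
        return token

    def detokenize(self, token: str) -> str:
        # In a real system this call would be access-controlled and audited.
        return self._crypto.decrypt(self._store[token]).decode()


vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # e.g., a card number
# Other services pass 'token' around; only the vault resolves it.
print(token)
print(vault.detokenize(token))
```

Because the sensitive data never leaves the vault, access control and auditing can be enforced at a single, well-instrumented choke point rather than across every service.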

People-based security (driven by machines)

Guided by these three principles - traceability, visibility, and compartmentalization - security professionals can automate much of their security workload without burdening developers. Even better, as developers are increasingly asked to secure their own applications, they become willing partners in this effort. Which brings us back to people: if IT tries to secure microservices in the old way (spreadsheets, surveys, treating all applications/services more-or-less equally), the people partnership breaks down and security won't work. But through automation, people (and machines) partner to make microservices secure.

Disclosure: I work for AWS, but nothing herein relates directly or indirectly to my work there.
