Serverless computing promises to free developers and operations teams alike from the shackles of underlying hardware, systems and protocols. The good news is that the move to a serverless architecture can often be made quickly and relatively painlessly. However, IT managers still need to pay close attention to the same components of the stack on which their current applications are built and run.
How is a serverless architecture like previous, more traditional technology architectures, and how does it differ? Despite the name, a serverless architecture is not entirely devoid of servers: rather, it's a cloud-based environment often referred to as either Backend-as-a-Service (BaaS), in which underlying capabilities are delivered by third parties, or Function-as-a-Service (FaaS), in which capabilities are spun up on demand on a short-lived basis.
In a FaaS environment, "you just need to upload your application codes to the environment provided, and the service will take care of deploying and running the applications for you," says Alex Ough, CTO architect for Sungard Availability Services.
A serverless architecture "still requires servers," says Miguel Veliz, systems engineer at Schellman and Company. "The main difference between traditional IT architecture and serverless architecture," he adds, "is that the person using the architecture does not own the physical or cloud servers, so they don't pay for unused resources. Instead, customers will load their code into a platform, and the provider will take care of executing the code or function, only charging the customer for execution time and resources needed to run."
Or, as Chip Childers, CTO of Cloud Foundry, prefers to define serverless, "computing resources that do not require any configuration of operating systems by the user."
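The FaaS model the experts describe can be made concrete with a minimal sketch. The function below follows the `handler(event, context)` convention used by AWS Lambda's Python runtime; the event fields and response shape are illustrative assumptions, and other providers use different signatures.

```python
# A minimal function in the style of AWS Lambda's Python handler.
# The provider invokes handler(event, context) on demand; the author
# manages no server process, operating system, or scaling logic.
import json

def handler(event, context):
    """Echo a greeting; all input arrives in `event`, nothing persists."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Uploading something like this is the entire deployment step Ough describes: the platform handles provisioning, routing and teardown around each invocation.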
So, with everything managed or spun up through third parties, there isn't as much of a need to worry about annoying details such as storage, processing and security, right? Not quite. These all remain factors in the migration from traditional development and operations settings to cloud-based serverless environments. Here are some further considerations you'll need to weigh up when developing a serverless architecture:
Weighing the business case

Before anything else is initiated in a serverless architecture development process, the business case needs to be weighed to justify the move. The economics of serverless may be extremely compelling, but they still need to be evaluated in light of architectural investments already made, and of how the move will serve pressing business requirements. "Serverless adoption must be a business and economic decision," says Dr. Ratinder Ahuja, CEO at ShieldX. "The presumption is that over time and across functions, paying for a slice of computing for the short period of time that a piece of logic executes is more economical than a full stack virtual machine or container that stays online for a long time. This approach should be validated before organizations embark on a serverless journey."
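Ahuja's presumption is easy to pressure-test with back-of-the-envelope arithmetic. The sketch below compares an always-on VM against per-invocation FaaS billing; all rates, call volumes and durations are illustrative assumptions, not vendor quotes.

```python
# Back-of-the-envelope comparison: always-on VM vs pay-per-invocation FaaS.
# All prices are illustrative assumptions, not vendor quotes.

def vm_monthly_cost(hourly_rate: float, hours: float = 730.0) -> float:
    """An always-on VM bills for every hour, busy or idle."""
    return hourly_rate * hours

def faas_monthly_cost(invocations: int, ms_per_call: float,
                      price_per_gb_second: float, memory_gb: float) -> float:
    """FaaS bills only for the execution time actually consumed."""
    gb_seconds = invocations * (ms_per_call / 1000.0) * memory_gb
    return gb_seconds * price_per_gb_second

# Example: one million 200 ms calls at 0.5 GB vs a small always-on VM.
vm = vm_monthly_cost(hourly_rate=0.05)                        # ~$36.50/month
faas = faas_monthly_cost(1_000_000, 200, 0.0000166667, 0.5)   # ~$1.67/month
```

The crossover point depends entirely on traffic shape: steady, heavy load can make the always-on instance cheaper, which is exactly the validation Ahuja recommends doing up front.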
Migration - and blending
As serverless computing is inherently a cloud-based phenomenon, the best place to start is looking at what cloud providers have to offer. "If lock-in is not a concern, and you want to start quickly, a fully managed solution like the ones provided by the major cloud providers is one way to start," says William Marktio Olivera, senior principal product manager for Red Hat.
However, as a serverless architecture expands from there, Olivera recommends additional approaches, such as container technology, to ensure the seamless movement of code and applications between environments. "As soon as you start considering running your application on more than one cloud provider, or you might have a mix of workloads running on-premises and on a hybrid cloud, Kubernetes becomes a natural candidate for infrastructure abstraction and workload scheduling that's consistent across any cloud provider or on premises," he says. "If you already have Kubernetes as part of your infrastructure, it makes even more sense to simply deploy a serverless solution on top of it and leverage the operational expertise. For those cases, Knative is an emerging viable option that has the backing of multiple companies sharing the same building blocks for serverless on Kubernetes, making sure you have consistency and workload portability."
Serverless functions run in containers, and "these containers appear ephemeral and invisible to the application designer," says Scott Davis, VP of software development at Limelight Networks and former CTO at VMware. "Under the covers there is a pool of reusable containers managed by the infrastructure provider and used on demand to execute a serverless function. When a function completes, the host container is reset to a pristine state and readied for its next assignment. Since serverless functions only live for a single API call, any persistent state must be stored externally for subsequent functions that need it."
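Davis's point about externalized state can be sketched in a few lines. Here a dictionary stands in for a hypothetical external key-value store reachable over the network; the function itself keeps nothing between invocations, so every call must read its prior state back in and write the update out.

```python
# Sketch: a serverless function holds no local state between calls, so a
# simple per-user counter must round-trip through external storage. The
# `store` dict is a stand-in for a hypothetical external database service.

def handle_request(event, store):
    """Increment a per-user visit counter held entirely in external storage."""
    key = f"visits:{event['user']}"
    count = store.get(key, 0) + 1   # read previous state from the store
    store[key] = count              # write updated state back before exit
    return {"user": event["user"], "visits": count}
```

Because the host container may be reset after every call, any counter kept in a local variable would silently restart from zero; only the external store survives.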
While a transition from on-premises assets to serverless can be accomplished relatively swiftly, the move to serverless should be taken with deliberation. Not everything will be ready to go serverless at once. "Legacy software is anything you've already deployed, even if that was yesterday," Childers says. "Changes take time in any non-trivial enterprise environment, and often a rush to re-platform without rethinking or redesigning software can be a wasted effort. Software with pent-up business demand to make changes -- or new software projects -- are the logical projects to consider more modern architectures like serverless environments."
Not every workload "is a perfect candidate for serverless workloads," Olivera agrees. "Long running tasks that can be split into multiple steps or ultra-low-latency applications are good examples of workloads that may take a few years to be considered as good candidates for serverless platforms. I recommend approaching the migration as an experiment -- a new API that is being implemented with a reasonable SLA or a new microservice are good candidates. Then you can progress to single-page applications and some other backend functionalities of web applications. The lessons from running those experiments at scale should be enough to inform the next steps and prove the benefits of serverless architecture."
This blending of legacy environments with serverless will likely go on for some time. "Organizations should embrace a different path forward, combining their existing -- and often monolithic -- applications with modern APIs, which can be used from newer serverless components as functionality engines," says Claus Jepsen, deputy CTO and chief architect at Unit4. "Serverless architectures can complement and enrich the existing architecture by providing a new abstraction that supports building new services and offerings."
Security

To a large extent, serverless takes many security headaches off the table. The fairly large attack surface presented by traditional on-premises IT architectures -- spanning the network, host operating systems, services, application libraries, business logic, data and the user -- shrinks notably in serverless settings. Still, even serverless environments require due diligence and vigilance, says Ahuja. "Security teams must take into account the function code, what services that function can access, data access and misuse and certain types of denial-of-service attacks. The cloud provider that is hosting the function as a service is responsible for securing the underlying infrastructure."
Security worries don't necessarily go away -- they just change. "Because you will not be behind your own firewalls, you need to observe security protocols that you didn't have to worry about with on-premises computing and data storage," says David Friend, CEO of Wasabi. "Such things as protecting your encryption keys become very important. Almost all data stored in serverless cloud environments is encrypted, so even if someone hacks in, in theory they will only find useless encrypted data. But people are careless with their keys because they are not used to encryption keys being so important."
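Friend's warning is really about key custody: encrypted data at rest is only as safe as the key. The toy sketch below illustrates the principle with a stream-XOR, which is NOT real cryptography -- in practice you would use a vetted cipher from a maintained library -- but it shows why losing control of the key undoes the protection.

```python
# Toy illustration (NOT real cryptography) of why key custody matters:
# without the key the stored bytes are opaque; with it, they decrypt.
# A stream-XOR stands in for a real cipher such as AES.
import secrets

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte against the repeating key; the same call decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = secrets.token_bytes(32)       # the secret that must be protected
ciphertext = toy_encrypt(b"customer record", key)   # safe to store remotely
plaintext = toy_encrypt(ciphertext, key)            # recoverable only with key
```

An attacker who exfiltrates the ciphertext alone gets nothing useful; an attacker who also finds a carelessly handled key gets everything, which is exactly the failure mode Friend describes.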
Storage

Storage is another area in which serverless computing shifts the dynamic. Storage "is usually the trickiest part of serverless," according to Gwen Shapira, software engineer at Confluent. "Cloud providers have a large variety of storage options, some are advertised as 'serverless,' although you aren't limited to those." Scalability of serverless storage is a critical factor, she continues. "It's important to remember that the scalability of storage systems is influenced by the data model and design. So you need to choose both storage and data model that will fit your scalability expectations." The cost model is another consideration, she adds. "With serverless apps, you only pay for what you use. But storage introduces ongoing cost for the data you store, and sometimes fixed cost for provisioned capacity, so you need to take those into account and, if necessary, optimize the amounts of data you store."
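Shapira's cost point is worth making concrete: compute bills only when invoked, but stored data accrues charges every month it sits there. The per-GB rate and data volume below are illustrative assumptions, not vendor quotes.

```python
# Sketch of ongoing storage cost: billed on data volume (plus any
# provisioned-capacity fee) whether or not the application is busy.
# Rates are illustrative assumptions, not vendor quotes.

def storage_monthly_cost(gb_stored: float,
                         price_per_gb_month: float = 0.023,
                         provisioned_fee: float = 0.0) -> float:
    """Monthly bill for data at rest, independent of request traffic."""
    return gb_stored * price_per_gb_month + provisioned_fee

# 500 GB kept for a year costs the same whether the app is busy or idle:
yearly_storage = sum(storage_monthly_cost(500) for _ in range(12))  # ~$138
```

This is why Shapira suggests pruning stored data: unlike idle functions, idle bytes keep billing.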
Serverless requires new thinking about data storage. "This is because a serverless system needs an external storage plan to manage state and ensure data at rest is protected by other means," says Greg Arnette, technology evangelist at Barracuda. "Serverless is all ephemeral, with processes firing up and shutting down in bursts of activity lasting seconds or minutes. Serverless functions need to read and write data from other sources that offer persistence and API access."
Ultimately, storage "becomes a service in these environments," says Childers. "Storage services are network accessible and may take the form of a fully managed database offering or a newer API-based storage capability."
Serverless architecture is following the lead of cloud architecture, and that means lower costs and greater agility. "With the exception of certain edge cases that require very specialized services, cloud-based computing and storage are both becoming commodities," says Friend. "There is little reason for anyone to run their own storage or compute if they have reasonable bandwidth connectivity. If you aren't generating your own electricity or digging up the streets to lay your own fiber, why would you want to own your own storage or compute infrastructure? Most people don't realize it yet, but IT needs to focus on the strategic uses of data and not the hardware infrastructure."