Serverless architecture has become a way for many developers and architects to meet ever-changing business requirements by automatically accessing the capabilities and capacity they need, without worrying about operating systems, server provisioning, storage, and the rest of the plumbing beneath their applications.
To get a clear perspective on how serverless architecture can boost the technology fit in today's enterprises, we turned to Andres Rodriguez, founder and CTO of Nasuni (NAS Unified). Before Nasuni, he was founder and CEO of Archivas, creator of an enterprise-class cloud storage system that was acquired by Hitachi Data Systems and is now the basis for the Hitachi Content Platform.
Q: How should the existing or legacy architecture be phased out in the move to serverless architecture? Is it an instant cutover, or do you recommend a more gradual migration?
Rodriguez: How existing or legacy architecture should be phased out depends on the unique needs of the organization and the applications involved. But one thing is certain: large organizations want to get out of the data center business altogether.
The current move to a serverless architecture - albeit a misnomer because servers are involved - is an inevitable chapter in this evolution. IT executives and application developers don't want to be in the business of provisioning, maintaining and administering servers for the same reasons they first embraced the cloud.
Yes, some organizations will - by necessity - need to make a gradual migration from their existing legacy architecture. And while not an instant cutover, even these gradual transitions will seem fast, particularly when one considers that in many cases it took decades for the infrastructure they are replacing to take shape.
Q: The rise of cloud has laid the groundwork for serverless, then.
Executives demand the unlimited scale and on-demand capabilities that only the cloud delivers, and they don't want to deal with hardware and the manual processes it requires. Automation was always the cloud's secret weapon. If it is software and it has an API, it can be automated. That's why cloud permeates every layer of the technology stack today. Software-defined compute came first and led to hyper-converged platforms. Next came software-defined networking and then, software-defined storage.
Q: How does the storage component of serverless stack up to previous architectures? Are there additional considerations required for serverless?
A serverless architecture, like software-defined storage, provides flexibility for IT consumption. We are seeing another chapter in enterprises' movement away from the data center, while embracing the cloud as the natural choice for managing, protecting, analyzing and using their data. When I first created a cloud-native global file system, UniFS, to enable enterprises to use public or private cloud object storage for primary storage, many IT leaders didn't trust the cloud enough to consider putting their infrastructure - and in our case their files and unstructured data - within it. Now, a decade later, most IT leaders will openly tell you that the cloud is inherently far more resilient than their on-premises systems.
That realization changed everything. With a cloud-native file system and the services it made possible, enterprises achieve the control and performance of on-premises network-attached storage with a simple solid-state or virtual appliance that replaces yesterday's monolithic storage infrastructure, caches hot data on site for immediate availability and high performance, and saves an immutable, gold copy to public or private cloud object storage - where capacity can be spun up or down in moments as needed.
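The pattern Rodriguez describes - a small on-site cache serving hot data at local speed while every write also lands as an immutable "gold copy" in cloud object storage - can be sketched generically. This is an illustrative model of the general architecture, not Nasuni's actual implementation; the `ObjectStore` class is an in-memory stand-in for a real object store.

```python
from collections import OrderedDict

class ObjectStore:
    """Stand-in for cloud object storage; snapshots are never overwritten."""
    def __init__(self):
        self.versions = {}  # path -> list of immutable snapshots
    def put(self, path, data):
        self.versions.setdefault(path, []).append(bytes(data))  # gold copy
    def get(self, path):
        return self.versions[path][-1]  # latest immutable version

class EdgeCache:
    """Write-through LRU cache fronting the object store."""
    def __init__(self, store, capacity=2):
        self.store, self.capacity = store, capacity
        self.hot = OrderedDict()  # on-site cache of hot data

    def _keep_hot(self, path, data):
        self.hot[path] = data
        self.hot.move_to_end(path)
        if len(self.hot) > self.capacity:
            self.hot.popitem(last=False)  # evict coldest; cloud still has it

    def write(self, path, data):
        self.store.put(path, data)   # immutable copy goes to the cloud...
        self._keep_hot(path, data)   # ...and stays cached locally

    def read(self, path):
        if path in self.hot:         # cache hit: served at local speed
            self.hot.move_to_end(path)
            return self.hot[path]
        data = self.store.get(path)  # miss: pulled from object storage
        self._keep_hot(path, data)
        return data
```

Because the cloud copy is immutable and complete, evicting cold data from the appliance loses nothing, and cache capacity can stay small regardless of total data volume.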
These same services eliminate the need for costly backup and recovery infrastructure - and the hardware it requires - and enable true file synchronization and real collaboration with a global file lock while enabling enterprises to use object stores from Amazon, Dell EMC, Google, Hitachi, IBM, Western Digital and others as the new disk, but without any of the headaches that come with hardware.
Q: Is cloud-based compute power a concern? How can the need for back-end power be addressed in a serverless setting?
While it's true that high-performance computing and efforts to solve the most complex problems have typically relied on on-premises systems, those efforts have consistently been plagued by a lack of compute capacity. The cloud addresses that head on: in a serverless setting, it simply commoditizes compute power. In that way, compute power is not so much a challenge as an opportunity. Enterprises will simply pay for what they need, when they need it.
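The pay-for-what-you-use model can be made concrete with a back-of-envelope sketch. The handler below follows the common serverless signature of (event, context) used by platforms such as AWS Lambda, but runs as plain Python; the per-invocation and per-GB-second rates in `estimate_cost` are illustrative assumptions, not a vendor's actual pricing.

```python
def handler(event, context=None):
    """Minimal serverless-style function: work is sized by the request."""
    n = event.get("items", 0)
    return {"processed": n, "status": "ok"}

def estimate_cost(invocations, avg_duration_ms, gb_memory=0.128,
                  price_per_invocation=0.0000002,    # illustrative rate
                  price_per_gb_second=0.0000166667): # illustrative rate
    """Back-of-envelope bill: pay only for invocations that actually ran."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * gb_memory
    return invocations * price_per_invocation + gb_seconds * price_per_gb_second
```

The key property is that an idle function costs nothing: with zero invocations the bill is zero, which is exactly the contrast with a provisioned server that bills whether or not it is doing work.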
Q: How do security protocols and processes differ in a serverless environment?
Depending on the use case, there are nuances that developers and IT teams need to consider when looking at their security protocols and processes in a serverless environment. In all cases, encryption is a must. Equal care should also be given to who holds the encryption keys. Only the owner of the code or data in question should possess them.
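The key-custody principle Rodriguez describes - only the owner of the data should hold the keys - implies encrypting client-side before anything reaches the cloud, so the provider only ever stores ciphertext. The stdlib-only sketch below illustrates that pattern; the cipher (SHA-256 in counter mode as a keystream, with an HMAC integrity tag) is a toy for demonstration only, and a real deployment would use a vetted AEAD cipher such as AES-GCM.

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 over key || nonce || counter. Not for production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Encrypt client-side; the cloud only ever sees the returned blob."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # detects tampering
    return nonce + tag + ct

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, tag, ct = blob[:16], blob[16:48], blob[48:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, hashlib.sha256).digest()):
        raise ValueError("tampered ciphertext or wrong key")
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))
```

Because the key is generated and held by the data owner and never uploaded, the cloud operator - or anyone who compromises the stored blob - holds only opaque bytes.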
(Disclosure: I have conducted project work for Hitachi Data Systems, mentioned in this article, during the past 12 months.)