Serverless computing opens a potential new window onto compute management efficiency. We can now position applications and their related services more intelligently inside architectures where server provisioning is delivered dynamically, matched to the specific use case of any given workload.
Where we once had to spend a good proportion of system planning time setting up, tuning and scaling applications to work in a certain way, that responsibility is now (in theory at least) largely or wholly shouldered by the Backend-as-a-Service (BaaS) provider running our serverless architecture.
Dynamic, in the face of dynamism
But does the promise of any application with any workload on any serverless backbone always hold water? The most likely answer is: potentially, but only if we remain aware that the total environment itself is dynamic.
That statement is not intended to sound tautological; serverless gives us the opportunity to create stateless applications that run on event-based architectures. The application itself is a moving entity, and the number of function calls it makes for system resources will differ depending on user requirements at any given moment.
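To make that stateless, event-driven shape concrete, here is a minimal sketch. The handler name and event fields are illustrative, not any particular provider's API: the point is that all the state the function needs arrives in the event itself, so the platform can run it anywhere, any number of times.

```python
import json

def handle_event(event: dict) -> dict:
    """A stateless handler: every piece of state it needs arrives in the
    event itself, so any instance on any backend server can service it."""
    # Hypothetical event shape -- an order-total request.
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Simulated invocation -- in production the platform, not our code,
# decides when and where this runs.
response = handle_event({"items": [{"price": 2.5, "qty": 4}]})
```

Because nothing is held between calls, the backend is free to scale instances up and down underneath us, which is exactly the dynamism discussed here.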
But the backend has an element of dynamism too.
To keep this model working well, the most efficient applications will be those built with an inherent (or at least 'some') appreciation for server hardware runtimes and indeed runtime updates, the scheduling of which the application itself will not typically have sight of.
Nibble, little and often
Efficiently segmented, componentized applications that can 'nibble' little and often will fare better in a serverless world than monolithic applications with hugely complex workflows.
Of course, not every instance of every cloud for every app for every workflow for every collection of data types is equal. So keeping an Agile (capital A, as in the methodology), nimble, dynamic and altogether fast-paced mindset to the fore when building applications for serverless will serve us better in the long game.
These data management nuances and niggles mean that serverless is not an open and shut case. Serverless offers a clear route to server savings because we're not paying for idle time. But to use it efficiently, enterprises will need to define which parts of their IT stack can be classified as stateless, event-driven applications with extremely variable demand requirements.
Can serverless computing be perfected? Some say yes; some say these challenges make perfection something of a pipe dream. Developers tapping serverless will still need to determine and allocate a defined amount of RAM and CPU power for any given function, but they will never have a precise view into the server itself to ascertain actual RAM availability and clock speed.
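That allocation decision has a direct billing consequence, which a back-of-the-envelope calculation makes visible. The per-GB-second price below is purely illustrative (real rates vary by provider and region); the shape of the arithmetic is the point:

```python
def estimate_invocation_cost(memory_mb: int, duration_ms: int,
                             price_per_gb_second: float = 0.0000167) -> float:
    """Estimate the billed cost of one function invocation.

    memory_mb is the one knob the developer controls; duration depends on
    the (invisible) underlying hardware; the price constant is an
    illustrative assumption, not any provider's published rate.
    """
    gb_seconds = (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * price_per_gb_second

# Note the trade-off: a 512 MB function running for 200 ms bills the
# same GB-seconds as a 1024 MB function running for 100 ms.
```

So over-allocating RAM is not automatically wasteful if the extra headroom shortens the run, and under-allocating is not automatically frugal. Without visibility into the server, tuning this is empirical rather than exact.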
It doesn't stop there; almost all connected applications in the modern world of cloud will have external dependencies and configurations that they will need to draw upon for their core functions. Settings for these aspects of the application typically involve system-level access, something that doesn't exist in serverless, so they will need to be packaged into the application itself.
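One common way to package those settings with the application is to read them from environment variables set at deploy time, with in-code fallbacks, rather than from system-level configuration files the function cannot reach. The variable names here are illustrative, not a provider convention:

```python
import os

# Configuration travels with the function package instead of living in
# system-level files -- these names and defaults are hypothetical.
DB_HOST = os.environ.get("DB_HOST", "localhost")
API_TIMEOUT_SECONDS = int(os.environ.get("API_TIMEOUT_SECONDS", "30"))

def connection_string() -> str:
    """Build an external-dependency setting from packaged config alone,
    with no access to /etc-style system configuration."""
    return f"postgresql://{DB_HOST}:5432/app"
```

The same pattern covers API keys, endpoint URLs and timeouts: everything the function's core logic draws upon is resolved from what ships inside the deployment artifact.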
There is a lot of complexity to shoulder here (even though serverless itself is a route to lower backend complexity), but pre-configuration of defined application functions and automation to direct our use of libraries, APIs and other resources are being developed all the time.
Good deal, or no deal?
So it's a trade-off, but for some applications it's a trade-off worth making if we really can't profitably justify the time spent maintaining, debugging and monitoring the infrastructure needed to run our applications.
Most of us agree that serverless computing provides flexibility and potential cost savings, but it needs to be undertaken with an appreciation for the control and visibility trade-off that makes it possible.
Serverless computing -- or rather, efficiently deployed serverless computing for the right application and data workload use cases -- gives us the chance to free up developer creativity time. More programming focus can be directed towards the front end (User Interface, data visualisations, core functionality enhancements etc.) than the backend.
At this stage we can safely say that serverless computing can potentially make us happier users, but we need to navigate our way around the dynamic data management nuances and niggles that will typically characterise any contemporary enterprise IT stack.
Once that's done, we can relax, but only until the next function call -- this is always-on continuous computing, remember?