
Event-driven cloud computing: How and when it makes sense for your organization

Planning your cloud deployment in terms of computational tasks rather than computational capacity can lead to cost savings. See how to tap into the power and scalability of event-driven cloud services.
Written by James Sanders, Contributor

Image: iStock/panumas nikomkai

The advent of traditional cloud computing services -- like Amazon EC2 -- presented a business case for offloading the maintenance of physical servers onto a third party, eliminating the upfront costs of provisioning physical hardware. In doing so, organizations gained the ability to scale their computing capacity seamlessly to meet demand. For appropriately engineered workloads, event-driven computing promises to do the same. Rather than maintaining long-lived EC2 instances for computational tasks, event-driven computing allows functions to be executed on arbitrary servers when triggered, with companies billed only for the time each function takes to complete.

Using event-driven computing effectively

Obviously, event-driven computing services like AWS Lambda, Google Cloud Functions, and Microsoft Azure Functions are not suitable for mission-critical, always-on tasks. Core business operations -- mail and CMS servers, websites, and the like -- are poor fits for event-driven computing.

However, for routine computational tasks that are not highly time sensitive -- generating image thumbnails, structuring streamed data from IoT devices, or extracting text from images with OCR, for example -- event-driven computing can be used effectively without leaving an EC2 instance idling as it awaits the next task. From an engineering standpoint, this reduces the structural support needed to perform the task, as it eliminates the need for a queuing system. A minor security benefit also accompanies event-driven computing: Because the instance performing the computation is deactivated rather than left idling after a task completes, the potential attack surface is reduced.
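
To make the thumbnail case concrete, here is a minimal sketch of a Python Lambda handler triggered by an S3 upload. The bucket names, the 128x128 thumbnail size, and the bundled Pillow dependency are illustrative assumptions, not part of any particular deployment.

```python
# A minimal sketch of an AWS Lambda handler that generates a thumbnail
# whenever an image lands in an S3 bucket. Pillow is assumed to be
# bundled with the deployment package; bucket names are hypothetical.
import io
import boto3
from PIL import Image

s3 = boto3.client("s3")
THUMBNAIL_BUCKET = "example-thumbnails"  # hypothetical destination bucket

def handler(event, context):
    # S3 triggers deliver one or more records describing the new object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Fetch the original image into memory.
        original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # Resize to fit a 128x128 bounding box, preserving aspect ratio.
        image = Image.open(io.BytesIO(original))
        image.thumbnail((128, 128))
        buffer = io.BytesIO()
        image.save(buffer, format=image.format or "PNG")
        buffer.seek(0)

        # Write the thumbnail to the destination bucket under the same key.
        s3.put_object(Bucket=THUMBNAIL_BUCKET, Key=key, Body=buffer)
```

Nothing in this sketch runs between uploads; the function exists only for the milliseconds each image takes to process, which is exactly the billing model's sweet spot.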

Particularly long-running tasks -- such as video transcoding, database maintenance, and complex report generation -- are not well suited to event-driven computing services, as limits exist on how long a function may run. For Lambda, the default timeout is three seconds, though it can be extended to five minutes; for Google Cloud Functions, the limit is nine minutes. For tasks intended to run longer than these limits, programmatically spinning up an EC2 instance is a better strategy, particularly as Amazon has recently moved to per-second billing for EC2 instances.
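
For those longer jobs, a hedged sketch of the alternative: using boto3 to launch a short-lived EC2 instance that shuts itself down when the work finishes, so per-second billing stops as soon as the job completes. The AMI ID, instance type, and job script below are hypothetical.

```python
# A minimal sketch, assuming boto3 credentials are configured, of
# launching a short-lived EC2 instance for a task too long for Lambda.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_transcode_instance():
    # User data runs at boot: start the long-running job, then halt the
    # instance when it finishes so billing stops with the work.
    user_data = """#!/bin/bash
/opt/jobs/transcode.sh && shutdown -h now
"""
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # hypothetical AMI
        InstanceType="c5.large",          # hypothetical instance type
        MinCount=1,
        MaxCount=1,
        UserData=user_data,
        # Halting from inside the instance terminates it outright.
        InstanceInitiatedShutdownBehavior="terminate",
    )
    return response["Instances"][0]["InstanceId"]
```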

SEE: Research: Cloud vs. data center adoption rates, usage, and migration plans (Tech Pro Research)

Keeping an eye on cloud costs

Major event-driven cloud computing vendors bill on two factors: the number of invocations and the duration of each one, metered in 100ms increments. Because event-driven computing scales automatically to meet the demands of a given workload, it can reduce the cost of cloud deployments in cases where variable workloads can be offloaded to a service like Lambda, rather than left to idle on high-capacity instances.
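
As a rough illustration of how that metering adds up, the sketch below estimates a Lambda bill from invocation count, average duration, and memory size, using list prices published at the time of writing (about $0.20 per million requests and $0.00001667 per GB-second). Actual rates vary by region, change over time, and exclude the free tier.

```python
# Rough Lambda cost estimate; pricing constants are assumptions based
# on published list prices and will not match every region or date.
def estimate_lambda_cost(invocations, avg_ms, memory_gb):
    # Duration is billed in 100 ms increments, rounded up.
    billed_ms = -(-avg_ms // 100) * 100
    gb_seconds = invocations * (billed_ms / 1000.0) * memory_gb
    request_cost = invocations / 1_000_000 * 0.20
    duration_cost = gb_seconds * 0.00001667
    return request_cost + duration_cost

# e.g., one million 250 ms invocations at 512 MB: ~$2.70
print(estimate_lambda_cost(1_000_000, 250, 0.5))
```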

For extremely variable workloads -- imagine a task performed only at a specific time of day, but suddenly needed hundreds of thousands of times in succession so the computed results are quickly available for consumption -- both Google and AWS apply a default safety throttle on concurrent executions, which can be raised on request. Nominally, this throttle is intended to prevent processes from consuming large amounts of resources for extended periods, limiting the ability of malformed apps to run up large service bills.
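
The scaling half of that picture is straightforward to drive from code. Below is a minimal sketch of fanning out a burst of asynchronous Lambda invocations with boto3; the function name and record format are hypothetical, and actual concurrency remains subject to the account's throttle.

```python
# Fan out a burst of work as asynchronous Lambda invocations. Each
# "Event" invocation returns immediately, and the service scales
# workers to match, up to the account's concurrency limit.
import json
import boto3

lam = boto3.client("lambda")

def fan_out(records):
    for record in records:
        lam.invoke(
            FunctionName="process-record",  # hypothetical function name
            InvocationType="Event",  # asynchronous; don't wait for result
            Payload=json.dumps(record).encode("utf-8"),
        )
```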

Roadblocks to implementation

The 'stateless' nature of event-driven computing limits how individual compute tasks can be customized. Because execution environments are short-lived, it's not possible to install system packages on these machines at runtime; dependencies must instead be bundled into the function's deployment package. Tasks that rely on libraries that cannot be bundled this way may need to be re-engineered to remove those dependencies, or deployed on traditional cloud services like EC2.

For Lambda, functions are effectively separated into hot and cold. The VM instance used to execute a function is deactivated if the function has not run for 10 minutes, so invoking a cold function incurs spin-up time while a new VM is created. Relatedly, because there is no guarantee that each execution of a task will occur in the same environment as the last, environmental differences can occur between runs of a function.
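
A common workaround is a scheduled "keep-warm" ping -- for example, a CloudWatch Events rule firing every few minutes -- that the handler short-circuits on. The sketch below assumes a hypothetical keep_warm marker in the event payload; it is a convention between the schedule and the handler, not part of the Lambda API.

```python
def do_real_work(event):
    # Placeholder for the function's actual task.
    return {"status": "done"}

def handler(event, context):
    # Scheduled keep-warm pings carry a marker so no real work is done;
    # the invocation alone keeps the execution environment resident.
    if event.get("keep_warm"):
        return {"status": "warm"}
    return do_real_work(event)
```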

SEE: Special report: The art of the hybrid cloud (TechRepublic PDF)

Is event-driven computing right for your organization?

For existing projects, adopting event-driven computing will likely require extensive modifications to existing code, as functions must be restructured for external execution by your cloud provider of choice. The first step should be assessing which tasks can usefully be offloaded, then weighing the potential cost savings of event-driven computing against the programmer time needed to implement the changes.

For new projects, event-driven computing can be a powerful tool for increasing the scalability of your applications when used in appropriate contexts.

