Writing serverless code: The programming languages and everything else you need to know

A guide to the rewards and pitfalls when developing an application to run on a serverless platform
Written by Nick Heath, Contributor

If you've heard about serverless computing and are considering which programming languages to use, the good news is you're increasingly spoilt for choice. The bad news is you'll probably need to learn a new way of structuring your code.

Serverless computing is the name given to a model of cloud computing where developers have almost none of the overhead of managing infrastructure.

Instead, code is uploaded and activated in response to certain events -- an image resizing function triggered when a user uploads an image to an online service, for example. Developers are billed only for the time their code is running, down to the nearest 100 milliseconds. All the work of managing servers, containers or other infrastructure -- including scaling with demand -- is taken care of: the developer simply writes the function that's triggered by the event.

That's not to say these small functions are restricted to simple tasks such as image resizing. A wide range of parallelisable tasks are suited to the serverless model, such as transforming data fed into analytics engines and orchestrating calls to autoscaling engines. Take Amazon's AWS Lambda serverless offering, for example. A Lambda@Edge function could be used to select the type of content a web application should return to a user, based on their location and the type of device they're using. Another possible use could be a Lambda function that glues AWS services together, perhaps triggered by Auto Scaling events or CloudWatch alarms, and in turn calling other AWS services.
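To make the event-driven model concrete, here is a minimal sketch of what an image-resizing function might look like as an AWS Lambda handler in Python. The `(event, context)` signature and the shape of the S3 event record are standard Lambda conventions; the actual download-and-resize step is stubbed out, since in practice it would pull in illustrative dependencies such as boto3 and an image library.

```python
# A minimal sketch of an event-driven serverless function: an AWS Lambda
# handler invoked when an image lands in an S3 bucket. The resize step is
# stubbed out; a real function would use boto3 plus an image library such
# as Pillow to fetch, resize, and re-upload the object.

def handler(event, context):
    """Entry point Lambda calls for each S3 'object created' event."""
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical processing step: download, resize, re-upload.
        # image = download(bucket, key); thumb = resize(image, (128, 128))
        out_key = f"thumbnails/{key}"
        results.append({"source": f"{bucket}/{key}", "output": out_key})
    return results
```

The point is the shape, not the body: the platform invokes the function per event, and the function does one small job and exits.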


Which programming languages to use?

Some programming languages are widely supported across all the major serverless platforms (AWS Lambda, Google Cloud Functions, Microsoft Azure Functions), and are also well-suited to the event-driven paradigm of serverless computing.

"The ones that make sense are probably Node.js and possibly something like Python; there's also support for Java and .NET in all of these environments at this point as well," said Fintan Ryan, research director with analyst firm Gartner.

Increasingly, developers writing for serverless computing platforms can use the language of their choice. AWS Lambda functions can be written in Java, Go, PowerShell, Node.js JavaScript, C#, Python, and Ruby. Google Cloud Functions supports Node.js JavaScript, Python, and Go. Microsoft's Azure Functions supports a wider range of languages, including Bash, Batch, C#, F#, Java, JavaScript (Node.js), PHP, PowerShell, Python, and TypeScript. There's also IBM Cloud Functions, whose language support includes JavaScript (Node.js) and Swift, and Cloudflare Workers, which runs functions written in JavaScript or any language that can be compiled to WebAssembly.

The newly announced Google Cloud Run broadens language support even further, supporting any language you can run in a container, while AWS Lambda Layers allows developers to pull in additional code written in other languages.

James Hall is director of the agency Parallax, which used AWS Lambda to build a web app for the Euro 2016 football (soccer) tournament and more recently used a serverless platform to ingest data for a smart street lighting project. He believes serverless platforms are increasingly language agnostic.

"Lots of people are using different languages for runtimes, so I'd say the requirement to use Node.js has long gone," Hall said.

Working with the limitations of serverless

What's more important than the choice of language in serverless computing is how code is structured, with code generally broken into small blocks called functions that handle single tasks.

The recent UC Berkeley report Serverless Computing: One Step Forward, Two Steps Back set out what it saw as the limitations of the types of tasks that can be carried out using what it called the Function-as-a-Service (FaaS) model.

"In short, current FaaS solutions are attractive for simple workloads of independent tasks -- be they embarrassingly parallel tasks embedded in Lambda functions, or jobs to be run by the proprietary cloud services."

Due to the nature of the serverless model, these functions are triggered only by an event or HTTP request, and can't count on any state surviving from one invocation to the next. As such, if a developer wants these stateless functions to access data, or to pass data between functions, it's necessary to rely on external storage. Doing so adds latency, which means serverless isn't necessarily a good choice if your application needs fast, persistent access to state.
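The statelessness described above can be sketched as follows: any value that must survive between invocations is read from and written back to an external store. Here a plain dictionary stands in for that store purely for illustration; in a real deployment it would be a client for a service such as DynamoDB, S3, or Redis, and each marked line would cost a network round trip.

```python
# Serverless functions are stateless between invocations, so any state they
# need must live in an external store. This sketch injects the store as a
# plain mapping; in production it would wrap a real storage client, and the
# two marked lines would each add the network latency the article describes.

def count_visit(store, user_id):
    """Increment and return a per-user counter held in external storage."""
    key = f"visits:{user_id}"
    current = store.get(key, 0)   # network read in a real deployment
    store[key] = current + 1      # network write in a real deployment
    return store[key]
```

Each invocation pays the store's latency twice, which is exactly the trade-off that makes latency-sensitive workloads a poor fit.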

"They tend to be smaller, isolated nanoservices. You can write larger applications -- it just takes a little more work. But by and large they just deal with one quick thing," said Hall, adding that serverless platforms are good at scaling to meet spiky demand, where user numbers rapidly rise and fall.

Gartner's Ryan agrees there are limitations to the serverless platform that developers should bear in mind.

"On a high-level you're better off matching things to the limitations of the platform. If you have something that is highly latency dependent or that has an awful lot of I/O, these are things that are better suited to other deployment architectures," he said.

"You can code around them, but it's not necessarily something you want to spend your time doing."

Very long-running computational tasks are also unsuited to AWS Lambda at present, as each function is limited to a maximum run time of 15 minutes; Google Cloud Functions imposes a similar cap, of nine minutes. A Lambda function can also only be executed 1,000 times concurrently by default, although this limit can be raised on request.

This difference in data access and how long functions can run for is one of the reasons why you can't generally just port existing applications from a more traditional platform -- a virtual machine running in the cloud, for example -- over to a serverless platform. The underlying platform and its limitations are different, so the code must be structured to compensate for those quirks.

"A lot of applications written over the past couple of decades will expect to have a fixed disk sitting there with some filesystem on it, and maybe a connection to a database," said Hall.

"They're written in a way where they can't just boot themselves up and shut themselves down again, so you have to break them up into little bits," he added.

Building bigger applications on a serverless platform

That said, there are examples of more complex applications being run on serverless platforms, if developers are willing to work around their limitations.

The Serverless Computing: One Step Forward, Two Steps Back report cites PyWren as an example that provides a platform for running existing Python code "at massive scale" via AWS Lambda, using AWS S3 object storage for event continuation.

PyWren is not alone in attempting to expand the boundaries of what's possible with the serverless model. Gartner's Ryan points to the training website acloud.guru, which he says is entirely run on a serverless platform.

"The entire website, the whole thing and the video-streaming platform behind it, is all done using serverless," Ryan said.

One example of how these more complex applications can be pieced together is using function compositions, as outlined in the UC Berkeley report.

In these compositions, simple serverless functions transform data and then feed it into another serverless function, glued together by other cloud services that help funnel data from one function to the next. These functions generally manipulate state in some way, and are designed as event-driven workflows of Lambda functions, stitched together via queueing systems (such as SQS) or object stores (such as S3). The downside of this approach is the inherent latency when sharing data between the functions.
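The composition pattern above can be sketched as a staged pipeline: each stage is a small function, and a queue glues the stages together. Here a `collections.deque` stands in for SQS purely for illustration; in a real deployment each stage would be a separate Lambda function triggered by the queue, and every hop between stages would add the latency the report describes.

```python
# A sketch of function composition in the serverless style: small,
# single-purpose functions chained via a queue. collections.deque stands
# in for SQS; the extract/transform stages are illustrative.
from collections import deque

def extract(record):
    """Stage 1: normalise a raw record."""
    return record.strip().lower()

def transform(text):
    """Stage 2: derive a small result from the normalised record."""
    return {"word_count": len(text.split())}

def run_pipeline(records):
    """Drive the staged pipeline; the queue glues the functions together."""
    queue = deque(records)  # stands in for SQS between stages
    results = []
    while queue:
        msg = queue.popleft()
        results.append(transform(extract(msg)))
    return results
```

Each stage knows nothing about the others beyond the message format, which is what lets the platform scale and retry them independently.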

Parallax's Hall agrees that it's perfectly possible to build more complex applications using serverless platforms, given the right approach.

"There's a little engineering overhead when developing a serverless app, but it is possible to write larger applications with it," Hall said.

"You're just using a little bit more time in advance to make the infrastructure as code, and after that point you don't have to look after the infrastructure anymore, so you get the cost savings back a bit later on."

He gives the example of the website Bustle, which is entirely run on serverless architecture.

The question as to whether applications will eventually be redesigned top-to-bottom to run on serverless platforms is still open, but in the interim, Gartner's Ryan expects certain components within services will run on a serverless platform and work alongside larger, more complex applications hosted on traditional platforms.

"For traditional types of applications, if you're rearchitecting something you're unlikely to go all the way -- you're likely to end up with components in different places," Ryan said.

"For new applications it's feasible to develop things in an almost 100 percent serverless manner, but that involves a lot of composing things from other services," he added.

Developers don't only have to learn a new way of writing applications, but also of debugging them, according to Hall, who says the tools available for debugging serverless can be difficult to use and more limited than those for debugging traditional software. "It adds another layer of complexity in debugging between your functions," he said.

"A lot of companies seem to be racing towards finding that ideal way of visualizing and debugging between lots of different microservices," Hall continued. "For example, let's say you had a request that comes into AWS API Gateway and runs a Lambda function, which then needs to run five different Lambda functions in order to complete a job. When something goes wrong, it's quite difficult to introspect and see what's happened. You need to gather and monitor logs from lots of different functions, and potentially other third-party services, aggregate them together, and unpick it." The debugging tooling "isn't quite there yet," Hall added.

Vendor lock-in

Another key consideration is the risk of vendor lock-in, which stems from each serverless offering on the major cloud platforms being designed to work alongside a network of related services on the same platform -- using AWS API Gateway as a trigger for an AWS Lambda function, for example. The propensity for the serverless model to increase the risk of vendor lock-in was one of the criticisms raised by UC Berkeley in its Serverless Computing: One Step Forward, Two Steps Back paper, where it said: "Current serverless infrastructure, intentionally or otherwise, locks users into either using proprietary provider services or maintaining their own servers."

However, Parallax's Hall says lock-in isn't inevitable when building software for a serverless platform. He points to the Serverless Framework as providing a way for developers to write serverless applications that can be deployed across multiple cloud platforms.

"What they'll allow you to do is to write applications in a way that they can be deployed to different cloud providers," said Hall.

"It takes the base level of what's possible and makes the format for doing that standard."

As the serverless architecture starts to mature, new tools are also emerging that simplify the process of deploying applications on specific serverless platforms, such as the AWS Lambda-focused tools Claudia.js for Node.js JavaScript and Zappa for Python.

According to Hall, serverless is here to stay, and understanding how to build applications for serverless platforms will be part-and-parcel of a developer's skillset in the years to come.

"Serverless will start to make its way into all sorts of organisations, whether they knowingly choose serverless or not. It'll become a requirement for developers in new roles in the next few years."
