Part of a ZDNet Special Feature: The Art of the Hybrid Cloud

How ‘cloud-native’ applications are transforming IT, and why it took so long

Since the turn of the century, a movement in the software development field has gradually, deliberately upset the balance of power in computing. If you haven’t noticed it, you’re not alone.

For a major application such as an industrial control system (ICS), a content management system (CMS), or a hospital management system (HMS) to be "cloud-native," its entire lifecycle must exist on a cloud platform. It's developed or assembled there, it's staged and tested there, it's deployed there, debugged there, and continuously updated there. It's not "installed" on a server in a data center as some sort of permanent residency. And it's not converted into a virtual machine image just to make it portable across servers. It is designed for the cloud, which mandates fundamental changes not only to its architecture, but to the entire IT economy that supports it.

In a way, the evolution of server-side applications is reverting to the course it had been taking prior to the PC era, when x86 processors pushed mainframes and minicomputers out of the data center. A cloud-native application is made for the systems that host it, rather than having to be converted or staged in a virtual environment that hides the nature of the cloud from it.

The return of time-sharing

"On our present system about 35 users can get good simultaneous service, each with the illusion that he has complete control of the machine. Each user sits at a teletype typewriter, types out his program, and keeps entering corrections until his program finally works. This makes it both convenient and pleasant to use the computer."

                    -John G. Kemeny and Thomas E. Kurtz
                           The Dartmouth Time-sharing Computing System, April 1967

Since the dawn of computing, software has been fashioned for the machines designated to run it. That shouldn't surprise anyone. Dartmouth's John Kemeny and Thomas Kurtz essentially invented modern computing by devising a language meant to withstand trial-and-error programming: BASIC. The first true cloud computing platforms, and the most successful such platforms today, are direct descendants of their work. Their principle was that the programs that can make the best use of the machines they run on should be nurtured and developed inside those machines, rather than hashed out beforehand on paper and compiled separately.

"The Triumph of Virtue and Nobility over Ignorance" by Giovanni Tiepolo, approx. 1745.  From the Norton Simon Museum.

Made available through Creative Commons CC0 1.0

Platforms as services

"Cloud-native" computing is this same principle, extended to encompass cloud platforms. If we're being honest (now would be a good time) then we should acknowledge that "the cloud" is a single machine, at least from the perspective of the applications that run there. It's not really a foggy or other-worldly environment, but a cluster of processors linked by high-speed networks that just happens to span the planet. The languages designed for engineering cloud-native applications on this huge class of machine, are direct descendants of Dartmouth's BASIC.

First and foremost, cloud nativity renders the issue of where an organization chooses to host its applications a perennially open question. Cloud applications platforms are designed for portability. A cloud infrastructure today often comes with an applications platform such as Cloud Foundry (stewarded by Pivotal), Apprenda, or Red Hat OpenShift.

Soon the very phrase "cloud-native" may fall into disuse, like the tag on 1990s and early 2000s TV shows that read, "Filmed in high definition!"

Making sense of all the abstractions

Since the advent of C++ and other high-level programming languages, the design of software has been separated from that of its underlying hardware by one or more layers of abstraction. Programmers -- now known as developers -- have typically never had to consider the architecture of the hardware or the infrastructure supporting it.

"The cloud" (which is way too late to rename) is a machine, albeit one that spans the planet. Some of the very first cloud services, including the original "Windows Azure" back in 2008, were established as a means of staging new software, designed with the intent of running there. Despite what the original branding implied, software whose intent is to be distributed and not centralized -- in other words, to be run over the Web rather than a PC acting as a server -- takes on its own design.

Docker Engine

The Docker revolution of 2013 accomplished three things simultaneously:

  • Docker decoupled applications from the servers that ran them, by engineering a standard class of portable, software-based container.
  • It established a self-service deployment mechanism that made it feasible for developers to literally build locally, then act globally -- to stage a distributed application on a single machine, then push it to the cloud in a simple, automated fashion.
  • It forever altered the architecture of the server-based application, creating at least one style of programming, perhaps more, specifically suited for deployment in the cloud: the cloud-native application.
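The "build locally, act globally" workflow those bullets describe can be sketched with a minimal, hypothetical Dockerfile. The base image, file names, and registry below are illustrative assumptions, not details from any particular product:

```dockerfile
# A hypothetical containerized service. The same image that runs on a
# developer's laptop runs unchanged on any host with a container runtime.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
# The container, not the host, declares the port it listens on.
EXPOSE 8080
CMD ["python", "server.py"]
```

Staging such an application locally and pushing it to a cloud registry is then a pair of commands -- `docker build -t registry.example.com/myapp:1.0 .` followed by `docker push registry.example.com/myapp:1.0` (the registry name here is invented) -- after which any host or orchestrator with access to that registry can deploy it.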

Is this the same "cloud" we've talked about before?

Before we move forward with that phrase, let's settle the matter of what we think "the cloud" is today. We who cover technology for a living have a tendency to draw a hard boundary between enterprise data centers and cloud service providers, designating what an organization may own and operate for itself as "here" and the cloud as "over there."

This is not what the cloud concept actually means, or has ever truly meant. Your organization may be capable of hosting its own cloud-native applications, if it owns or even leases the servers that run them, on premises that it either owns or leases from a colocation provider such as Equinix or Digital Realty. What we've been calling "hybrid cloud," as though it were a strange but viable alternative, is now any cloud platform whose services may be part-owned and part-leased from a service provider such as Amazon, Microsoft, Google, or IBM, wherever those services are located.

Let's try to put this another way, in terms that make sense from the outset before someone redefines them to mean something else: A cloud may be any combination of resources, located anywhere on Earth, whose network connectivity enables them to function in concert as a single assembly of servers. An organization can own and operate its own cloud in its entirety, though it typically does not. Commercial cloud service providers (the three largest of which are Amazon AWS, Microsoft Azure, and Google Cloud) offer an organization the resources it may need to stage any or all of its applications in the space we now call the public cloud. Some enterprises do not own or operate any of their own computing resources, opting instead to lease them entirely from public cloud service providers.

So when we say an application is "native" to this type of cloud, what we mean is not only that it was constructed for deployment there, but that it is portable throughout any part of the space that this cloud encompasses. The cloud stitches together multiple staging areas for applications and their resources, including databases, into a single landscape. The cloud-native application perceives this landscape as its own. More importantly, it does not have to dig deeper into the infrastructures of the servers or data centers where portions of that landscape are being hosted.

How the home for a cloud-native application is constructed

Say for example that an enterprise manages a VMware vSphere environment. Originally, vSphere was intended to host virtual machines (VMs) that behave like physical servers, but rendered as software. Now, by means of an extension product called VMware Cloud Foundation, it can also host and manage the new breed of containerized applications, which are far more portable and which exist outside of VMs.

VMware CEO Pat Gelsinger explains VMware Pivotal Container Service to attendees of VMworld 2018 in Las Vegas.

Scott Fulton

As VMware announced last November, such containerized applications will be able to make use of resources such as computing power, storage, and database services from Amazon AWS. VMware made a similar pact with Google Cloud the previous year. As a result, an organization's hybrid cloud may consist of resources gathered together from the public provider and its own data centers. vSphere perceives these resources as a single space in which to manage applications.

So an application designed for such an environment, and perhaps within this very environment, would be "cloud-native."  As opposed to a server-based application written for a stand-alone Windows Server or Linux environment, the cloud-native version would be capable of deployment anywhere in this space at any time.

This kind of arrangement is by no means exclusive to vSphere. If an organization had its own OpenStack cloud on-premises, it could integrate its own private resources to some degree with AWS, Microsoft Azure, or Google Cloud. (Whether it does so with simplicity is a matter of open debate, but at least it's open.)  An organization with Microsoft's Azure Stack can build and deploy applications using Visual Studio (or its outstanding open source counterpart, VS Code) on its own servers in its own data centers, and integrate those resources with those available from the public Azure cloud as necessary.

And based on the information we have on hand today, we believe that an enterprise that subscribes to AWS Outposts should be capable of building an application geared for Amazon EC2, and deploying it to AWS servers located on the customer's premises. This will be a cloud-native application, but not necessarily in the public cloud unless and until the customer moves it there, or part of it there, intentionally.

Why cloud nativity suddenly matters now

Here's the concept in a nutshell: A cloud-native application is at the very least assembled, and at most entirely composed, on the cloud computing platform on which it is intended to run. Its entire lifecycle, which includes all the procedures and services involved in managing it, takes place there. Its databases are housed there. Its network connectivity is enabled there. And the people who build and maintain it may very well be customers of the cloud service provider rather than the provider's own employees or contractors.


The cloud-nativity of a server-side application is especially important today for the following reasons:

  • It alters the definition of an application's "version."  Everyone experienced with Windows 8, Windows 8.1, and Windows 10 is familiar with the arbitrary, often haphazard, way in which software versions have been numbered. "Evolution," from the perspective of older applications, takes place in fits and starts -- and in a few instances, giant leaps of faith. A truly cloud-native application can evolve the way a smartphone app evolves: gradually, incrementally, as often as several times per day, without the user having to care one whit about its build number.
  • It changes the entire computing landscape.  The cloud-native model enables a complete and thorough reconsideration of what constitutes a computer program. There's no clear reason -- not even an economic one -- why any class of application must be installed on a PC or a mobile device, except perhaps to accommodate a lack of connectivity, which almost no one today actually experiences.
  • It opens up a new prospective market for telecommunications companies seeking to build cloud computing centers (a necessary component of the 5G Wireless model) and which need supplemental sources of revenue to help pay for them. A telco could offer services from a "nearby cloud" to its enterprise customers -- perhaps a lower-cost platform for smaller businesses to stage, deploy, and manage custom applications.
  • It resuscitates a lost art and re-energizes the people who may best contribute to it. Before the arrival of the IBM PC, software for microcomputers was developed by amateur programmers, who shared their code with one another through users' groups, years before the advent of the BBS, the online service, and the web. The open source nature of cloud-native computing, at least for now and for as long as this situation can be maintained, encourages practitioners to educate each other in how to build better systems, functions, and databases.
  • It presses the reset button for the software industry.  The world where operating systems reigned supreme, and the one where "the Web is the platform," are as different from a computing ecosystem built around the cloud as World War II was from the Revolutionary War and the Civil War.

Comparing native applications to migrant applications

Most server-side software ever produced (i.e., not the apps that run on a PC or a phone, which are "client-side") was devised to be run on a conventional machine -- a box with a processor, its own memory, some local storage, and a PC operating system. When that older class of software runs in the cloud, a virtual machine (VM) is created for it, which is itself a kind of program. A VM pretends to be a conventional machine for the sake of the application, which oftentimes doesn't know the difference. But the VM is portable and easily managed. Should it become corrupted, another instance of it can be cloned from an earlier image to take its place ("instantiated").

A cloud-native application doesn't need to pretend. If it does run in a VM, as many do, then it is capable of becoming aware of the cloud platform that is running it, which gives it more control over the manner in which it's managed and distributed throughout a network.

This gives rise to two schools of thought, both of which are equally valid premises for cloud-native methodologies:

  • At last, the application can be keenly aware of its environment.  As a guest of a virtual machine, an application never really knows the details of the true nature of the infrastructure supporting it. As a result, it cannot learn how to improve its own performance. Now, by way of components serving as remote agents inside containers (one prominent example being NGINX Plus), a component of a running application may acquire live data about certain aspects, some of them admittedly esoteric, of its configuration and performance. With that data, at least theoretically, the application could make certain decisions about its configuration and further distribution, orchestrating some of its own functions and, in so doing, evolving itself to suit changing purposes and situations.
  • At last, the application doesn't need to know a whit about its environment.  There's a more vocal school of thought in recent months that touts the benefits of developers building their applications while solely focused on the needs of the program and the interests of its user, while leaving the management of the underlying hardware to the environment, and the distribution of the software to the orchestrator (most often these days, Kubernetes). These are the advocates of so-called serverless architecture. The first time-sharing systems, they say, abstracted the details of computer operation from the functions of the program, and that abstraction may be just as valid and necessary today.

Do you need microservices to be truly cloud-native?

Today, the latter school of thought is, by far, the most vocal, though its core philosophy has yet to thoroughly gel. At the heart of the serverless value proposition is the message that abstractions free developers to think and work entirely in the realm of the problems they're trying to solve. This goes against the notion that an organization's IT infrastructure is at the heart of its business, and sets the pace for its business model.  No longer is all this "strategic alignment" between the business and technology wings of the organization necessary.

But from there, advocates go on to argue in favor of the decomposition of monolithic applications in favor of a microservices-based model, where individual components coalesce toward common objectives. Specifying what those common objectives are requires exactly the type of strategic alignment that serverless advocates say they eschew, so that stakeholders can get together on the same page.

Some advocates have also gone on record as supporting a concept they call "cloud-native DevOps," which would align the DevOps (developers + operations professionals) movement with the move toward both serverless and microservices architectures. The key problem with this idea is the lack of any evidence that the Ops part of that movement has signed onto it. If developers are, as serverless advocates describe them, "freed" to pursue their own ideas on their own timetables, then such a separation would work against the notion of coalition with Ops, whose responsibilities include setting timetables and making sure developers are mindful of their infrastructure.

The evolution to cloud-native in the real world

Let's stop talking about all these things in the abstract, and take a more practical look at what this actually means, in the context of the history of one of the most common classes of server-side application:

A content management system (CMS) is a fairly sophisticated database manager disguised, at least partly, as a word processor. Originally it stored and rendered Web pages as static documents. But as consumers needed the Web to be more functional than archival, the CMS architecture became centered around two processing engines:

  • One that retrieved elements of content from a repository and assembled them into HTML components, at one point called the content delivery application; and,
  • Another which enabled administrators and editors to create the core components or their prototypes, as well as create the styles which these components would follow, called the content management application.
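The two-engine split those bullets describe can be sketched roughly in Python. All class, method, and content names below are invented for illustration and not drawn from any actual CMS:

```python
# Sketch of the classic two-engine CMS architecture: both engines share
# one repository but serve different classes of user.

class Repository:
    """Shared content store used by both engines."""
    def __init__(self):
        self.items = {}    # item id -> content element
        self.styles = {}   # style name -> style definition

class ContentManagementApp:
    """Used by administrators and editors to create components and styles."""
    def __init__(self, repo):
        self.repo = repo
    def save_item(self, item_id, body):
        self.repo.items[item_id] = body
    def define_style(self, name, css):
        self.repo.styles[name] = css

class ContentDeliveryApp:
    """Retrieves elements from the repository and assembles HTML."""
    def __init__(self, repo):
        self.repo = repo
    def render(self, item_id, style):
        body = self.repo.items[item_id]
        css = self.repo.styles[style]
        return f'<div style="{css}">{body}</div>'

repo = Repository()
manage = ContentManagementApp(repo)     # the content management application
deliver = ContentDeliveryApp(repo)      # the content delivery application
manage.define_style("lede", "font-weight: bold")
manage.save_item("a1", "Hello, Web")
html = deliver.render("a1", "lede")
```

The point of the sketch is the coupling: both engines must agree on the repository's shape, which is part of why such systems evolved in large, carefully coordinated releases.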

Models of monolithic architecture

The first genuine CMS systems automated the generation of Web pages, based on elements in the repository that were being continually updated, replaced, and amended. The "portals" for such systems -- for instance, Vignette -- were installed on the PCs of the people tasked with using and administering them. As a result, when the system as a whole was sluggish or anemic, its users were the first to suffer, and fixing these ailments required users to deal directly with the IT department, in hand-holding sessions where it was never clear whose hand was leading whose.

In retrospect, the architecture of these first CMS systems, as well as the "knowledge management systems" they inspired, has been called monolithic. When an application resides entirely on a PC, its developers can ensure all the pieces fit together properly before the application is distributed. In a networked system, the repository is behind a server, and in-between the server and its clients, one encounters a lot of middleware. So the pieces don't always fit very well.

With any monolithic application, innovation takes place in very highly planned, coordinated steps. One example of a well-coordinated product plan is Kentico, a CMS for marketing and e-commerce. Having first emerged on the scene in 2004, Kentico soon adopted and maintained a major release cadence of about one version per year. This has been to Kentico's great credit, as its customer base perceived this as emblematic of the system's continuity. As one blogger wrote in late 2017, "Each new release is worth talking about.  I can say that not just because I am a Kentico enthusiast, but really because each major release of Kentico tends to add something that community is demanding."

The history of release strategy

In the era of client/server architecture, the timing of major releases had become an art form unto itself. As veteran analyst Kurt Bittner and consultant Ian Spence advised in 2006 for their book Managing Iterative Software Development Projects, the developer of an application should map out its release cadence in the early stages of its business planning, among other reasons to minimize risk by spreading out evolution over time. Bittner and Spence wrote:

The number of evolutions (major releases) required is usually dictated by business concerns, balancing the rate at which the business can absorb new capabilities against the need for new capabilities. . . Each major release provides a clear end-point toward which everyone works, one that is often missing if the development is planned as an undifferentiated series of ongoing iterations.

If releases are planned too frequently, they warned, developers could run the risk of introducing so much new overhead so soon that users wouldn't be capable of appreciating the value these features added to their own organizations. Release planning, at this period in history, was a finely calculated affair, since it was clear to most everyone that the engines of these applications -- as is certainly the case with a CMS -- are the core of their businesses.

The economic risks of potential downtime, and the certainties of continuity issues, are too great for any organization to undertake unless and until the potential value of forthcoming features -- however long they may have waited in the wings -- outweighs them. (Remind me to tell you sometime about how many years one publisher waited until it felt confident enough to make the switch to a new system that enabled it to boldface its first paragraphs.)  This is why Spence and Bittner warned that the planning of release cycles should be carefully timed to the needs of the business.

What these authors presumed, however, was that each instance of a CMS could be tuned to the unique needs of its exclusive customer -- which is not how the market ended up working.

For a plethora of reasons, including a sale of its parent company and a massive renaming and relaunch of the product, the gap between Vignette version 7 and version 8 of its successor product, Open Text, was about seven years. But during that time, a surprising number of its customers held tight -- not happily by any means, and in some cases, it seemed, under duress. Once major publishers had committed themselves to as many as seven major releases, some believed there was no way to adopt an alternative platform. As this 2011 AdWeek story entitled "The Trouble with Back-Ends" chronicled, publishers were abandoning their brand-name CMS suites, building their own platforms instead around the open source Drupal framework. . . and discovering success.

In some cases, the effort to maintain the stability of older Vignette instances made the jump to version 8 far too great a risk. As Government Technology reported in 2012, the Georgia Technology Authority characterized the exodus as a "force fit."

Attack of the headless hybrid

At about this time, the rise of Web architecture and the onset of HTML5 addressed the monolith issue with the introduction of so-called "RESTful design."  Here, portals are replaced with browser-based front ends that communicate with servers on the back end by means of API calls. We could fill volumes on how this methodology altered front-end architecture. What we would miss is what happened on the back end: the system no longer needed two (or more) engines to process the input from two (or more) classes of user. Instead, authentication and rights services can validate, filter, and route API calls to the appropriate handler. What's more, using a sophisticated reverse proxy such as NGINX, these API call handlers may be multiplexed and distributed among server nodes, enabling the CMS to respond better to varying workloads.
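A hedged sketch of that routing idea, in Python: a single entry point validates a caller's token and dispatches the API call to the appropriate handler, replacing the old separate engines. The token scheme, roles, and handler names are invented for illustration:

```python
# Minimal sketch: one entry point validates and routes API calls.
# Editors may reach everything; visitors only the delivery handler.

TOKENS = {"editor-token": "editor", "visitor-token": "visitor"}

HANDLERS = {
    # (method, path) -> (minimum role, handler)
    ("GET", "/content"):  ("visitor", lambda: "rendered page"),
    ("POST", "/content"): ("editor",  lambda: "item saved"),
}

def dispatch(method, path, token):
    role = TOKENS.get(token)
    if role is None:
        return (401, "unauthorized")
    entry = HANDLERS.get((method, path))
    if entry is None:
        return (404, "not found")
    required_role, handler = entry
    if role != required_role and role != "editor":
        return (403, "forbidden")
    return (200, handler())
```

In a real deployment the dispatching and authentication would live behind a reverse proxy, and the handlers could be replicated across server nodes independently of one another, which is precisely what makes the workload easier to balance.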

A new and intriguing CMS architecture arose at this point. Called "headless design" (quite a risky moniker, if you ask me) it eliminated the portal altogether, presenting in its place a single engine that has no standard interface between itself and its control programs. This would free developers to build any kind of front end they needed, over the Web or elsewhere, and continue the development track for their front end independently of the back end. This way, conceivably, features having to do with manageability and productivity could be implemented at a faster pace, without waiting for the next major release of the CMS' repository, or what headless architects call the "content hub."
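To illustrate the decoupling (with names and data invented for illustration, not taken from any vendor's API): the content hub serves plain structured content, and independently developed front ends each decide how to present it, on their own schedules:

```python
# In a headless CMS, the hub returns structured data; presentation
# lives entirely in the front ends, which evolve independently.

hub_response = {            # what a content hub's API might return
    "title": "Exhibit Opens",
    "body": "Doors open at nine.",
}

def web_front_end(entry):
    """A browser-based presentation of the hub's content."""
    return f"<h1>{entry['title']}</h1><p>{entry['body']}</p>"

def print_front_end(entry):
    """A plain-text presentation for a print workflow."""
    return f"{entry['title'].upper()}\n\n{entry['body']}"

# Two different presentations of the same hub content:
html = web_front_end(hub_response)
text = print_front_end(hub_response)
```

Either front end can be rewritten, redeployed, or replaced without touching the hub, which is the whole appeal of severing the head.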

Inside the British Museum in London

Scott Fulton

Finally, the cloud-native model emerges

Yet moving to a headless model is not a seamless transition but an exodus. Since we've established that the next evolutionary leap forward is over a rather large gulf anyway, organizations are questioning the relative value of moving their existing CMS methods and assets into a headless model and staging that model on a cloud platform, compared to completely rewriting their model using a cloud-native framework.

The latter path would give organizations freedom to experiment with serverlessness, which would appear to align closely enough with headlessness. And by adopting microservices, organizations could also move forward with another concept that has been gaining significant traction: continuous integration and continuous delivery (CI/CD). Under such systems, organizations have timed their release cadences to be as much as 2,500 times faster (yes, those are zeroes, not a % mark that was misread) than companies following the classical Bittner and Spence methodology, with marked improvements in productivity, profitability, and even pleasure in their work.

In an effort to address these emerging questions, new companies including Contentful are producing what they describe as "composable modules" -- components that may be assembled like building blocks on a cloud platform. Think of these modules as pre-mixed, pre-measured ingredients for an organization to build its own cloud-native CMS -- or rather, a system for managing its publications that replaces the CMS as we know it.

One recently published case study [PDF] describes how The British Museum hired a software development firm to assemble Contentful's modules into a single entity that all of the Museum's publishing divisions could use, each in its own way -- as though each arm, including webcasts and print, had its own CMS. The Contentful system points the way toward a new method of assembling and evolving applications, based largely on the needs of the users at the time, and implemented in an expedient fashion rather than a cautious one.

Reconnoiter

This is how the cloud-native application model is changing the discussion, and has begun to change the data center:

  • Automating the deployment of features and components, no matter how trivial or how extensive they may be, eliminates the risk factors traditionally involved in implementing version updates.
  • With these risks no longer in the picture, organizations are free to think further forward -- to take charge of what they want their information management systems to be and how they wish those systems would behave.
  • Now, organizations can afford to hire small teams of developers to make contributions and amendments to big projects, giving them all the benefits of "rolling their own" applications suites without investing in a complete reinvention of the wheel each time.
  • With a cloud platform extended across public and private premises, organizations have the freedom to lease or own as much or as little of their own data center assets as they can manage at the time, and to extend their entire applications environment across both realms.
  • Yes, yes, organizations can try out serverless, microservices, and these wonderful new concepts that made Netflix the dynamic organization it has become. But the cloud in itself is already an alien landscape for most enterprises, and such concepts as microservices may as well be other-worldly intelligences speaking unknown languages. Conceivably, the delivery pipeline model introduced by CI/CD can give these enterprises the latitude they need to try new things at their own pace, iterating rapidly, but in small bites.
