Never mind the risk of lock-in: However much you want to go all in on a particular cloud vendor, the rest of your organization...does not. Or has not. This is the ugly truth of all enterprise architecture, a truth that the cloud has not improved: It's a mess. That mess isn't a product of incompetence, but rather of enterprise infrastructure springing up to solve particular needs at particular times.
As such, although CIOs may dream of unified infrastructure standardized on one or two strategic vendors, the reality of enterprise infrastructure is that applications will be split between disparate public clouds, old-school on-prem resources, and private clouds. While managing this morass of apparently conflicting infrastructure can seem daunting, there is hope for CIOs living the multi-cloud dream/nightmare, one that can deliver "diversity in [an enterprise's] underlying fabric but uniformity at the app layer," as Red Hat CMO Tim Yeaton explained it to me.
At least, that's the promise.
Brokering the hype
Multi-cloud is a strategy for some, perhaps as a way to improve disaster recovery/failover, or perhaps as a way to optimize workloads based on a particular cloud's strengths, but it's a reality for all. In a world driven by developers, no CIO can dictate a monogamous cloud relationship. So IT is left to minimize collateral damage and try to unify infrastructure resources.
Into this maelstrom of resources step the cloud brokers, whether technical (delivering a coherent view of performance across clouds) or business (managing billing, paperwork, and so on, associated with multiple vendors). Early on, cloud brokers were touted as a "must have" for enterprises. Today, that recommendation smells a bit fusty and overly idealistic. As Danny Bradbury has written, "It's a utopia to imagine that a cloud customer will be shifting containers around on a minute-by-minute basis between AWS and Rackspace."
Even if it were feasible, RedMonk analyst James Governor has cautioned, "If organisations are adopting multi-cloud for portability reasons, rather than to take advantage of the respective strengths of particular clouds for particular apps and workloads, they're going to have a tough job justifying the management overhead for anything outside the most basic Infrastructure as a Service workloads." That management overhead is real, and it is brutal. Small wonder, then, that popular blogger Cloud Opinion calls the notion of moving workloads between different clouds based on shifting dynamics like pricing "a pipe dream and vendor marketing."
Compounding this problem is the increasing richness of cloud services, following on Governor's point. In a world awash with basic compute and storage, it's relatively easy to move workloads between providers. This, however, isn't the world we live in.
Enterprises increasingly embrace different clouds for different strengths. Each of the major clouds offers credible machine learning services, for example, but Google generally gets the nod as the frontrunner. So many enterprises will turn to Google for machine learning, AWS for serverless computing with Lambda, Microsoft Azure for modernizing their legacy applications, and so on.
Such cloud differentiation makes the likelihood of multi-cloud management ever harder. As cloud luminary Bernard Golden told me, "While it appears attractive to use a management tool that encapsulates the individual cloud providers and provides a single management framework, since it promises to reduce costs by amortizing training and employee costs across a greater breadth of applications, in practice it typically means using a lowest-common denominator application management approach, which often forfeits use of functionality that resides within a provider's IaaS/PaaS offerings."
In other words, if you want the best of AWS, Microsoft Azure, and Google Cloud, it's going to be hard to manage that 'best' in a central, cross-cloud tool. This leaves enterprises in the semi-portable workload world they inhabited long before cloud promised to fulfill their wildest dreams.
Abandon portability hope, all ye who enter the clouds...?
Brother, can you spare some portability?
Not necessarily. At least, not completely. As Rishidot co-founder Krishnan Subramanian told me, "Often a right platform abstraction can help to tame the complexity in multi-cloud environments. Right abstraction not only reduces the ops overhead but also plays a critical role in developer productivity."
One way to achieve this abstraction is through a PaaS tool like Red Hat OpenShift or Pivotal Cloud Foundry. Indeed, Yeaton showcased the ability to give developers a unified app platform even as the underlying infrastructure gets abstracted away. In a subsequent discussion, Red Hat product management director Chris Morgan advised, "You need to have the means of abstracting away the things that make your code unique. I shouldn't have to care where the infrastructure is coming from."
The problem, however, is that some things aren't easily abstracted away. For example, each of the major public cloud vendors has introduced services that are exclusive to them, even if the general idea (AWS Lambda, for example) can be found on rival platforms. "With great power comes great lock-in," Governor rightly warned, hitting on the idea that the more developers embrace the unique aspects of a cloud platform, the tighter they're wedded to it.
Along the way, they'll build data silos that are arguably even more difficult to escape than the services lock-in. As developer Brian LeRoux told me, "Lock-in [is] not at the function level. (Shim is trivial.) [It] happens at the data layer: DynamoDB, etcetera are inherently non-portable."
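LeRoux's distinction can be made concrete with a sketch. The handler signatures below are simplified stand-ins for provider entry points, not real SDK APIs, and the provider-specific details are illustrative assumptions:

```python
# A minimal function-level "shim": normalize each provider's event shape
# into one internal format, so the same business logic runs anywhere.
# Handler signatures are simplified assumptions, not real SDK contracts.

def business_logic(payload: dict) -> dict:
    return {"greeting": f"Hello, {payload.get('name', 'world')}"}

def aws_handler(event, context):
    # Lambda-style entry point: an event dict plus a context object
    return business_logic(event)

def gcp_handler(request):
    # HTTP-function-style entry point: a request-like object with .json
    return business_logic(request.json)

# The shim really is trivial -- a few lines per provider. What does NOT
# move with it is the data: a DynamoDB table referenced inside
# business_logic would have to be migrated and re-modeled, not merely
# re-wrapped.
```

The design point is that portability at the compute layer costs a handful of adapter functions, while portability at the data layer costs a migration.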
Which brings us back to the essential difficulty of successfully navigating a multi-cloud world: data gravity. Unique services encourage developers to build on multiple clouds, and the costs associated with moving data between clouds, or even between different regions within the same cloud provider, make it prohibitive to dig oneself out of the multi-cloud hole.
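That gravity is easy to quantify with back-of-the-envelope math. The egress rate below is an illustrative assumption (public cloud egress has historically run on the order of $0.05 to $0.09 per GB), not a quote of any provider's current pricing:

```python
# Back-of-the-envelope cost of moving a data set out of one cloud.
# The $/GB rate is an illustrative assumption, not a quoted price.

def egress_cost_usd(terabytes: float, rate_per_gb: float = 0.09) -> float:
    gigabytes = terabytes * 1024
    return gigabytes * rate_per_gb

# Moving a modest 50 TB data lake between clouds, just once:
print(f"${egress_cost_usd(50):,.2f}")  # $4,608.00 at the assumed rate
```

And that figure covers bandwidth alone, before any re-engineering of applications built around the source cloud's data services.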
Service broker savior
This is a hard problem to solve, with no magical solution. If I have app nodes running across clouds but in the same cluster, I have two different networks. How do I reconcile them? How do I ensure an app uses data local to the nearest cloud resources? In our conversation, Morgan suggested that the right answer is to "Focus on how the community is resolving these problems."
By embracing the community's Open Service Broker API, Red Hat, for one, hopes to expand the reach and richness of OpenShift, a platform it already characterizes as "the new Red Hat Enterprise Linux." In other words, as Red Hat's Daniel Riek has posited, "By expanding RHEL from the traditional binary application runtime on a single server into a scalable platform for orchestrated, multi-container applications and micro-service architecture, OpenShift delivers the common runtime for traditional and cloud-native containerized applications across the hybrid cloud infrastructure options."
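For the curious, the Open Service Broker API that underlies this community effort is a small REST contract: a broker advertises a catalog of services, and a platform provisions and binds them through a handful of endpoints. The sketch below follows the spec's catalog shape, but the service, plan, and IDs are hypothetical examples, not a real broker:

```python
# Simplified sketch of an Open Service Broker API catalog response.
# Field names follow the spec's /v2/catalog shape; the service and its
# IDs are hypothetical examples.

catalog = {
    "services": [
        {
            "id": "example-db-service-id",          # hypothetical GUID
            "name": "example-database",
            "description": "A database offered through a broker",
            "bindable": True,
            "plans": [
                {
                    "id": "example-small-plan-id",  # hypothetical GUID
                    "name": "small",
                    "description": "Shared instance, minimal resources",
                }
            ],
        }
    ]
}

# A platform such as OpenShift or Cloud Foundry fetches GET /v2/catalog,
# provisions with PUT /v2/service_instances/{id}, and binds with
# PUT /v2/service_instances/{id}/service_bindings/{binding_id}.
```

Because the contract lives above any one vendor's infrastructure, the same broker can surface services to developers regardless of which cloud hosts them.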
Take Red Hat out of the description, however, and you get a sense of how the open-source community wants to tackle this multi-cloud problem (and opportunity). Open source is all about fostering choice, not limiting it. As such, the idea of neutering rich and unique services from AWS, Microsoft Azure, and Google Cloud is not a winning strategy. Rather, the open-source community is trying to provide common APIs and platforms on top of this rich and variegated infrastructure, whether open or closed, so that developers can get more done.
"Convenience," Governor says, "is the killer app." To date, that convenience has come from building on individual, siloed clouds. Going forward, the open-source world wants to bring all these clouds together with existing on-prem and private cloud resources, making the convenient...even more convenient.