What makes Google Cloud Platform unique compared to Azure and Amazon

Since the 1980s, there’s been a rule that three major competitors don’t last long in a technology market. For Google to stay relevant and remain in contention, it has to keep innovating and changing the rules.
Written by Scott Fulton III, Contributor

There is one very significant, very often overlooked difference between today's cloud services market and yesterday's PC software market: Software depended on a sales channel, a network of independent organizations that ensured the availability of software packages through resellers. With the modern cloud, the service provider is the channel provider. The applications and infrastructure you or your enterprise use are cultivated, deployed, and delivered by the same organization. So perhaps it is no wonder that Google -- the dominant web search provider for most of this century -- is among the world's top three cloud service providers.



"Progress (The Advance of Civilization)" by Asher B. Durand, 1853.  Photograph in the public domain.

"Today, most of the world's enterprise computing still happens on-premise," stated Sundar Pichai, Google's CEO, during his keynote address to the Google Cloud Next 2019 conference in April (though he means "on-premises"). "It hasn't moved to the cloud yet, because the path forward is complex and daunting, and full of difficult decisions. How do you modernize in-place without having to jump completely to the cloud? How do you bridge incompatible architectures while you transition? And how do you maintain flexibility and avoid lock-in?"


The questions Pichai outlined here speak to Google's entire business strategy for cloud computing, and why it focuses its efforts on certain services more than others:


It's too difficult for enterprises to move their IT assets to the cloud

When cloud computing began, the cloud was presented to enterprises and the public at large as the destination point of a massive exodus. As it turns out, the cloud is not really a place. That is to say, no enterprise will "live" in Google Cloud, or AWS, or anywhere else in the information space. Established businesses don't really perceive weaning themselves off their own data centers, and relocating their systems to someone else's cloud, as a pragmatic or cost-effective goal.

So the business model for GCP has focused not so much on virtual machines or replicated servers (where Amazon got its start) as on services and distributed applications. It offers a variety of relational and non-relational database engines, file and object data stores, and highly automated application deployment services. All of these products are geared toward businesses interested in building new applications in the cloud, or replacing some of their existing data center applications with cloud-based apps. In other words, not a destination but a transition.

In so doing, Google has drawn its main competitors, AWS and Azure, away from their comfort zones in virtual servers and productivity systems, and has successfully made them compete on terms more amenable to Google.

Modernization can take place on-premises as well as in the cloud

Google Cloud Platform is very clearly componentized. It leases the use of parts of its grand machine. Though it would love for enterprises to adopt all these parts together as one bunch, Google knows from experience (including the experiences of others, one of whom will remain nameless because we don't want to upset Microsoft) that the best evangelism with respect to this grand vision will come from intermediate successes. If enterprises can build new applications organically, blending the assets they own with those they lease from GCP, they can experience success along the way -- and that success could lead to affinity for Google in a market that's becoming more multi-cloud each month.

Also: What Google Cloud Platform is and why you'd use it 

Only very recently has AWS begun offering hybridized services, which take on-premises and hybrid deployments into account. Microsoft had some early success with Azure Stack, a hybrid cloud platform which utilizes the same software and management system as Azure in the cloud. Google's entire cloud philosophy from the start, it would appear, has been to encourage hybridized development on a component-by-component basis (although Azure Stack did upstage both Google and AWS in delivering on-premises, cloud-like management of the entire platform). In other words, Google didn't have to switch tracks.

Transitioning can still bridge incompatible architectures

You can spot the enterprise that wrestles with incompatible architectures in its data centers (both physical and virtual) by how intensively it monitors them. Monitoring should be a routine part of data center management. In practice, it's a rigorous procedure in which enterprises tend to invest only when they're trying to isolate and remove bottlenecks.

The operations model Google was driving toward at GCP's inception is something it calls "NoOps" -- essentially the ability for workloads to sustain themselves on the platform without the customer having to monitor them. To accomplish this, Google has sought to build a network that works on its own terms, and in its own context. That is to say, Google doesn't want to replicate existing data centers, but rather stage workloads in a more modern, distributed, automated way, based on the lessons Google learned from re-deploying its search engine and other services in its own facilities.

Over the past few years, public cloud customers have responded to this methodology by saying, "But we're not Google." Quite smartly, Google has responded to this assertion by pointing out that, to be honest, no one is. The tools Google created to address the problems its own data center had are scalable both down and up. In fact, the brilliance of Google's engineering is that its creators conceived a system far beyond even Google's requirements at the time, and scaled it down to meet its initial demands.

Enterprise IT didn't come all this way just to become locked in again

If there's anything that capitalists hate in a successfully capitalist economy, it's successful capitalists. Google's business philosophy has, from its inception, taken on cleverly socialist overtones. It adopts the principle of openness, for instance, which it defines as leaving open all avenues of choice and decision for customers, including the decision to undo a decision.


"The reality is that managing hybrid or multi-cloud compute can be incredibly challenging and complex," remarked Google vice president of engineering Eyal Manor, at Google Cloud Next 2019. "Today, 80 percent of workloads are still not in the cloud. There is a very real risk of being locked in, by investing too much in a single cloud."

Annual analysts' surveys are confirming what we've been suspecting for a few years now: For those enterprises that have deployed some of their workloads onto public cloud platforms, the vast majority use more than one platform simultaneously.

A multi-cloud strategy wouldn't make much sense for enterprises unless the workloads on those disparate platforms were capable of communicating and interoperating with one another. These are the features Google has been touting more heavily in recent months, especially with the rollout of a component called Anthos: awareness of, and adaptability to, multi-cloud deployments. If Google is to ensure a place for itself in the market over the next five years, it has to maintain its toehold in enterprises where a toehold may be all it has.

How Google could rebuild the channel in the cloud

The software sales channel of the 1980s, '90s, and early 2000s was a network of people who ensured the delivery of software and services to business customers. This was when information processing was a retail business, and sales were conducted more by live human beings than by warmed-over "content." What folks often forget was that the channel was a two-way street. It facilitated the publishing of software for developers, just as much as it made that software available to enterprises.

If the Amazon cloud model has a weakness, it's that AWS' many services lack a personal connection to their customers. Of course, one of the factors that makes cloud computing inexpensive is that it reduces the number of (salaried) people in the delivery formula. But Google may yet find an adequate replacement: a way for automation to fill the gap left behind when software development stopped being a retail business.

Anthos and the acknowledgement of multicloud

Last April at Google Cloud Next, the division's CEO, Thomas Kurian, unveiled Google's first multi-cloud deployment platform. Called Anthos, it encompasses not only hybrid cloud deployments (on-premises and off) but also AWS-based and Azure-based assets, managed collectively with GCP on a Google-based system.

Anthos was described during its unveiling as though it constructed a pool of resources from all three major clouds -- a union of GCP, AWS, and Azure infrastructures. That's not exactly accurate, nor even generally accurate. More to the point, Anthos is a deployment mechanism for containerized applications built around Kubernetes, the distributed orchestration system created within Google but released into the open source community and made independent. The starting point for that transition may be a variety of clouds, and in the future, the destination point may be a variety of clouds. But all routes lead through Google Kubernetes Engine.

Also: What is Kubernetes? How orchestration redefines the data center 

"We take workloads running on bare metal, on virtual machines -- whether it's on-premises or on a different cloud," explained Lucien Avramov, a Google Cloud product manager, during Google Cloud Next 2019, "and we take those workloads (Windows or Linux) over to GCP. When it's Linux, today, we actually move them into containers. So it's a one-step process to get you directly into the container journey, and to get you to run on GKE."

Among other key features, Anthos utilizes automation techniques to render the applications that are already deployed within first-generation virtual machines (for instance, in VMware vSphere) as containerized applications. This frees them from the constraints of being managed by an operating system environment that is still led to believe it's running a computer all to itself.

Those transformed containers, formerly VMs, reside in Google Kubernetes Engine. That doesn't sound very much like multi-cloud, at least not at this stage. But it is here, in the context of a container platform, that the application has the freedom to interact with containerized services from other platforms. There isn't really much sense in restricting such interaction, so Google decided instead to facilitate it.


"Most companies already have a multi-cloud and hybrid strategy," acknowledged Jennifer Lin, Google's engineering director, "but Anthos is the only platform that lets you actually achieve that."

Google's next step with the serverless model

Also at Google Cloud Next 2019, the company announced a streamlined deployment platform for containerized applications called Cloud Run. Its name is taken from the old "RUN" command that early microcomputer programmers used to stage and run programs from the command line.

Although GCP has offered so-called serverless functions to developers before, Cloud Run is Google's next step in serverless development -- meaning a more modern way to stage and run applications without having to consider the provisioning and management of servers. Essentially, the customer can pretend to forget the server exists.

Easily the most difficult part of running application code designed for the cloud is making the configuration work. Since developers build code locally (on their own systems), what they write must be as capable of running in the cloud as it is locally, with minimal (or preferably no) changes to the code itself.

In a serverless environment, the cloud configuration takes place completely in the background. Cloud Run's infrastructure is designed to adapt itself to the changing resource requirements of application code as it evolves. As Google developer advocate Bret McGowen remarked in a recent company video, "If you choose serverless, you don't have to worry about infrastructure. Your app scales up and down, and you only pay when it's running."

To enable the flexibility a Cloud Run app requires, it should utilize a web programming framework geared to "listen" for incoming HTTP requests on the port the platform designates for the container. Cloud Run does not require a framework specifically geared for use on its platform.
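To make that contract concrete, here is a minimal sketch of the kind of service Cloud Run expects, written with only Python's standard library (no framework at all). It assumes the conventional behavior of the platform passing the listening port to the container through a PORT environment variable, with 8080 as the customary local default; the handler and greeting text are illustrative, not any official sample.

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Answers every GET request with a plain-text body."""

    def do_GET(self):
        body = b"Hello from a container\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

def serve():
    # The platform hands the container its port through the PORT
    # environment variable; 8080 is a common default for local runs.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()

# serve()  # uncomment to run locally, then: curl http://localhost:8080/
```

Any web framework that binds to that port works the same way; the point is that the app speaks plain HTTP and lets the environment decide where it listens.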

So is Kubernetes a Google product or not?

Kubernetes is a containerized application orchestrator platform whose development is stewarded by an independent group called the Cloud Native Computing Foundation (CNCF), of which Google, Microsoft, and Amazon are members. Although Kubernetes was created by Google engineers who had worked on an internal application management system called Borg, Google no longer has control over Kubernetes.

Google maintains a sophisticated public relations team and a sharp marketing agenda, both strongly focused on Kubernetes' success, to help you realize and acknowledge Kubernetes' independence from Google. No binding ties exist, we are reminded, and often. Shudder to think that Google would dare use Kubernetes as a proxy for injecting Google technology into other companies' platforms, even if only to level the playing field between competitors.

"The massive resource requirements of our own cloud services," remarked Google CEO Pichai during the first seven minutes of his program, "with Search, Maps, and Gmail, demanded every ounce of computing power from our servers. But we needed to maintain flexibility to adapt to shifting user demands, and we really wanted it to be easy to shift between jobs. This led to our early experimentation with containers. And we developed our own internal cluster management tool, Borg. And as we developed Google Cloud, we wanted everyone to be able to use Borg for their computing needs."

Borg was released into open source, noted Pichai, and in the process became Kubernetes, "the industry standard for managing containers."

Put another way, Google built Borg into Kubernetes so that folks would deploy containerized workloads into Google Cloud. But let's steer clear, by all means, of the obvious conclusion. Kubernetes is an independently managed component, and we are advised not to think otherwise.

What's the outlook for Google Cloud Platform?

It is the dream scenario for any of the world's major tech companies: to be able not only to suggest the right solutions for customers at the right times in history, but also to define, and perhaps even create, the problems those solutions would address. Apple has been in this position several times since the 1970s. Google successfully created and implemented the only massively successful, reasonably dependable business model for distributed computing services that the web has ever seen. While many ponder whether social media can survive in its current state over the next three to five years, few dispute that advertising will be the driver of web computing for the foreseeable future. Google has no real challengers in this field -- zero.

But the cloud is not the web, nor is Google the undisputed leader in cloud computing. As the #3 player in this market in terms of revenue, by analysts' estimates, it's up to Google to challenge and shake up this market, by asserting new principles for cloud services management, and offering products that adhere to those principles in clever and, where applicable, ingenious ways.
