Google Cloud Platform continued its targeting of enterprise customers on Thursday, promoting recent investments and advancements in data centers, security, and container technology as part of its day two keynote at the GCP Next conference in San Francisco.
Google's Greg DeMichillie opened the keynote, explaining that Google viewed these three areas as "the most critical in the evolution of cloud."
DeMichillie then introduced data center head Joe Kava, who walked through Google's data center strategy. According to Kava, the core principles of Google's approach to data centers are availability, security, and performance.
Kava explained the company's 24x7x365 operations and its strong approach to both physical and virtual security. In terms of performance, he addressed Google's proprietary hardware designs, high-efficiency cooling, and full integration through the stack.
Kava's presentation slides also listed a new Open Compute Project (OCP) announcement: a rack specification that includes 48V power distribution and a new form factor to allow OCP racks to fit into Google data centers.
While discussing monitoring, Kava noted that human error is, on average, the root cause of 70% of failures. Google, he said, has a much lower incidence of human error, and it has caused zero downtime.
One of the most impassioned aspects of Kava's address was his talk on sustainability, calling out research firm Gartner for not including it as a measurement in their Magic Quadrant.
Niels Provos, a distinguished engineer at Google, then took the stage to discuss security at Google, noting in his presentation that "Trust is created through transparency, not just technology."
Provos started by explaining the eight base security layers that Google addresses as a whole, including OS and IPC, networking, application, deployment, and usage.
He then broke down how these layers affect the Google Cloud Platform and the key investments the company will be making in the future. First, Provos explained that a key management system that allows customers to manage their own keys, along with other additional capabilities, will be coming later in the year.
While Google Cloud Platform benefits from Google's existing investments in the security layers, he said, there will be some future investments in specific layers for Google Cloud Platform.
At the networking layer, the company is working on outbound firewalls. For applications, Provos said that Google is improving vulnerability scanning and will offer source code security capabilities to customers who store their source code with Google. At the deployment layer, Google is adding an unphishable hardware second factor. Finally, for usage, Provos said Google plans to provide strong authorization and additional app-use signals for application developers.
SEE: Google Cloud Platform signs up enterprise giants, how does it compare to AWS? (TechRepublic)
Containers were the final piece of the puzzle, addressed by Eric Brewer, a vice president of infrastructure at Google.
Brewer emphasized the growth of the cloud-native era, and how containers play into that growth. Brewer explained that Kubernetes, Google's container cluster manager, is 20 months old and is already in the top 0.01% of GitHub projects. He explained the difference between Kubernetes and Docker by saying that Docker makes containers accessible, while Kubernetes makes containers production ready.
Brewer also announced Kubernetes 1.2, even though it had been released a few days earlier, explaining that the latest version supports clusters of up to 1,000 nodes and 30,000 pods, as well as flexible autoscaling.
Kubernetes 1.2 also includes ingress rules for inbound connections and the ability to create pods and run them to completion as batch jobs. DaemonSet, another new feature, lets users run daemons on a set of cluster nodes, and the release adds a new API to manage deployments as well as the ConfigMap feature.
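To give a sense of what the DaemonSet feature looks like in practice, here is a minimal sketch of a 1.2-era manifest; the names (such as "log-agent" and the image path) are hypothetical examples, not details from the keynote.

```yaml
# Minimal DaemonSet sketch (Kubernetes 1.2-era extensions/v1beta1 API).
# A DaemonSet schedules one copy of this pod on each cluster node,
# which suits per-node daemons such as log collectors or monitors.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: log-agent            # hypothetical name for illustration
spec:
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: gcr.io/example/log-agent:1.0   # hypothetical image
```

Applying such a manifest with `kubectl create -f` would run the daemon across the selected cluster nodes without the user having to schedule each pod individually.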
Brewer closed by announcing Google Deployment Manager, a resource definition and management framework. Users will be able to use parameterized templates, create their own combinations of Google Cloud Platform primitives, and share them.