Google is also offering sustained use discounts on both the K80 and P100 GPUs: run a virtual machine for 50 percent of the month and you get an effective discount of 10 percent, rising to 30 percent if you run it for the entire month.
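Those figures follow Compute Engine's usual tiered sustained-use schedule, in which each successive quartile of the month is billed at a lower rate. A minimal sketch of the arithmetic (the 100/80/60/40 percent tier multipliers are the standard Compute Engine schedule, assumed here to apply to GPU usage as described):

```python
# Sketch of Compute Engine sustained-use discounts: each quartile of the
# month is billed at a declining rate (100%, 80%, 60%, 40% of base price).
# Assumption: the standard Compute Engine tier schedule applies to GPUs.

QUARTILE_MULTIPLIERS = [1.0, 0.8, 0.6, 0.4]

def effective_discount(usage_fraction: float) -> float:
    """Effective discount for running a VM `usage_fraction` of the month."""
    billed = 0.0
    remaining = usage_fraction
    for mult in QUARTILE_MULTIPLIERS:
        portion = min(remaining, 0.25)  # at most one quartile per tier
        billed += portion * mult
        remaining -= portion
        if remaining <= 0:
            break
    return 1.0 - billed / usage_fraction

print(round(effective_discount(0.5), 4))   # -> 0.1 (10% discount at half-month usage)
print(round(effective_discount(1.0), 4))   # -> 0.3 (30% discount at full-month usage)
```

Note how the two data points in the article (10 percent at half usage, 30 percent at full usage) both fall out of the same tier schedule.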
According to Google, using P100 GPUs can accelerate workloads by up to 10 times compared to using K80 GPUs.
The P100 and K80 GPUs will be offered in four regions worldwide.
One customer, music-recognition service Shazam, is impressed with the Nvidia GPU offerings on Google Cloud Platform.
"For certain tasks, [NVIDIA] GPUs are a cost-effective and high-performance alternative to traditional CPUs," says Ben Belchak, Head of Site Reliability Engineering at Shazam. "They work great with Shazam's core music recognition workload, in which we match snippets of user-recorded audio fingerprints against our catalog of over 40 million songs. We do that by taking the audio signatures of each and every song, compiling them into a custom database format and loading them into GPU memory. Whenever a user Shazams a song, our algorithm uses GPUs to search that database until it finds a match. This happens successfully over 20 million times per day."
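The matching step Belchak describes, searching a preloaded signature database for the closest entry, can be sketched in greatly simplified form. This is pure Python rather than GPU code, and the 16-bit fingerprints and Hamming-distance metric are illustrative assumptions, not Shazam's actual scheme:

```python
# Illustrative only: Shazam's real pipeline searches a custom database format
# in GPU memory. This shows the same idea at toy scale - match a query
# fingerprint against a catalog by minimum bit-level (Hamming) distance.
# The fingerprint encoding is an assumption made for the example.

CATALOG = {
    "song_a": 0b1010110011010101,
    "song_b": 0b1110000111001100,
    "song_c": 0b0001011100101110,
}

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def best_match(query: int) -> str:
    """Return the catalog entry closest to the query fingerprint."""
    return min(CATALOG, key=lambda song: hamming(query, CATALOG[song]))

print(best_match(0b1010110011010101))  # -> song_a (exact match)
print(best_match(0b1110000111001101))  # -> song_b (one bit of noise)
```

On a GPU the same search is run massively in parallel across the catalog, which is what makes matching against 40 million songs tractable at 20 million queries per day.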
According to Google, Cloud GPUs provide an unparalleled combination of flexibility, performance and cost savings compared to traditional solutions:
Flexibility: Google's custom VM shapes and incremental Cloud GPUs provide the ultimate amount of flexibility. Customize the CPU, memory, disk and GPU configuration to best match your needs.
Fast performance: Cloud GPUs are offered in passthrough mode to provide bare-metal performance. Attach up to 4 P100s or 8 K80s per VM (we offer up to 4 K80 boards, each with 2 GPUs per board). For higher disk performance, optionally attach up to 3TB of Local SSD to any GPU VM.
Low cost: With Cloud GPUs you get the same per-minute billing and Sustained Use Discounts that you do for the rest of GCP's resources. Pay only for what you need!
Cloud integration: Cloud GPUs are available at all levels of the stack. For infrastructure, Compute Engine and Google Container Engine let you run your GPU workloads with VMs or containers. For machine learning, Cloud Machine Learning can optionally use GPUs to reduce the time it takes to train your models at scale with TensorFlow.