Short for Tensor Processing Unit, TPUs are designed for machine learning and tailored to Google's open-source machine learning framework, TensorFlow. The specialized chips can deliver 180 teraflops of processing for training machine learning models, and they have been powering Google's datacenters since 2015.
"We designed Cloud TPUs to deliver differentiated performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly," Google wrote in a Cloud Platform blog.
"Over time, we'll open-source additional model implementations. Adventurous ML experts may be able to optimize other TensorFlow models for Cloud TPUs on their own using the documentation and tools we provide."
Google said a limited quantity of TPUs is available today, with per-second billing at a rate of $6.50 per Cloud TPU per hour.
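Per-second billing at an hourly rate means a job is charged only for the seconds it actually runs. As a rough sketch (the helper name and rounding are illustrative assumptions; only the $6.50-per-TPU-hour rate comes from the announcement), the cost works out like this:

```python
# Illustrative cost estimate for per-second Cloud TPU billing.
# Only the $6.50/hour rate is from the announcement; the function
# name and rounding choices here are assumptions for the sketch.

TPU_HOURLY_RATE_USD = 6.50

def tpu_cost_usd(seconds: int, num_tpus: int = 1) -> float:
    """Estimated cost of running `num_tpus` Cloud TPUs for `seconds` seconds."""
    return num_tpus * seconds * TPU_HOURLY_RATE_USD / 3600

# A 90-minute training run on a single Cloud TPU:
print(round(tpu_cost_usd(90 * 60), 2))  # 9.75
```

So a short experiment that finishes in minutes costs cents rather than a full billed hour, which is the point of per-second granularity for iterative ML work.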
The company also announced that GPUs are available today in beta in the latest release of Kubernetes Engine.
According to Google, GPUs in the Kubernetes Engine will help speed up compute-intensive applications like machine learning, image processing and financial modeling. Both the NVIDIA Tesla P100 and K80 GPUs are available as part of the beta, and V100s are said to be on the way.
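In Kubernetes, workloads are scheduled onto GPU nodes by requesting the GPU resource in a pod spec. A minimal sketch might look like the following (the pod name and container image are illustrative assumptions; `nvidia.com/gpu` is the standard Kubernetes GPU resource name, and node-pool setup details are not covered here):

```yaml
# Sketch of a pod requesting one GPU in Kubernetes Engine.
# Names and image are illustrative, not from the announcement.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-example
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-base   # illustrative CUDA base image
    resources:
      limits:
        nvidia.com/gpu: 1         # request one GPU (e.g. a Tesla K80 or P100)
```

The scheduler then places the pod only on a node that has an unallocated GPU, which is how compute-intensive workloads like the ML and image-processing jobs Google mentions get access to the hardware.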