Google Cloud announced Monday that Cloud TPUs are available in beta on Google Cloud Platform.
Short for Tensor Processing Unit, TPUs are designed for machine learning and tailored for Google's open-source machine learning framework, TensorFlow. The specialized chips can provide 180 teraflops of processing to support training machine learning algorithms, and have been powering Google's data centers since 2015.
"We designed Cloud TPUs to deliver differentiated performance per dollar for targeted TensorFlow workloads and to enable ML engineers and researchers to iterate more quickly," Google wrote in a Cloud Platform blog post.
"Over time, we'll open-source additional model implementations. Adventurous ML experts may be able to optimize other TensorFlow models for Cloud TPUs on their own using the documentation and tools we provide."
Google said a limited quantity of Cloud TPUs is available today, with per-second billing at a rate of $6.50 per Cloud TPU per hour.
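Per-second billing means short jobs are charged only for the seconds they actually run. As a rough sketch, prorating the published $6.50 hourly rate (simple arithmetic, not Google's exact billing logic) looks like this:

```python
# Prorate the published Cloud TPU rate ($6.50 per TPU-hour) to per-second billing.
HOURLY_RATE_USD = 6.50
PER_SECOND_RATE = HOURLY_RATE_USD / 3600  # roughly $0.0018 per second

def tpu_cost(seconds, num_tpus=1):
    """Estimated charge for running `num_tpus` Cloud TPUs for `seconds` seconds."""
    return round(seconds * num_tpus * PER_SECOND_RATE, 2)

# A 10-minute training run on one Cloud TPU:
print(tpu_cost(600))  # → 1.08
```

So a researcher iterating in ten-minute bursts pays about a dollar per run rather than a full hourly increment.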
According to Google, GPUs in the Kubernetes Engine will help speed up compute-intensive applications like machine learning, image processing and financial modeling. Both the NVIDIA Tesla P100 and K80 GPUs are available as part of the beta, and V100s are said to be on the way.
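In Kubernetes Engine, a workload asks for these GPUs through Kubernetes' standard `nvidia.com/gpu` resource. A minimal pod spec sketch follows; the pod name, container image, and GPU count are illustrative assumptions, not details from the announcement:

```yaml
# Hypothetical pod requesting one NVIDIA GPU on Kubernetes Engine.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-demo                      # illustrative name
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-runtime    # assumed CUDA base image
    resources:
      limits:
        nvidia.com/gpu: 1             # request a single GPU (e.g., a Tesla K80 or P100)
```

The scheduler then places the pod only on a node that has an available GPU of the requested type.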
PREVIOUS AND RELATED COVERAGE
- Google's second-generation Tensor Processing Unit Pods can deliver 11.5 petaflops of calculations.
- Google shared details about the performance of the custom-built Tensor Processing Unit (TPU) chip, designed for machine learning.
- The Internet giant revealed it has built its own hardware that's been powering data centers for the past year with significant results.
READ MORE ON AI
- Intel announces self-learning AI chip Loihi
- Microsoft shows off Brainwave 'real-time AI' platform on FPGAs
- AI training needs a new chip architecture: Intel
- Apple developing dedicated AI chip for iPhone, iPad: Report
- Huawei unveils Kirin 970 chipset with AI
- Nvidia's AI creates fake celebrity photos so real it's scary (CNET)
- Microsoft's next HoloLens will have dedicated AI chip, boosting on-board processing (TechRepublic)