Nvidia makes Kubernetes on GPUs available

At the CVPR (Computer Vision and Pattern Recognition) conference on Tuesday, Nvidia announced new deep learning tools for both researchers and developers, including a release candidate version of Kubernetes on Nvidia GPUs that's available to developers for feedback and testing.
Read also: Kubernetes 1.10: Improving storage, security, and networking
Kubernetes on Nvidia GPUs lets developers and DevOps engineers build and deploy GPU-accelerated deep learning training or inference applications on multi-cloud GPU clusters, at scale. It enables the automation of deployment, maintenance, scheduling and operation of GPU-accelerated application containers. This should help developers handle the growing number of AI-powered applications and services, Nvidia noted.
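To make the idea concrete, here is a minimal sketch of how a container requests GPUs in a standard Kubernetes pod spec via the `nvidia.com/gpu` resource exposed by Nvidia's device plugin. The pod name and container image are illustrative examples, and details of Nvidia's release candidate may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-training-job          # hypothetical pod name
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/tensorflow:18.05-py3   # example NGC container image
    resources:
      limits:
        nvidia.com/gpu: 2          # request two GPUs from the Nvidia device plugin
```

The scheduler then places the pod only on nodes with two free GPUs, which is what lets the same manifest run unchanged across multi-cloud GPU clusters.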
Additionally, Nvidia announced the general availability of TensorRT 4, the latest version of its deep learning inference software, which optimizes runtime performance. Integrated with TensorFlow, it speeds up inference for speech, audio, and recommender apps. While the software was in beta, engineers using TensorRT 4 set two new inference speed records, Nvidia noted: one in latency and one in throughput.
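Those two records measure opposing goals: batching requests raises throughput but makes each request wait longer. A framework-free sketch of that tradeoff, with purely illustrative numbers rather than TensorRT measurements:

```python
def serving_stats(batch_size, per_item_ms=0.5, fixed_overhead_ms=2.0):
    """Toy model of batched inference: each batch pays a fixed launch
    overhead plus a per-item cost. Illustrative only -- not TensorRT."""
    batch_time_ms = fixed_overhead_ms + per_item_ms * batch_size
    throughput = batch_size / (batch_time_ms / 1000.0)  # items per second
    latency_ms = batch_time_ms  # time until the whole batch is done
    return throughput, latency_ms

# Larger batches amortize the fixed overhead, boosting throughput
# at the cost of higher per-request latency.
t1, l1 = serving_stats(1)    # single-item batches: low latency
t64, l64 = serving_stats(64) # large batches: high throughput
```

An inference optimizer has to win on both axes at once, which is why separate latency and throughput records are worth noting.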
Nvidia is also using the conference to demonstrate an early release of Apex, an open source extension that helps researchers using PyTorch maximize deep learning training performance on Volta GPUs. Now available in beta on GitHub, it provides easy-to-use, automatic mixed-precision training.
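Mixed-precision training runs much of the math in half precision, and the standard trick it depends on is loss scaling: small gradients underflow FP16's limited range unless the loss is multiplied up first. A minimal, framework-free illustration of that idea (the cast function and numbers are toy stand-ins, not the Apex API):

```python
FP16_MIN_NORMAL = 6.1e-5  # approx smallest normal float16 magnitude

def to_fp16(x):
    """Crude stand-in for a float16 cast: flush tiny magnitudes to zero.
    Real FP16 has subnormals; this only illustrates underflow."""
    return 0.0 if abs(x) < FP16_MIN_NORMAL else x

grad = 1e-6                     # a small gradient from backprop
lost = to_fp16(grad)            # underflows to zero -- the update is lost

scale = 1024.0                  # loss scaling multiplies the loss (and hence
scaled = to_fp16(grad * scale)  # all gradients) before the half-precision cast
recovered = scaled / scale      # unscale in full precision before the update
```

Because the scale factor is a power of two, unscaling recovers the gradient exactly; automating the choice and adjustment of that factor is the part tools like Apex take off the researcher's hands.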
Read also: How to quickly install Kubernetes on Ubuntu (TechRepublic)
Lastly, Nvidia is releasing Nvidia DALI, an open source library researchers can use to optimize the data pipelines of deep learning frameworks. Nvidia's data scientists used it to tune DGX-2, the company's AI supercomputer, achieving a record 15,000 images per second in training.
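The problem DALI targets is keeping GPUs fed when image decoding and augmentation on the CPU become the bottleneck. The general remedy is to overlap input preparation with training; a sketch of that pipelining idea using a background-thread prefetcher in plain Python (this is the generic pattern, not DALI's API):

```python
import queue
import threading

def prefetch(source, depth=2):
    """Run a (possibly slow) data source in a background thread so that
    preprocessing overlaps with consumption -- the idea behind pipelined
    input libraries like DALI (not DALI's actual API)."""
    q = queue.Queue(maxsize=depth)  # bounded queue applies backpressure
    DONE = object()                 # sentinel marking end of the stream

    def worker():
        for item in source:
            q.put(item)
        q.put(DONE)

    threading.Thread(target=worker, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item

# Usage: wrap any iterable of preprocessed batches.
batches = ([i, i + 1] for i in range(3))
result = list(prefetch(batches))
```

While the consumer works on one batch, the worker thread is already preparing the next; DALI pushes the same overlap further by running the preprocessing itself on the GPU.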