Google brings distributed computing to TensorFlow machine learning system

TensorFlow 0.8 adds distributed computing support to speed up the learning process for Google's machine learning system.
Written by Larry Dignan, Contributor

With the TensorFlow 0.8 update, Google's TensorFlow machine learning system can now be distributed across multiple machines.

Inside Google, the machine learning software already runs across hundreds of machines to speed up the training process. Distributed TensorFlow was one of the features most requested by users.

Google open sourced TensorFlow in November. Since then, TensorFlow has been the most forked project on GitHub for 2015 and is the most popular machine learning framework on the site.

In a blog post, Google noted that TensorFlow 0.8 should cut training time for some models from weeks to hours.

TensorFlow 0.8 handles distribution via the high-performance gRPC library and is designed to work in tandem with Google Cloud Machine Learning.

Google has also published a distributed trainer to accelerate the learning process. The 0.8 release includes Python libraries for defining distributed models.
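The idea behind a distributed trainer is data parallelism: each worker machine computes gradients on its own shard of the training data, and the results are averaged into a single model update. A toy sketch of that idea in plain Python (this is an illustration only, not TensorFlow's actual API):

```python
# Toy illustration of data-parallel training, the idea behind a
# distributed trainer. Plain Python; not TensorFlow code.

def gradient(w, batch):
    # Gradient of mean squared error for a 1-D model y = w * x.
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def distributed_step(w, shards, lr=0.1):
    # Each "worker" computes a gradient on its own data shard;
    # the gradients are averaged and applied as one update.
    grads = [gradient(w, shard) for shard in shards]
    avg = sum(grads) / len(grads)
    return w - lr * avg

# Data drawn from y = 2x, split across two hypothetical workers.
shards = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0), (4.0, 8.0)]]
w = 0.0
for _ in range(50):
    w = distributed_step(w, shards)
print(round(w, 3))  # converges toward the true weight, 2.0
```

In a real deployment, the averaging step happens over the network (in TensorFlow 0.8's case, via gRPC) rather than in a local loop, but the division of work is the same.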
