
'One machine learning model to rule them all': Google open-sources tools for simpler AI

Google is taking a modular approach to accelerating deep-learning research.
Written by Liam Tung, Contributing Writer

Google hopes its Tensor2Tensor library will help accelerate deep-learning research.

Image: Google

Google researchers have created what they call "one model to learn them all": a single deep-learning model that can be trained on a range of tasks using multiple types of training data.

The researchers, part of the AI-focused Google Brain Team, have packaged up the model along with other tools and modular components in their new Tensor2Tensor library, which they hope will help accelerate deep-learning research.

The framework promises to take some of the work out of customizing an environment to enable deep-learning models to work on various tasks.

As they note in a new paper called 'One model to learn them all', deep learning has had success in speech recognition, image classification and translation, but each model needs to be tuned specifically for the task at hand.

Also, models are often trained only on tasks from the same "domain": translation models, for instance, are trained alongside other translation tasks.

Together, these factors slow down deep-learning research, and they don't reflect how the human brain works, which is capable of taking lessons from one challenge and applying them to a new task.

The model the researchers created is trained on a variety of tasks, including image recognition, translation, image captioning, and speech recognition.

They claim the single model can concurrently learn a number of tasks from multiple domains, and that it is capable of transferring knowledge: it can learn from tasks with large amounts of training data and apply what it learns to tasks where data is limited.
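The paper's architecture pairs small modality-specific sub-networks with a large body of shared weights. The snippet below is not Google's MultiModel, just a minimal sketch of that shared-weights idea in TensorFlow's Keras API: an image task and a text task feed one shared body, so training on either task updates weights the other reuses. All layer sizes and names here are illustrative.

# Minimal sketch of multi-task learning with shared weights. This is NOT
# the MultiModel from "One model to learn them all"; it only illustrates
# how modality-specific encoders can feed one shared body of weights.
import numpy as np
import tensorflow as tf

REPR_DIM = 64  # assumed size of the shared representation space

# Modality-specific input networks map raw data into the shared space.
image_in = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(image_in)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
image_repr = tf.keras.layers.Dense(REPR_DIM, activation="relu")(x)

text_in = tf.keras.Input(shape=(None,), dtype="int32")
t = tf.keras.layers.Embedding(10000, 32)(text_in)
t = tf.keras.layers.GlobalAveragePooling1D()(t)
text_repr = tf.keras.layers.Dense(REPR_DIM, activation="relu")(t)

# One shared body processes both modalities, so gradients from either
# task update the same weights.
shared_body = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(REPR_DIM, activation="relu"),
])

# Task-specific output heads.
digit_out = tf.keras.layers.Dense(10, activation="softmax")(shared_body(image_repr))
topic_out = tf.keras.layers.Dense(5, activation="softmax")(shared_body(text_repr))

# Two trainable models that share the body's weights.
digit_model = tf.keras.Model(image_in, digit_out)
topic_model = tf.keras.Model(text_in, topic_out)
for m in (digit_model, topic_model):
    m.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Alternate training across tasks on dummy data, purely for illustration.
images = np.random.rand(32, 28, 28, 1).astype("float32")
digits = np.random.randint(0, 10, size=(32,))
tokens = np.random.randint(0, 10000, size=(32, 20))
topics = np.random.randint(0, 5, size=(32,))
digit_model.train_on_batch(images, digits)  # updates the shared body
topic_model.train_on_batch(tokens, topics)  # reuses those same weights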

The Tensor2Tensor library, which is maintained by Google Brain researchers and engineers, offers a set of open-source tools for training deep-learning models in TensorFlow. The library "strives to maximize idea bandwidth and minimize execution latency", according to its description on GitHub.

"T2T facilitates the creation of state-of-the art models for a wide variety of ML applications, such as translation, parsing, image captioning and more, enabling the exploration of various ideas much faster than previously possible," explains Łukasz Kaiser, a senior research scientist from Google Brain Team and lead author of the paper.

It also includes a library of datasets and models drawn from recent papers by Google Brain researchers.
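In practice, a training run is launched through the library's command-line trainer. Here is a hedged sketch, wrapped in Python: the flag, problem, and hyperparameter-set names below follow the project's GitHub README and may differ across library versions, and the paths are placeholders.

# Sketch of launching a T2T training run via the t2t-trainer binary that
# ships with the tensor2tensor package. Flag and problem names are taken
# from the project's README and may vary by version; paths are placeholders.
import subprocess

subprocess.run([
    "t2t-trainer",
    "--generate_data",                    # download and prepare the dataset
    "--data_dir=/tmp/t2t_data",           # placeholder path
    "--output_dir=/tmp/t2t_train",        # placeholder path
    "--problem=translate_ende_wmt32k",    # WMT English-German translation
    "--model=transformer",
    "--hparams_set=transformer_base_single_gpu",
], check=True)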

Kaiser has posted results of machine-translation benchmarks using BLEU, which show the best T2T model achieving state-of-the-art results with fewer GPUs and in much less time than previous models built without T2T.

"Notably, with T2T you can approach previous state-of-the-art results with a single GPU in one day," said Kaiser.

The library also contains relevant datasets, model architectures, optimizers, learning rate decay schemes, hyperparameters, and so forth, as well as a standard interface between these components.
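That standard interface is built around a central registry: according to the project's GitHub documentation, new models and hyperparameter sets are added with registration decorators. A minimal sketch, assuming the decorator and function names used in the repository (they may change between versions):

# Hedged sketch of registering a custom hyperparameter set with T2T's
# registry, based on the project's GitHub docs; names may vary by version.
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry

@registry.register_hparams
def transformer_my_experiment():
    """Custom hyperparameters built on the stock transformer_base set."""
    hparams = transformer.transformer_base()
    hparams.learning_rate = 0.05  # tweak a single knob for the experiment
    return hparams

Once registered, the new set can be selected by name, for example through the trainer's --hparams_set flag.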
