Facebook on Friday said it will open source deep-learning modules for the Torch artificial intelligence project, a move that could accelerate the technology and make it more accessible to developers and, ultimately, companies.
Facebook is contributing several modules. The biggest of these is GPU layer code that speeds up the training of convolutional networks, also known as ConvNets. Facebook explained:
Since improving training time of these models translates to faster R&D, we've spent considerable engineering effort to improve the GPU convolution layers. The work has produced notable results, achieving speedups of up to 23.5x compared to the fastest publicly available code. As far as we can tell, our code is faster than any other publicly available code when used to train popular architectures such as typical deep ConvNets for object recognition on the ImageNet data set.
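To see why convolution layers are the bottleneck worth optimizing, here is a minimal sketch of the operation a ConvNet layer computes, a sliding-window convolution (strictly, a cross-correlation, as deep-learning layers typically implement it). This is an illustrative pure-Python version, not Facebook's code; the function name and shapes are assumptions for the example. A GPU implementation evaluates enormous batches of exactly this arithmetic, so making it faster makes training faster.

```python
def conv2d_valid(image, kernel):
    # Naive 2D "valid" convolution over a single-channel image,
    # the core arithmetic inside a ConvNet's convolution layer.
    # (Hypothetical helper for illustration, not Facebook's API.)
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = ih - kh + 1, iw - kw + 1  # "valid" output size
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            s = 0.0
            for di in range(kh):
                for dj in range(kw):
                    s += image[i + di][j + dj] * kernel[di][dj]
            out[i][j] = s
    return out

# 5x5 "image" of increasing values, 3x3 box (averaging) kernel
image = [[float(r * 5 + c) for c in range(5)] for r in range(5)]
kernel = [[1.0 / 9.0] * 3 for _ in range(3)]
result = conv2d_valid(image, kernel)  # 3x3 grid of local averages
```

Real ConvNets run this over many input channels, many filters, and whole mini-batches at once, which is why moving the computation to optimized GPU kernels yields such large end-to-end training speedups.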
Facebook provided links to various papers about its technology, but the plain-English takeaway is that faster training of neural networks will now be widely available. That move should make Torch more relevant to more enterprises. Today, Torch is used by academic labs and by companies with a role in AI processing such as Google, Twitter, Nvidia, AMD and Intel, to name a few.
In the enterprise, companies such as IBM have touted AI as a way to improve analytics. While Watson is less artificial intelligence than cognitive computing, the upshot is that computers will learn and transform multiple industries. Facebook may have helped accelerate that AI movement a bit.