Underneath your Facebook baby photos and political arguments, there's an innovative hardware and software platform. While Facebook is best known in infrastructure circles for the Open Compute Project, its open-source server hardware effort, the company does far more than rationalize data centers. Facebook has also open-sourced its machine learning (ML) and artificial intelligence (AI) framework, PyTorch. Now, the project has matured to the point that a beta of PyTorch 1.0 has been released.
The Python-based PyTorch 1.0 provides developers with the power to seamlessly move from research to production in a single framework. PyTorch 1.0 integrates PyTorch's research-oriented aspects with the modular, production-focused capabilities of Caffe2, a popular deep learning framework, and ONNX (Open Neural Network Exchange), an open format to represent deep learning models.
This isn't just good ideas and a half-baked open-source project. PyTorch is already used in many of Facebook's products. The best-known example of PyTorch in action is how Facebook now uses neural networks to perform six billion translations a day.
Since Facebook open-sourced PyTorch, the project has gained many supporters. With deeper cloud service support from Amazon Web Services (AWS), Google Cloud, and Microsoft Azure, and tighter integration with technology providers ARM, Intel, IBM, NVIDIA, and Qualcomm, developers can easily deploy PyTorch's ecosystem of compatible software, hardware, and developer tools.
For example, Amazon SageMaker, AWS's fully managed platform for training and deploying ML models at scale, now provides preconfigured PyTorch 1.0 environments, which include automatic model tuning. Simultaneously, Google Cloud Platform's Deep Learning VM has a new PyTorch 1.0 VM image, which comes with NVIDIA drivers and tutorials preinstalled. Google also offers cloud tensor processing units (TPUs), custom-developed application-specific integrated circuits (ASICs) for machine learning.
And, finally, the Microsoft Azure Machine Learning service now allows developers to seamlessly move from training PyTorch models on a local server to scaling out on the Azure cloud.
In this new release, PyTorch boasts a new hybrid front end, which enables tracing and scripting models from eager mode into graph mode, bridging the gap between exploration and production deployment. It also has a revamped torch.distributed library that allows for faster training across Python and C++ environments, and a beta eager-mode C++ interface for performance-critical research.
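The hybrid front end mentioned above can be sketched with `torch.jit.trace`, which records the operations an eager-mode model performs on an example input and produces a graph-mode module. The model below is illustrative, not from the release itself:

```python
# A minimal sketch of the hybrid front end: trace an eager-mode model
# into a graph-mode ScriptModule. The classifier here is a toy example.
import torch
import torch.nn as nn


class Classifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 3)

    def forward(self, x):
        return torch.softmax(self.fc(x), dim=1)


model = Classifier().eval()
example = torch.randn(2, 10)

# torch.jit.trace runs the model once on the example input and records
# the executed operations as a static graph.
traced = torch.jit.trace(model, example)

# The traced module is callable like the original, and it can be saved
# for deployment (for instance, to be loaded from the C++ API).
out = traced(example)
traced.save("classifier.pt")
```

Tracing works well for models whose control flow doesn't depend on the input data; for data-dependent control flow, the scripting path (`torch.jit.script`) is the intended alternative.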
This just touches the surface of PyTorch's new features and its broad acceptance and integration by other technology companies. This AI/ML framework is being adopted rapidly by almost everyone working in the AI space.