Microsoft is furthering its support of PyTorch and has detailed how PyTorch 1.2 can be used on the Azure platform. In a blog post this week, the company discussed how the latest version of the machine learning framework works in Azure and outlined how it plans to continue supporting the library.
PyTorch is a popular open-source machine learning framework for building and training deep learning models. Trained models are typically deployed for tasks such as natural language processing and computer vision.
Microsoft has offered full support for PyTorch in Azure since last year. Developers within the company contribute to the PyTorch community, and Microsoft gives customers access to the framework within its own AI services.
In the blog post, Microsoft says the library is now available in several of its services, among them Azure Machine Learning and Azure Notebooks. Minna Xiao, Program Manager II, Machine Learning Platform, explains how PyTorch works across these services:
- “Azure Machine Learning service – Azure Machine Learning streamlines the building, training, and deployment of machine learning models. Azure Machine Learning’s Python SDK has a dedicated PyTorch estimator that makes it easy to run PyTorch training scripts on any compute target you choose, whether it’s your local machine, a single virtual machine (VM) in Azure, or a GPU cluster in Azure. Learn how to train PyTorch deep learning models at scale with Azure Machine Learning.
- Azure Notebooks – Azure Notebooks provides a free, cloud-hosted Jupyter notebook server with PyTorch 1.2 pre-installed. To learn more, check out the PyTorch tutorials and examples.
- Data Science Virtual Machine – Data Science Virtual Machines are pre-configured with popular data science and deep learning tools, including PyTorch 1.2. You can choose a variety of machine types to host your Data Science Virtual Machine, including those with GPUs. To learn more, refer to the Data Science Virtual Machine documentation.”
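As an illustrative sketch of the estimator workflow described above, a training script can be submitted to Azure ML through the Python SDK's dedicated PyTorch estimator. Note this is not runnable as-is: it assumes an existing Azure ML workspace, and the compute target name, script path, and experiment name below are placeholders.

```python
from azureml.core import Workspace, Experiment
from azureml.train.dnn import PyTorch

# Connect to an existing Azure ML workspace; assumes a config.json
# downloaded from the Azure portal is present in the working directory.
ws = Workspace.from_config()

# The PyTorch estimator wraps a training script and its environment.
# "gpu-cluster" and "train.py" are placeholder names for illustration.
estimator = PyTorch(
    source_directory="./src",
    compute_target="gpu-cluster",
    entry_script="train.py",
    use_gpu=True,
)

# Submit the configured run to an experiment in the workspace.
run = Experiment(ws, "pytorch-training").submit(estimator)
run.wait_for_completion(show_output=True)
```

The same estimator object works unchanged whether the compute target is a single VM or a multi-node GPU cluster, which is the portability the blog post highlights.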
Microsoft has also revealed how it is working to improve PyTorch. Specifically, the company wants to make moving models from training to production more efficient, and recommends the Open Neural Network Exchange (ONNX) for the job.
ONNX defines an open standard format for AI models that works across frameworks.
Developed by Microsoft, Facebook, and Amazon, the platform is growing in popularity. By its full launch in 2017, Facebook said, several major tech companies had joined, among them AMD, ARM, IBM, Intel, Huawei, NVIDIA, and Qualcomm.
Numerous frameworks have already adopted the ecosystem. They include Microsoft’s own Cognitive Toolkit, as well as Caffe2, Apache MXNet, PyTorch, and NVIDIA’s TensorRT.
Microsoft has added the following ONNX features to PyTorch 1.2:
- Support for a wider range of PyTorch models, including object detection and segmentation models such as Mask R-CNN, Faster R-CNN, and SSD
- Support for models that work on variable length inputs
- Export models that can run on various versions of ONNX inference engines
- Optimization of models with constant folding
- End-to-end tutorial showing export of a PyTorch model to ONNX and running inference in ONNX Runtime