Back in September, Microsoft and Facebook partnered to create the Open Neural Network Exchange (ONNX) format, an open source format designed to give the AI framework ecosystem interoperability. Amazon Web Services (AWS) has now announced that it supports ONNX as well.
The world’s largest cloud service provider has released ONNX-MXNet, an open source Python package for importing ONNX deep learning models into Apache MXNet.
With ONNX support in MXNet, developers can build and train models in other ONNX-compatible frameworks, including Microsoft Cognitive Toolkit, Caffe2, and PyTorch, and then import those models into MXNet to run on the platform’s optimized, scalable engine.
Amazon Web Services also revealed it is working with Microsoft and Facebook to enhance and develop the ONNX format.
One of the benefits of ONNX is that it defines an extensible computation graph model for representing networks. Importantly, the format can also act as a standard: before its release, there was no common way to move AI models between frameworks.
ONNX offers that ability, creating a standard open platform for AI models that will work across frameworks. Its main features include:
- “Framework interoperability: Developers can more easily move between frameworks and use the best tool for the task at hand. Each framework is optimized for specific characteristics such as fast training, supporting flexible network architectures, inferencing on mobile devices, etc. Many times, the characteristic most important during research and development is different than the one most important for shipping to production. This leads to inefficiencies from not using the right framework or significant delays as developers convert models between frameworks. Frameworks that use the ONNX representation simplify this and enable developers to be more agile.
- Shared optimization: Hardware vendors and others with optimizations for improving the performance of neural networks can impact multiple frameworks at once by targeting the ONNX representation. Frequently optimizations need to be integrated separately into each framework which can be a time-consuming process. The ONNX representation makes it easier for optimizations to reach more developers.”