From 5ec61fbb5d97a21d7ac9bcfb9381e7ac4bcf5ce6 Mon Sep 17 00:00:00 2001 From: zha0q1 Date: Thu, 13 May 2021 19:55:21 +0000 Subject: [PATCH 1/6] update readme --- python/mxnet/onnx/README.md | 27 +++++++++++++++++---------- 1 file changed, 17 insertions(+), 10 deletions(-) diff --git a/python/mxnet/onnx/README.md b/python/mxnet/onnx/README.md index 118af59372b8..69c8810c87e3 100644 --- a/python/mxnet/onnx/README.md +++ b/python/mxnet/onnx/README.md @@ -17,25 +17,25 @@ # ONNX Export Support for MXNet ### Overview -[ONNX](https://onnx.ai/), or Open Neural Network Exchange, is an open source deep learning model format that acts as a framework neutral graph representation between DL frameworks or between training and inference. With the ability to export models to the ONNX format, MXNet users can enjoy faster inference and a wider range of deployment device choices, including edge and mobile devices where MXNet installation may be constrained. Popular hardware-accelerated and/or cross-platform ONNX runtime frameworks include Nvidia [TensorRT](https://github.com/onnx/onnx-tensorrt), Microsoft [ONNXRuntime](https://github.com/microsoft/onnxruntime), Apple [CoreML](https://github.com/onnx/onnx-coreml) and [TVM](https://tvm.apache.org/docs/tutorials/frontend/from_onnx.html), etc. +[ONNX](https://onnx.ai/), or Open Neural Network Exchange, is an open source deep learning model format that acts as a framework neutral graph representation between DL frameworks or between training and inference. With the ability to export models to the ONNX format, MXNet users can enjoy faster inference and a wider range of deployment device choices, including edge and mobile devices where MXNet installation may be constrained. Popular hardware-accelerated and/or cross-platform ONNX runtime frameworks include Nvidia [TensorRT](https://github.com/onnx/onnx-tensorrt), Microsoft [ONNXRuntime](https://github.com/microsoft/onnxruntime), Apple [CoreML](https://github.com/onnx/onnx-coreml), etc. ### ONNX Versions Supported -ONNX 1.7 -- Fully Supported -ONNX 1.8 -- Work in Progress +ONNX 1.7 & 1.8 ### Installation -From the 1.9 release and on, the ONNX export module has become an offical, built-in module in MXNet. You can access the module at `mxnet.onnx`. +From MXNet 1.9 release and on, the ONNX export module has become an offical, built-in feature in MXNet. You can access the module at `mxnet.onnx`. If you are a user of earlier MXNet versions and do not want to upgrade MXNet, you can still enjoy the latest ONNX suppor by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should be able to access the module with `import mx2onnx` then. ### APIs +The main API is `export_model`, which, as the name suggests, exports an MXNet model to the ONNX format. + ```python mxnet.onnx.export_model(sym, params, in_shapes=None, in_types=np.float32, onnx_file_path='model.onnx', verbose=False, dynamic=False, dynamic_input_shapes=None, run_shape_inference=False, input_type=None, input_shape=None) ``` -Exports the MXNet model file into ONNX model. Parameters: @@ -70,19 +70,22 @@ Returns: Onnx file path #### Model with Multiple Input -When the model has multiple input, all the input shapes and dtypes should be provided with `in_shapes` and `in_dtypes`. 
Note that the shape/dtype in `in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol file. If `in_dtypes` is provided as a single data type, the type will be applied to all input nodes. +When the model has multiple input, all the input shapes and dtypes must be provided with `in_shapes` and `in_dtypes`. Note that the shape/dtype in `in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol file. If `in_dtypes` is provided as a single data type, then that type will be applied to all input nodes. #### Dynamic Shape Input -By setting up optional flags in export_model API, users have the control of partially/fully dynamic shape input export. The flag `dynamic` is set to switch on dynamic shape input export, and `dynamic_input_shapes` is used to specify which dimensions are dynamic `None` or any string variable can be used to represent a dynamic shape dimension. +We can set `dynamic=True` to turn on support for dynamic input shapes. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx graph, e.g. `(1, 3, 'Height', 'Width')` ```python -# The first input dimension will be dynamic in this case +# The batch dimension will be dynamic in this case +in_shapes = [(1, 3, 224, 224)] dynamic_input_shapes = [(None, 3, 224, 224)] mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, dynamic=True, dynamic_input_shapes=dynamic_input_shapes) ``` ### Operator Support Matrix +We have implemented export logics for a wide range of MXNet operators, and thus supported most CV and NLP use cases. Below is our most up-to-date operator support matrix. + |MXNet Op|ONNX Version| |:-|:-:| |Activation|1.7 1.8 | @@ -234,7 +237,9 @@ mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, |where|1.7 1.8 | |zeros_like|1.7 1.8 | -### [GluonCV Pretrained Model Support Matrix](https://cv.gluon.ai/model_zoo/index.html) +### [GluonCV](https://cv.gluon.ai/model_zoo/index.html) Pretrained Model Support Matrix +GluonCV is a popular CV toolkit built on top of MXNet. Below is the model support matrix for GluonCV (v0.10.0) models. + |Image Classification| |:-| |alexnet| @@ -391,7 +396,9 @@ mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, |inceptionv3_kinetics400| |inceptionv3_ucf101| -### [GluonNLP Pretrained Model Support Matrix](https://nlp.gluon.ai/model_zoo/catalog.html) +### [GluonNLP](https://nlp.gluon.ai/model_zoo/catalog.html) Pretrained Model Support Matrix] +GluonNLP is a popular NLP toolkit built on top of MXNet. Below is the model support matrix for GluonNLP (v0.10.0) models. 
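Many of the models below take more than one input (for example token ids, token types, and a valid-length array), so they go through the multiple-input path described in the APIs section above. A minimal sketch of such an export, with hypothetical file names and shapes (the support matrix itself follows the example):

```python
import numpy as np
import mxnet as mx

# Hypothetical GluonNLP-style model with three inputs; substitute your own files.
sym = './bert_model-symbol.json'
params = './bert_model-0000.params'

# One shape per input, in the same order as the inputs appear in the symbol file.
in_shapes = [(1, 128), (1, 128), (1,)]
# A single dtype is applied to every input; a per-input list also works.
in_types = np.float32

mx.onnx.export_model(sym, params, in_shapes, in_types, 'bert_model.onnx')
```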
+ |NLP Models| |:-| |awd_lstm_lm_600| From 114ebe886bab73f0025ce7fa9bf10b0ffa4590aa Mon Sep 17 00:00:00 2001 From: zha0q1 Date: Thu, 13 May 2021 22:07:47 +0000 Subject: [PATCH 2/6] revise tutorial --- .../python/tutorials/deploy/export/onnx.md | 57 +++++++++---------- python/mxnet/onnx/README.md | 2 +- 2 files changed, 29 insertions(+), 30 deletions(-) diff --git a/docs/python_docs/python/tutorials/deploy/export/onnx.md b/docs/python_docs/python/tutorials/deploy/export/onnx.md index c727b7e70d44..acb6100524d6 100644 --- a/docs/python_docs/python/tutorials/deploy/export/onnx.md +++ b/docs/python_docs/python/tutorials/deploy/export/onnx.md @@ -17,17 +17,17 @@ # Exporting to ONNX format -[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. MXNet-ONNX export coverage and features are updated since MXNet 1.9.0. Visit the [ONNX operator coverage](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#operator-support-matrix) page for the latest information. +[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. The MXNet-to-ONNX export module (mx2onnx) has been updated with new features such as dynamic input shapes and better operator and model coverages in the MXNet 1.9 release. Please visit the [ONNX Export Support for MXNet](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#onnx-export-support-for-mxnet) page for more information. -In this tutorial, we will learn how to use MXNet to ONNX exporter on pre-trained models. +In this tutorial, we will learn how to use the mx2onnx exporter on pre-trained models. ## Prerequisites -To run the tutorial you will need to have installed the following python modules: -- [MXNet >= 1.6.0](/get_started) +To run the tutorial we will need to have installed the following python modules: +- [MXNet >= 1.9.0](/get_started) _OR_ an earlier mxnet version + [the mx2onnx wheel](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#installation) - [onnx >= 1.7.0](https://github.com/onnx/onnx#installation) -*Note:* MXNet-ONNX exporter works with ONNX opset version later than 12, which comes with ONNX v1.7.0 +*Note:* The latest mx2onnx exporting module is tested with ONNX op set version 12 or later, which corresponds to ONNX version 1.7 or later. Use of ealier ONNX versions may still work on some simple models, but again this is not tested. ```python @@ -37,10 +37,10 @@ import logging logging.basicConfig(level=logging.INFO) ``` -## Downloading a model from the MXNet model zoo +## Download a model from the MXNet model zoo -We download the pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) model from the [MXNet Model Zoo](/api/python/docs/api/gluon/model_zoo/index.html). -We will also download synset file to match labels. +We can download a pre-trained ResNet-18 [ImageNet](http://www.image-net.org/) model from the [MXNet Model Zoo](/api/python/docs/api/gluon/model_zoo/index.html). +We will also download a synset file to match the labels. ```python # Download pre-trained resnet model - json and params by running following code. 
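# Note: the files are saved to the current working directory as
# resnet-18-symbol.json, resnet-18-0000.params and synset.txt,
# which is where the rest of this tutorial expects to find them.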
@@ -50,11 +50,9 @@ path='http://data.mxnet.io/models/imagenet/' mx.test_utils.download(path+'synset.txt')] ``` -Now, we have downloaded ResNet-18 symbol, params and synset file on the disk. +## MXNet to ONNX exporter (mx2onnx) API -## MXNet to ONNX exporter API - -Let us describe the MXNet's `export_model` API. +Now let's check MXNet's `export_model` API. ```python help(mx.onnx.export_model) @@ -108,27 +106,26 @@ export_model(sym, params, in_shapes=None, in_types=, onnx This method is available when you ``import mxnet.onnx`` ``` -`export_model` API can accept the MXNet model in one of the following ways. +`export_model` API can accept a MXNet model in one of the following ways. 1. MXNet's exported json and params files: * This is useful if we have pre-trained models and we want to convert them to ONNX format. 2. MXNet sym, params objects: - * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide sym and params objects as inputs with other attributes to save the model in ONNX format. The params can be either a single object that contains both argument and auxiliary parameters, or a list that includes arg_parmas and aux_params objects - + * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide the sym and params objects as inputs to save the model in ONNX format. The params can be either a single object that contains both argument and auxiliary parameters, or a list that includes arg_parmas and aux_params objects -Since we have downloaded pre-trained model files, we will use the `export_model` API by passing the path for symbol and params files. +Since we have downloaded pre-trained model files, we will use the `export_model` API by passing in the paths of the symbol and params files. -## How to use MXNet to ONNX exporter API +## Use mx2onnx to eport the model -We will use the downloaded pre-trained model files (sym, params) and define input variables. +We will use the downloaded pre-trained model files (sym, params) and define a few extra parameters. ```python # Downloaded input symbol and params files sym = './resnet-18-symbol.json' params = './resnet-18-0000.params' -# Standard Imagenet input - 3 channels, 224*224 -input_shape = [(1,3,224,224)] +# Standard Imagenet input - 3 channels, 224 * 224 +input_shape = [(1, 3, 224, 224)] input_dtypes = [np.float32] # Path of the output file @@ -142,10 +139,10 @@ We have defined the input parameters required for the `export_model` API. Now, w converted_model_path = mx.onnx.export_model(sym, params, input_shape, input_dtypes, onnx_file) ``` -This API returns path of the converted model which you can later use to import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API. +This API returns the path of the converted model which you can later use to run inference with or import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API. -### Dynamic Shape Input -MXNet to ONNX export also supports dynamic input shapes. By setting up optional flags in `export_model`, users have the control of partially/fully dynamic shape input export. 
For example, setting the batch dimension to dynamic enables dynamic batching inference; setting the width and height dimension to dynamic allows inference on images with different shapes. Below is a code example for dynamic shape on batch dimension. The flag `dynamic` is set to switch on dynamic shape input export, and `dynamic_input_shapes` is used to specify which dimensions are dynamic. `None` or any string variable can be used to represent a dynamic shape dimension. +## Dynamic input shapes +The mx2onnx module also supports dynamic input shapes. We can set `dynamic=True` to turn it on. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx graph, e.g. `(1, 3, 'Height', 'Width')` ```python # The first input dimension will be dynamic in this case @@ -154,21 +151,23 @@ mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, dynamic=True, dynamic_input_shapes=dynamic_input_shapes) ``` -## Check validity of ONNX model +## Validate the exported ONNX model -Now we can check validity of the converted ONNX model by using ONNX checker tool. The tool will validate the model by checking if the content contains valid protobuf: +Now that we have the converted model, we can validate its correctness with the ONNX checker tool. ```python from onnx import checker import onnx -# Load onnx model +# Load the ONNX model model_proto = onnx.load_model(converted_model_path) -# Check if converted ONNX protobuf is valid +# Check if the converted ONNX protobuf is valid checker.check_graph(model_proto.graph) ``` -If the converted protobuf format doesn't qualify to ONNX proto specifications, the checker will throw errors, but in this case it successfully passes. +Now that the model passes the check (hopefully :)), we can run it with inference frameworks or import it into other deep learning frameworks! + +## Simplify the exported ONNX model -This method confirms exported model protobuf is valid. Now, the model is ready to be imported in other frameworks for inference! Users may consider to further optimize the ONNX model file using various tools such as [onnx-simplifier](https://github.com/daquexian/onnx-simplifier). +Okay, we already have the exporeted ONNX model now, but it may not be the end of the story. Due to the differences in MXNet's and ONNX's operator specifications, sometimes helper operartors/nodes will need to be created to help construct the ONNX graph from the MXNet blueprint. In that sense, we recommend our users to checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which can greatly simply the exported ONNX model by techniques such as constant folding, operator fussion and more. diff --git a/python/mxnet/onnx/README.md b/python/mxnet/onnx/README.md index 69c8810c87e3..5380f924a8ba 100644 --- a/python/mxnet/onnx/README.md +++ b/python/mxnet/onnx/README.md @@ -25,7 +25,7 @@ ONNX 1.7 & 1.8 ### Installation From MXNet 1.9 release and on, the ONNX export module has become an offical, built-in feature in MXNet. You can access the module at `mxnet.onnx`. 
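As a quick sanity check (a minimal sketch, assuming an MXNet 1.9+ build is installed), the built-in module can be reached straight from the top-level package:

```python
import mxnet as mx

# The exporter ships inside MXNet 1.9+, so no extra package is needed here.
print(mx.__version__)        # expect 1.9.0 or newer
print(mx.onnx.export_model)  # the main export API documented below
```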
-If you are a user of earlier MXNet versions and do not want to upgrade MXNet, you can still enjoy the latest ONNX suppor by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should be able to access the module with `import mx2onnx` then. +If you are a user of earlier MXNet versions and do not want to upgrade MXNet, you can still enjoy the latest ONNX suppor by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should can then access the module with `import mx2onnx`. The `mx2onnx` namespace is equivalent to `mxnet.onnx`. ### APIs The main API is `export_model`, which, as the name suggests, exports an MXNet model to the ONNX format. From 9b6d15e070b1b7487bdd06ae9d2702c39b221efd Mon Sep 17 00:00:00 2001 From: zha0q1 Date: Thu, 13 May 2021 22:24:17 +0000 Subject: [PATCH 3/6] fixes --- .../python/tutorials/deploy/export/onnx.md | 18 +++++++++--------- example/onnx/cv_model_inference.py | 6 +++--- python/mxnet/onnx/README.md | 4 ++-- python/mxnet/onnx/mx2onnx/_export_model.py | 2 +- 4 files changed, 15 insertions(+), 15 deletions(-) diff --git a/docs/python_docs/python/tutorials/deploy/export/onnx.md b/docs/python_docs/python/tutorials/deploy/export/onnx.md index acb6100524d6..8309d9b950d4 100644 --- a/docs/python_docs/python/tutorials/deploy/export/onnx.md +++ b/docs/python_docs/python/tutorials/deploy/export/onnx.md @@ -17,14 +17,14 @@ # Exporting to ONNX format -[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. The MXNet-to-ONNX export module (mx2onnx) has been updated with new features such as dynamic input shapes and better operator and model coverages in the MXNet 1.9 release. Please visit the [ONNX Export Support for MXNet](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#onnx-export-support-for-mxnet) page for more information. +[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. In the MXNet 1.9 release, the MXNet-to-ONNX export module (mx2onnx) has received a major update with new features such as dynamic input shapes and better operator and model coverages. Please visit the [ONNX Export Support for MXNet](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#onnx-export-support-for-mxnet) page for more information. In this tutorial, we will learn how to use the mx2onnx exporter on pre-trained models. 
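Before diving into the individual steps, here is the whole flow condensed into one sketch (placeholder file names; the tutorial below walks through a real downloaded ResNet-18 instead):

```python
import numpy as np
import mxnet as mx

# Placeholder paths for a trained MXNet model saved as json + params files.
sym, params = './model-symbol.json', './model-0000.params'

onnx_path = mx.onnx.export_model(
    sym, params,            # the MXNet model to convert
    [(1, 3, 224, 224)],     # one shape tuple per model input
    [np.float32],           # matching input dtypes
    './model.onnx')         # where the ONNX file is written
```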
## Prerequisites To run the tutorial we will need to have installed the following python modules: -- [MXNet >= 1.9.0](/get_started) _OR_ an earlier mxnet version + [the mx2onnx wheel](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#installation) +- [MXNet >= 1.9.0](/get_started) _OR_ an earlier MXNet version + [the mx2onnx wheel](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#installation) - [onnx >= 1.7.0](https://github.com/onnx/onnx#installation) *Note:* The latest mx2onnx exporting module is tested with ONNX op set version 12 or later, which corresponds to ONNX version 1.7 or later. Use of ealier ONNX versions may still work on some simple models, but again this is not tested. @@ -93,7 +93,7 @@ export_model(sym, params, in_shapes=None, in_types=, onnx If True will run shape inference on the model input_type : data type or list of data types This is the old name of in_types. We keep this parameter name for backward compatibility - in_shapes : List of tuple + input_shape : List of tuple This is the old name of in_shapes. We keep this parameter name for backward compatibility Returns @@ -106,7 +106,7 @@ export_model(sym, params, in_shapes=None, in_types=, onnx This method is available when you ``import mxnet.onnx`` ``` -`export_model` API can accept a MXNet model in one of the following ways. +The `export_model` API can accept a MXNet model in one of the following ways. 1. MXNet's exported json and params files: * This is useful if we have pre-trained models and we want to convert them to ONNX format. @@ -117,7 +117,7 @@ Since we have downloaded pre-trained model files, we will use the `export_model` ## Use mx2onnx to eport the model -We will use the downloaded pre-trained model files (sym, params) and define a few extra parameters. +We will use the downloaded pre-trained model files (sym, params) and define a few more parameters. ```python # Downloaded input symbol and params files @@ -125,8 +125,8 @@ sym = './resnet-18-symbol.json' params = './resnet-18-0000.params' # Standard Imagenet input - 3 channels, 224 * 224 -input_shape = [(1, 3, 224, 224)] -input_dtypes = [np.float32] +in_shapes = [(1, 3, 224, 224)] +in_types = [np.float32] # Path of the output file onnx_file = './mxnet_exported_resnet18.onnx' @@ -136,7 +136,7 @@ We have defined the input parameters required for the `export_model` API. Now, w ```python # Invoke export model API. It returns path of the converted onnx model -converted_model_path = mx.onnx.export_model(sym, params, input_shape, input_dtypes, onnx_file) +converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, onnx_file) ``` This API returns the path of the converted model which you can later use to run inference with or import the model into other frameworks. Please refer to [mx2onnx](https://github.com/apache/incubator-mxnet/tree/v1.x/python/mxnet/onnx#apis) for more details about the API. @@ -170,4 +170,4 @@ Now that the model passes the check (hopefully :)), we can run it with inference ## Simplify the exported ONNX model -Okay, we already have the exporeted ONNX model now, but it may not be the end of the story. Due to the differences in MXNet's and ONNX's operator specifications, sometimes helper operartors/nodes will need to be created to help construct the ONNX graph from the MXNet blueprint. 
In that sense, we recommend our users to checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which can greatly simply the exported ONNX model by techniques such as constant folding, operator fussion and more. +Okay, we already have the exporeted ONNX model now, but it may not be the end of the story. Due to differences in MXNet's and ONNX's operator specifications, sometimes helper operartors/nodes will need to be created to help construct the ONNX graph from the MXNet blueprint. In that sense, we recommend our users to checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which can greatly simply the exported ONNX model by techniques such as constant folding, operator fussion and more. diff --git a/example/onnx/cv_model_inference.py b/example/onnx/cv_model_inference.py index 3b3d2ce25871..f28ab5b90827 100644 --- a/example/onnx/cv_model_inference.py +++ b/example/onnx/cv_model_inference.py @@ -64,14 +64,14 @@ def preprocess_image(imgfile, resize_short=256, crop_size=224, # list of shape for all inputs in_shapes = [in_shape] # list of data type for all inputs -in_dtypes = [in_dtype] +in_types = [in_dtype] # export onnx model -mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file) +mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_types, onnx_file) # # example for dynamic input shape (optional) # # None indicating dynamic shape at a certain dimension # dynamic_input_shapes = [((None, 3, 224, 224))] -# mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, +# mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_types, onnx_file, # dynamic=True, dynamic_input_shapes=dynamic_input_shapes) # download and process the input image diff --git a/python/mxnet/onnx/README.md b/python/mxnet/onnx/README.md index 5380f924a8ba..897fd3a9fc51 100644 --- a/python/mxnet/onnx/README.md +++ b/python/mxnet/onnx/README.md @@ -61,7 +61,7 @@ Parameters: If True will run shape inference on the model input_type : data type or list of data types This is the old name of in_types. We keep this parameter name for backward compatibility - in_shapes : List of tuple + input_shape : List of tuple This is the old name of in_shapes. We keep this parameter name for backward compatibility Returns: @@ -79,7 +79,7 @@ We can set `dynamic=True` to turn on support for dynamic input shapes. Note that # The batch dimension will be dynamic in this case in_shapes = [(1, 3, 224, 224)] dynamic_input_shapes = [(None, 3, 224, 224)] -mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, +mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_types, onnx_file, dynamic=True, dynamic_input_shapes=dynamic_input_shapes) ``` diff --git a/python/mxnet/onnx/mx2onnx/_export_model.py b/python/mxnet/onnx/mx2onnx/_export_model.py index ad33c2aec7c8..e0fc71cb9459 100644 --- a/python/mxnet/onnx/mx2onnx/_export_model.py +++ b/python/mxnet/onnx/mx2onnx/_export_model.py @@ -81,7 +81,7 @@ def export_model(sym, params, in_shapes=None, in_types=np.float32, If True will run shape inference on the model input_type : data type or list of data types This is the old name of in_types. We keep this parameter name for backward compatibility - in_shapes : List of tuple + input_shape : List of tuple This is the old name of in_shapes. 
We keep this parameter name for backward compatibility Returns From 9b5c9fc769cd4e79cd9fa3e97983038b46dae0fa Mon Sep 17 00:00:00 2001 From: zha0q1 Date: Thu, 13 May 2021 22:36:48 +0000 Subject: [PATCH 4/6] fixes --- docs/python_docs/python/tutorials/deploy/export/onnx.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/python_docs/python/tutorials/deploy/export/onnx.md b/docs/python_docs/python/tutorials/deploy/export/onnx.md index 8309d9b950d4..5b7a8ed88f45 100644 --- a/docs/python_docs/python/tutorials/deploy/export/onnx.md +++ b/docs/python_docs/python/tutorials/deploy/export/onnx.md @@ -147,8 +147,8 @@ The mx2onnx module also supports dynamic input shapes. We can set `dynamic=True` ```python # The first input dimension will be dynamic in this case dynamic_input_shapes = [(None, 3, 224, 224)] -mx.onnx.export_model(mx_sym, mx_params, in_shapes, in_dtypes, onnx_file, - dynamic=True, dynamic_input_shapes=dynamic_input_shapes) +converted_model_path = mx.onnx.export_model(sym, params, in_shapes, in_types, onnx_file, + dynamic=True, dynamic_input_shapes=dynamic_input_shapes) ``` ## Validate the exported ONNX model From f0dbc60135049413726d287d7eaffbf51d018b1e Mon Sep 17 00:00:00 2001 From: zha0q1 Date: Thu, 13 May 2021 23:33:15 +0000 Subject: [PATCH 5/6] spellings --- docs/python_docs/python/tutorials/deploy/export/onnx.md | 4 ++-- python/mxnet/onnx/README.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/python_docs/python/tutorials/deploy/export/onnx.md b/docs/python_docs/python/tutorials/deploy/export/onnx.md index 5b7a8ed88f45..4e74fd73f9fd 100644 --- a/docs/python_docs/python/tutorials/deploy/export/onnx.md +++ b/docs/python_docs/python/tutorials/deploy/export/onnx.md @@ -115,7 +115,7 @@ The `export_model` API can accept a MXNet model in one of the following ways. Since we have downloaded pre-trained model files, we will use the `export_model` API by passing in the paths of the symbol and params files. -## Use mx2onnx to eport the model +## Use mx2onnx to export the model We will use the downloaded pre-trained model files (sym, params) and define a few more parameters. @@ -170,4 +170,4 @@ Now that the model passes the check (hopefully :)), we can run it with inference ## Simplify the exported ONNX model -Okay, we already have the exporeted ONNX model now, but it may not be the end of the story. Due to differences in MXNet's and ONNX's operator specifications, sometimes helper operartors/nodes will need to be created to help construct the ONNX graph from the MXNet blueprint. In that sense, we recommend our users to checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which can greatly simply the exported ONNX model by techniques such as constant folding, operator fussion and more. +Okay, we already have the exported ONNX model now, but it may not be the end of the story. Due to differences in MXNet's and ONNX's operator specifications, sometimes helper operators/nodes will need to be created to help construct the ONNX graph from the MXNet blueprint. In that sense, we recommend our users to checkout [onnx-simplifier](https://github.com/daquexian/onnx-simplifier), which can greatly simplify the exported ONNX model by techniques such as constant folding, operator fusion and more. 
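For readers who want to try that, a minimal sketch of running onnx-simplifier on the file exported above (assuming `pip install onnx-simplifier`; `simplify` is the Python entry point that project documents):

```python
import onnx
from onnxsim import simplify

model = onnx.load('./mxnet_exported_resnet18.onnx')
model_simplified, ok = simplify(model)
assert ok, "simplified model failed the numerical check"
onnx.save(model_simplified, './mxnet_exported_resnet18_simplified.onnx')
```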
diff --git a/python/mxnet/onnx/README.md b/python/mxnet/onnx/README.md index 897fd3a9fc51..299dd5cc20b8 100644 --- a/python/mxnet/onnx/README.md +++ b/python/mxnet/onnx/README.md @@ -25,7 +25,7 @@ ONNX 1.7 & 1.8 ### Installation From MXNet 1.9 release and on, the ONNX export module has become an offical, built-in feature in MXNet. You can access the module at `mxnet.onnx`. -If you are a user of earlier MXNet versions and do not want to upgrade MXNet, you can still enjoy the latest ONNX suppor by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You should can then access the module with `import mx2onnx`. The `mx2onnx` namespace is equivalent to `mxnet.onnx`. +If you are a user of earlier MXNet versions and do not want to upgrade MXNet, you can still enjoy the latest ONNX support by pulling the MXNet source code and building the wheel for only the mx2onnx module. Just do `cd python/mxnet/onnx` and then build the wheel with `python3 -m build`. You should be able to find the wheel under `python/mxnet/onnx/dist/mx2onnx-0.0.0-py3-none-any.whl` and install it with `pip install mx2onnx-0.0.0-py3-none-any.whl`. You can then access the module with `import mx2onnx`. The `mx2onnx` namespace is equivalent to `mxnet.onnx`. ### APIs The main API is `export_model`, which, as the name suggests, exports an MXNet model to the ONNX format. @@ -70,7 +70,7 @@ Returns: Onnx file path #### Model with Multiple Input -When the model has multiple input, all the input shapes and dtypes must be provided with `in_shapes` and `in_dtypes`. Note that the shape/dtype in `in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol file. If `in_dtypes` is provided as a single data type, then that type will be applied to all input nodes. +When the model has multiple inputs, all the input shapes and dtypes must be provided with `in_shapes` and `in_dtypes`. Note that the shape/dtype in `in_shapes`/`in_dtypes` must follow the same order as in the MXNet model symbol file. If `in_dtypes` is provided as a single data type, then that type will be applied to all input nodes. #### Dynamic Shape Input We can set `dynamic=True` to turn on support for dynamic input shapes. Note that even with dynamic shapes, a set of static input shapes still need to be specified in `in_shapes`; on top of that, we'll also need to specify which dimensions of the input shapes are dynamic in `dynamic_input_shapes`. We can simply set the dynamic dimensions as `None`, e.g. `(1, 3, None, None)`, or use strings in place of the `None`'s for better understandability in the exported onnx graph, e.g. `(1, 3, 'Height', 'Width')` From 9bf62b40409d588c320e6853ccc5279c1619df6d Mon Sep 17 00:00:00 2001 From: Zhaoqi Zhu Date: Thu, 13 May 2021 21:03:47 -0700 Subject: [PATCH 6/6] Update environment.yml --- docs/python_docs/environment.yml | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/python_docs/environment.yml b/docs/python_docs/environment.yml index 7856a076889b..6c4a5beebe66 100644 --- a/docs/python_docs/environment.yml +++ b/docs/python_docs/environment.yml @@ -26,6 +26,7 @@ dependencies: - sphinx==2.4.0 - matplotlib - notebook +- Jinja2==2.11.3 - pip: - nbconvert==5.6.1 - nbsphinx==0.4.3