ONNX version 1.3 Released

Joseph Spisak edited this page Aug 31, 2018 · 8 revisions

We are excited to announce that the v1.3 release of ONNX is now available! For those who aren't yet familiar with ONNX, you can learn more about the project, who is involved, and what tools are available at onnx.ai (http://onnx.ai/).

Huge shout out to Junjie (@bddppq) for owning this release and to Raymond (@raymondxyang) for solving all of the nasty segfault issues!!

TL;DR

  • This release adds several big features, including ONNXIFI 1.0 (a C API for running ONNX models on accelerator backends), an updated operator set (v8) that now includes control flow (coming out of the Control Flow working group), experimental support for Functions (composable operators), enhanced shape inference, additional optimization passes, more backend tests, and a new opset converter that helps users move models across different operator sets like a time machine! (Shout out to Akshay (@Ac2zoom) and Lu (@HouseRoad)!)
  • All told, this release includes 147 merged pull requests, with the most prominent ones called out in the details below.

How do I get the latest ONNX?

You can simply upgrade via pip using the following command, or of course build from source from the latest code on GitHub:

pip install onnx --upgrade

Most prominent PRs for those who want the deets:

  • ONNXIFI
    • 556 - Dynamic loading library for ONNXIFI implementations
    • 1154 - Wrapper for vendor-specific ONNXIFI libraries
    • 1156 - [ONNXIFI] Clarify how to get output shape with data-dependent operators
    • 1169 - Update possible values for onnxTensorDescriptor.dataType
    • 1170 - Fix comment refencing ONNXIFI_SYNCHRONIZATION_DEFAULT
    • 1185 - [ONNXIFI] Use event handle in the memory fence structure rather than …
    • 1192 - Onnxifi dummy backend
    • 1199 - Clarify that string properties in ONNXIFI are not locale-sensitive
    • 1203 - Load ONNXIFI libraries with RTLD_LOCAL and remove suffix from functions
    • 1204 - Document ONNXIFI_STATUS_INVALID_POINTER return code
    • 1345 - Specify which ONNXIFI info queries are mandatory/optional
    • 1347 - Update ONNXIFI documentation
    • 1353 - Add capability values for variable batch size
    • 1354 - Add backend queries for OpenCL platform ID and device ID
    • 1356 - Add init properties for CUDA stream and OpenCL context
    • 1224 - Add version tags to tensor descriptor and memory fence structures
    • 1225 - Add ONNXIFI_BACKEND_PROPERTY_LOG_LEVEL backend initialization property
    • 1227 - Clarify behaviour of onnxSetGraphIO w.r.t in-flight graph executions
    • 1228 - Clarify supported synchronization primitives in onnxRunGraph
    • 1229 - Clarify uniqueness of backend ID for the life-time of the process
    • 1231 - Add info queries for ONNXIFI version, ONNX IR version, and opset version
    • 1232 - Add error code for invalid/unsupported memory types in ONNXIFI
    • 1254 - Multiple documentation fixes in ONNXIFI error codes
    • 1255 - Add onnxGetEventState function in ONNXIFI
    • 1256 - Add auxiliary property list argument to onnxInitGraph
    • 1281 - Add onnxGetEventState in onnxifi dummy backend
  • Operator Tests
    • 886 - Add conv transpose test cases
    • 903 - Tests for LRN operator
    • 978 - Update avgpool test cases
    • 1049 - add arg test plus a few minor fixes
    • 1077 - Add retry logic to model downloading
    • 1110 - Add stats on ONNX node tests
    • 1115 - Add Node Tests for Dropout
    • 1117 - Add Node Tests for BatchNormalization
    • 1118 - Add Node Tests for InstanceNormalization
    • 1132 - Add is_compatible method in python backend
    • 1136 - New Test Cases: Clip
    • 1158 - Convtranspose output_shape testcases
    • 1221 - add maxpool_with_argmax test cases
  • Operators Spec
    • 1089 - Add new operator "expand" to onnx
    • 1124 - add broadcasting support for min/max/sum/mean
    • 1128 - Make Axes in Squeeze Optional
    • 1193 - Fix the Spec for Gemm
    • 1200 - Fix an error in GRU formula
    • 1355 - minor fix to scan documentation
    • 1287 - Clarify the spec for convolution and convolution transpose when group > 1.
    • 1206 - Support output indices in MaxPool op
    • 1243 - Fix LpPool document.
    • 1296 - Scan operation
    • 1297 - ControlFlow operators update
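Several of the spec changes above (the new Expand operator and the broadcasting support for Min/Max/Sum/Mean) rely on numpy-style broadcasting. A minimal sketch of the semantics, using numpy's broadcast_to purely as a stand-in to illustrate how a tensor is stretched to a target shape (this is not the ONNX runtime itself):

```python
import numpy as np

# Expand broadcasts a tensor to a larger target shape following
# numpy-style broadcasting rules: size-1 dims are repeated.
x = np.array([[1], [2], [3]], dtype=np.float32)  # shape (3, 1)
y = np.broadcast_to(x, (3, 4))                   # shape (3, 4)
print(y.shape)  # -> (3, 4)
```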
  • Opset Converter
    • 1148 - Opset Version Converter
    • 1323 - Test for Version Conversion Adapters that Insert Unsqueeze Nodes
    • 1284 - Broadcast Version Conversion Adapters (Add, Mul, Gemm)
    • 1285 - Remove extraneous unary plus operators
    • 1286 - Relu Version Conversion Adapters
    • 1288 - Batchnorm Version Conversion Adapters
    • 1289 - Concat Adapters
    • 1291 - Reshape Adapters
    • 1292 - Misc Adapters (Sum, AveragePool, MaxPool, Dropout)
  • Infra (Build, Test, Tooling)
    • 1037 - Re-enable mypy, Fix releasing from Windows
    • 1047 - Generate protoc type hints on Windows
    • 1068 - Add a hook for doing post-processing on protobuf generated header files
    • 1138 - Some changes to enable Clang-cl build for Windows
    • 1164 - Let the post-processing hook also handle pb.cc file
    • 1183 - pybind11 extension code should be compiled with hidden visibility
    • 1198 - Expose onnx-operators protobuf structs in Python
    • 1341 - Export ONNX cmake targets so other projects can use them as public dependencies
    • 1315 - Check pybind version
    • 1318 - Use target_compile_definitions instead of add_definitions when setting ONNX_NAMESPACE
    • 1326 - Add support for building with protobuf-lite
  • Optimizer, Shape Inference Engine and Checker
    • 860 - Eliminate unused initializer
    • 1050 - Extract constant to initializer
    • 1078 - fuse consecutive squeezes
    • 1098 - fix optimizer does not set ir_version bug
    • 1106 - Optimization pass to fuse batch normalization operator with convolution operator
    • 1134 - Document Shape Inference and Optimizer utilities
    • 1238 - shape inference: add initializers and inference for reshape operator
    • 1244 - fix shape errors in optimizer test cases
    • 1248 - Shape inference support for conv with groups
    • 1250 - Add function to list all available optimization passes
    • 1307 - [Optimization Pass] Add optimization pass eliminating nop Pad
    • 1320 - Add shape inference function of Upsample operator.
    • 1342 - Fix handling of 0-element tensors in checker
  • IR
    • 1075 - fix Node::isBefore
    • 1086 - add string tensor type mapping
    • 1088 - Graph should only have one (input) kParam node
    • 1146 - Use ONNX_ASSERT(M) instead of throwing exception
    • 1174 - Add a function for accessing the underlying data pointer in the Tensor class
    • 1310 - Domain exists in GraphProto but not in Node
    • 1226 - adding has_sizes to IR

Cheers!

The ONNX Team
