
ONNX release 1.12.0 #11640

Closed
etiotto opened this issue May 26, 2022 · 18 comments · Fixed by #11924
Assignees
Labels
contributions welcome (lower priority issues for the core ORT teams)

Comments

@etiotto

etiotto commented May 26, 2022

We are releasing ONNX 1.12.0. A release branch has been created (https://github.com/onnx/onnx/tree/rel-1.12.0). The planned release date is the week of 2022-06-06.

It is important to integrate the ONNX release branch into ORT ASAP so that any issues and incompatibilities can be detected and resolved before the ONNX release. Please follow the instructions at https://github.com/microsoft/onnxruntime/blob/master/docs/How_To_Update_ONNX_Dev_Notes.md to integrate with the ONNX release branch, and please implement CPU kernels for new and updated ONNX ops. A list of new and updated ops can be found at https://github.com/onnx/onnx/wiki/Logistics-for-ONNX-Release-1.12.0.

Changes in ONNX are documented in the Logistics-for-ONNX-Release-1.12.0 wiki (or in the 1.12.0 release notes).

If a bug in ONNX is detected during integration of ONNX 1.12.0, please contact the ONNX release manager @etiotto so that the bug can be fixed in the ONNX release branch and the integration can continue.

@snnn snnn added the contributions welcome label May 26, 2022
@snnn
Member

snnn commented May 26, 2022

The last time it took 3 months. I hope you don't need to wait on this.

@RyanUnderhill
Member

Just a note that the release branch/release date above is showing the previous release.

garymm added a commit to garymm/onnxruntime that referenced this issue May 26, 2022
@garymm
Contributor

garymm commented May 26, 2022

@snnn asking for help with these instructions from the "How to Update ONNX" page:

> please tell Changming to deploy the new test data along with other test models to our CI build machines

I'm telling you :-) LMK if there's anything I need to do.

> manually queue a build for every packaging pipeline for your branch.

How do I do that?

@snnn
Member

snnn commented May 27, 2022

Below are all the pipelines. You can find them at https://aiinfra.visualstudio.com/Lotus/_build

| Pipeline name | Filename |
| --- | --- |
| DML Nuget Pipeline | nuget/gpu-esrp-pipeline.yml |
| Linux GPU(cuda10.2) Validation Pipeline | linux-gpu-cuda-11-pipeline.yml |
| Linux Multi GPU CI Pipeline | linux-multi-gpu-ci-pipeline.yml |
| Linux Multi GPU TensorRT CI Pipeline | linux-multi-gpu-tensorrt-ci-pipeline.yml |
| Npm Packaging Pipeline | npm-packaging-pipeline.yml |
| Nuget WindowsAI Pipeline | nuget/templates/windowsai.yml |
| OLive-pipeline | py-package-build-pipeline.yml |
| ONNX Runtime Web Pipeline | web-packaging-pipeline.yml |
| Python Packaging Pipeline (Training CPU) | orttraining-py-packaging-pipeline-cpu.yml |
| Python Packaging Pipeline (Training Cuda 10.2) | orttraining-py-packaging-pipeline-cuda102.yml |
| Python Packaging Pipeline (Training Cuda 11.3) | orttraining-py-packaging-pipeline-cuda113.yml |
| Python Packaging Pipeline (Training Cuda 11.5) | orttraining-py-packaging-pipeline-cuda115.yml |
| Python packaging pipeline | py-packaging-pipeline.yml |
| Windows GPU(cuda10.2) Validation Pipeline | win-gpu-cuda-10-2-pipeline.yml |
| Zip-Nuget-Java-Nodejs Packaging Pipeline | c-api-noopenmp-packaging-pipelines.yml |
| onnxruntime-ios-packaging-pipeline | mac-ios-packaging-pipeline.yml |

You can also use https://github.com/microsoft/azure-devops-python-api to write a script for doing this. I had one, but I couldn't find it anymore.
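For queueing the pipelines programmatically, a minimal sketch with azure-devops-python-api might look like the following. This is an illustration under assumptions, not snnn's lost script: the `AZDO_PAT` environment variable, the `queue_payload` helper, and the branch name are hypothetical, and only a few pipeline names from the table above are listed.

```python
"""Sketch: queue ORT packaging pipelines for a branch via azure-devops-python-api."""
import os

# A few pipeline names from the table above; extend with the rest as needed.
PIPELINES = [
    "DML Nuget Pipeline",
    "Npm Packaging Pipeline",
    "Python packaging pipeline",
    "Zip-Nuget-Java-Nodejs Packaging Pipeline",
]

def queue_payload(definition_id: int, branch: str) -> dict:
    """Request body for queueing one run of a build definition on a branch."""
    return {"definition": {"id": definition_id}, "source_branch": f"refs/heads/{branch}"}

def queue_all(branch: str) -> None:
    # Imported lazily so the helper above stays usable without the package installed.
    from azure.devops.connection import Connection
    from msrest.authentication import BasicAuthentication

    connection = Connection(
        base_url="https://dev.azure.com/aiinfra",  # same org as aiinfra.visualstudio.com
        creds=BasicAuthentication("", os.environ["AZDO_PAT"]),  # PAT with build queue scope
    )
    client = connection.clients.get_build_client()
    for name in PIPELINES:
        for definition in client.get_definitions(project="Lotus", name=name):
            client.queue_build(queue_payload(definition.id, branch), project="Lotus")
```

Calling `queue_all("your-branch")` with a valid PAT would queue one run per matching definition.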

garymm added a commit to garymm/onnxruntime that referenced this issue May 31, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 1, 2022
@garymm
Contributor

garymm commented Jun 2, 2022

@garymm
Contributor

garymm commented Jun 3, 2022

Status update before I head off for the weekend:

I figured out how to run the ONNX conformance tests via the nodeJS API. The tests that were initially failing were all for newly added ops.

This is what I've been using to build and test:

```shell
./build.sh --cmake_generator Ninja --config Debug --skip_submodule_sync --parallel --enable_onnx_tests --enable_transformers_tool_test --build_nodejs --update --build
cd js/node
npm test -- --parallel --grep 'test_(blackmanwindow|dft|hammingwindow|hannwindow|layer_normalization|melweightmatrix|stft)'
```

After registering op implementations in #11733, the remaining test failures are in this file:
test-out.txt

Some of the actual-vs-expected outputs are quite close, and I'm hoping we can configure the test runner to tolerate those diffs.
Others are quite far off, and I fear there are bugs in ORT, the function definitions, or the test data.
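For intuition about "close" vs "far" here, a numpy-style closeness check of the kind a test runner applies (an illustrative sketch with assumed tolerances, not ORT's actual runner code):

```python
import numpy as np

def outputs_match(actual, expected, rtol=1e-3, atol=1e-5):
    """True when every element satisfies |a - e| <= atol + rtol * |e|."""
    return bool(np.allclose(actual, expected, rtol=rtol, atol=atol))

# A ~5e-5 drift on a value near 1.0 passes these tolerances,
# while a mismatch like the STFT failure below does not.
small_diff = outputs_match([1.00005], [1.0])
large_diff = outputs_match([8.392586e-05], [-12.363649])
```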

CC @xadupre @smk2007

@snnn
Member

snnn commented Jun 4, 2022

@fs-eire any comment to the above?

@smk2007
Member

smk2007 commented Jun 4, 2022

> Status update before I head off for the weekend: [...] After registering op implementations in #11733, the remaining test failures are in this file: test-out.txt [...]

Hi, I took a look.

Window Function Tests (Hann, Hamming, Blackman)
The issue here is that we added the periodic attribute to the function late in the CR process.
The ONNX tests now have two issues:

  1. They generate function bodies that do not take the periodic attribute into account.
  2. The default behavior in ONNX Runtime has not been updated to match and is missing the periodic attribute: the contrib op definition always generated the "periodic" window (denominator of window size instead of window size - 1), never the "symmetric" window. The ONNX tests also set some of the window-function constants incorrectly.
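The periodic-vs-symmetric distinction above can be sketched numerically. This is a numpy illustration of the standard Hann formula (not ORT's kernel): the periodic variant divides by N, the symmetric one by N - 1.

```python
import numpy as np

def hann(size: int, periodic: bool) -> np.ndarray:
    """Hann window: 0.5 - 0.5*cos(2*pi*n/denom), denom = size (periodic) or size-1 (symmetric)."""
    n = np.arange(size)
    denom = size if periodic else size - 1
    return 0.5 - 0.5 * np.cos(2.0 * np.pi * n / denom)

# A periodic window of size N equals the symmetric window of size N + 1
# with its final (duplicate) endpoint dropped.
assert np.allclose(hann(16, periodic=True), hann(17, periodic=False)[:16])
```

So generating the "wrong" variant shifts every sample of the window, which is consistent with the constant errors seen in the window-function tests.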

DFT Tests
The DFT seems like it just needs to have precision in the tests updated. Where do I do that?

STFT Tests
The STFT test is also wrong because it doesn't take into account that the default output is onesided, so it expects a dimension of 16 instead of half + 1 (9). I can update this test as well.
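The onesided dimension described here can be checked with numpy's real FFT (illustrative, not the ONNX test code): a real frame of length n yields n // 2 + 1 non-redundant frequency bins.

```python
import numpy as np

frame = np.random.default_rng(0).standard_normal(16)
onesided = np.fft.rfft(frame)   # onesided output
full = np.fft.fft(frame)        # two-sided output

assert onesided.shape == (16 // 2 + 1,)     # 9 bins, as the test should expect
assert np.allclose(onesided, full[:9])      # the onesided half carries all the information
```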

I added this PR to address: onnx/onnx#4249

@gramalingam
Contributor

gramalingam commented Jun 5, 2022

@wschin : are the layer-normalization error differences expected? (Please see: https://github.com/microsoft/onnxruntime/files/8836128/test-out.txt ). I wonder if there is some subtle difference in the ONNX reference implementation/function and actual? Error margin is around 1% but more in one case. It also looks like there is a shape error in one case?
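For reference, the computation being compared is ONNX LayerNormalization's defining formula (axis = -1, epsilon = 1e-5 by default). The numpy sketch below is a hypothetical helper to make the comparison concrete, not the ONNX reference implementation itself; a subtle difference in where epsilon is added or which axis is reduced would produce exactly this kind of ~1% drift.

```python
import numpy as np

def layer_norm(x, scale=1.0, bias=0.0, axis=-1, epsilon=1e-5):
    """Y = (X - mean) / sqrt(var + epsilon) * scale + bias, reduced over `axis`."""
    mean = x.mean(axis=axis, keepdims=True)
    var = x.var(axis=axis, keepdims=True)
    return (x - mean) / np.sqrt(var + epsilon) * scale + bias

x = np.random.default_rng(1).standard_normal((4, 8))
y = layer_norm(x)
```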

@garymm
Contributor

garymm commented Jun 7, 2022

After incorporating onnx/onnx#4249, I still see these failures (there are other failures but I don't expect those to have been fixed by the PR):

test_blackmanwindow_symmetric_expanded
[ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected

  [
+   9
-   10
  ]

test_hammingwindow_symmetric_expanded
[ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected

  [
+   9
-   10
  ]

test_hannwindow_symmetric_expanded
[ERR_ASSERTION]: Expected values to be strictly deep-equal:
+ actual - expected

  [
+   9
-   10
  ]
      
test_stft
[ERR_ASSERTION]: actual[269]=0.00008392586460104212, expected[269]=-12.363648414611816

@smk2007 @xadupre can you PTAL?

@gramalingam
Contributor

From what Sheil mentioned, the ORT implementation also needs a change to support the newly added "periodic" attribute?

@garymm
Contributor

garymm commented Jun 7, 2022

@gramalingam I don't think that should apply to the _expanded tests which expand the function body before passing it to ORT. Please correct me.

@smk2007
Member

smk2007 commented Jun 7, 2022

> @gramalingam I don't think that should apply to the _expanded tests which expand the function body before passing it to ORT. Please correct me.

@gramalingam @garymm
Window functions fix
Looked into it. For the symmetric case, the functions are using "Size_FP" when they should be using the original size passed in ("Periodic_Size_FP") to generate the range of values...

STFT fix
The error was again in the test, and is related to incorrect slicing of the input signal: effectively the input signal was being sliced and then the DFT was performed, rather than vice versa. I have validated the output from the ONNX test against the ORT implementation:

[image: comparison of ONNX test output and ORT implementation output]

The fix for STFT and the fixes for the symmetric tests are here: onnx/onnx#4256
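For reference, the standard STFT frames the signal and applies a windowed, onesided DFT per frame; a numpy sketch of that shape arithmetic (illustrative, not the ONNX test or ORT code, with assumed parameter names):

```python
import numpy as np

def stft(signal, frame_length, frame_step, window):
    """Slice the signal into overlapping frames, then take a onesided DFT of each frame."""
    n_frames = 1 + (len(signal) - frame_length) // frame_step
    frames = np.stack(
        [signal[i * frame_step : i * frame_step + frame_length] for i in range(n_frames)]
    )
    # Output shape: (n_frames, frame_length // 2 + 1)
    return np.fft.rfft(frames * window, axis=-1)

out = stft(np.random.default_rng(2).standard_normal(64), 16, 8, np.ones(16))
```

With a 64-sample signal, 16-sample frames, and a hop of 8, this yields 7 frames of 9 onesided bins each.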

@garymm
Contributor

garymm commented Jun 7, 2022

These are different tests, right? test_dft vs test_stft?

Oops, deleted comment

@garymm
Contributor

garymm commented Jun 7, 2022

There's currently no way to specify a tolerance for a specific test. I'm looking into adding that.
It'd be ideal if we could find some small change to the implementation that makes the output match the reference output more closely, so that we don't have to relax the tolerance. If you have any ideas there, LMK.
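A per-test tolerance mechanism could be as simple as layering overrides on global defaults. This is a hypothetical sketch of the structure (the dictionary, names, and tolerance values are assumptions, not the actual #11775 implementation):

```python
import numpy as np

DEFAULT_RTOL, DEFAULT_ATOL = 1e-3, 1e-5

# test name -> (rtol, atol); DFT needs a looser absolute tolerance for its ~5e-5 drift.
OVERRIDES = {
    "test_dft": (1e-3, 1e-4),
}

def check(test_name, actual, expected):
    """Compare outputs using the test's override if present, else the global defaults."""
    rtol, atol = OVERRIDES.get(test_name, (DEFAULT_RTOL, DEFAULT_ATOL))
    return bool(np.allclose(actual, expected, rtol=rtol, atol=atol))
```

A ~5e-5 absolute error near zero would then pass for `test_dft` but still fail any test without an override.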

@smk2007
Member

smk2007 commented Jun 7, 2022

> There's currently no way to specify a tolerance for a specific test. I'm looking into adding that. It'd be ideal if we could find some small change to the implementation to make the output match the reference output more closely so we don't have to relax the tolerance, so if you have any ideas there LMK.

Not sure here, taking a look.

garymm added a commit to garymm/onnxruntime that referenced this issue Jun 7, 2022
Prior to this every test shared the same global tolerances. This meant
that if an ONNX test failed due to a small but acceptable difference in
output, the only alternative was to disable the test entirely.

In op set 17, the DFT operator is being added. Without this change, the
tests for that operator fail because the output is off by about 5e-5.
It's better to keep test coverage for this new op rather than disable
the test entirely.

Also prior to this change, the global tolerances were not shared between
JavaScript and Python tests. Now they are.

Unblocks microsoft#11640.
@garymm
Contributor

garymm commented Jun 7, 2022

I put up a PR to support per-test tolerances: #11775

Does anyone know if the C++ onnx_test_runner runs in CI? If so then I'll need to update the PR to cover the C++ as well. So far I haven't found it actually being run in CI.

garymm added a commit to garymm/onnxruntime that referenced this issue Jun 9, 2022
Unlike the previous code, this handles version strings like "1.12.0rc3".

Unblocks microsoft#11640.
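The prerelease-tolerant parsing the commit describes can be sketched as follows (an illustrative regex, not the commit's actual code):

```python
import re

def parse_version(v: str) -> tuple:
    """Parse 'major.minor.patch' and ignore any trailing prerelease tag like 'rc3'."""
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)", v)
    if m is None:
        raise ValueError(f"unrecognized version: {v}")
    return tuple(int(part) for part in m.groups())

assert parse_version("1.12.0rc3") == (1, 12, 0)
```

Tuple comparison then gives sane ordering, e.g. `parse_version("1.12.0rc3") >= (1, 11, 0)`.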
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 9, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 9, 2022
Note code is mostly being moved, not added. These ops were previously
only registered as Microsoft contrib ops and only built if
`BUILD_MS_EXPERIMENTAL_OPS=1`. They've been added to the ai.onnx
standard op set in version 17.

Main components of this change:

* Move the kernels from the contrib_ops directory to the
  core directory.
* Add function bodies for ms experimental ops. This will allow
  old models that use the contrib ops to continue to function.
  All the function bodies consist of a single op (the
  new standard op), so performance overhead should be minimal.

Minor clean-up also in this change:

* De-duplicate get_scalar_value_from_tensor: put it in a new utils.h.
* Fix some bugs that caused compilation errors with the experimental
  ops. Tested with `build.sh --ms_experimental`
* Fix some spelling errors and lint violations.
* Replace a couple of switch statements with `MLTypeCallDispatcher`.
* Use `InlineVector` instead of `std::vector`.

Unblocks microsoft#11640
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 9, 2022
@wschin
Contributor

wschin commented Jun 10, 2022

Sorry. My original test script has a bug. This PR, onnx/onnx#4263, should unblock layer normalization tests.

garymm added a commit to garymm/onnxruntime that referenced this issue Jun 10, 2022
garymm added a commit that referenced this issue Jun 14, 2022
garymm added a commit that referenced this issue Jun 14, 2022
Prior to this every test shared the same tolerances. This meant
that if an ONNX test failed due to a small but acceptable difference in
output, the only alternative was to disable the test entirely.

In op set 17, the DFT operator is being added. Without this change, the
tests for that operator fail because the output is off by about 5e-5.
It's better to keep test coverage for this new op rather than disable
the test entirely.

Also prior to this change, the global tolerances were not shared between
C++, JavaScript, and Python tests. Now they are.

Also fix various minor issues raised by linters.

Unblocks #11640.
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 15, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 16, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 17, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 17, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 20, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 20, 2022
garymm added a commit that referenced this issue Jun 22, 2022
Follow-ups that need to happen after this and before the next ORT release:
* Support SequenceMap with #11731
* Support signal ops with #11778

Follow-ups that need to happen after this but don't necessarily need to happen before the release:
* Implement LayerNormalization kernel for opset version 17: #11916

Fixes #11640
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 22, 2022
garymm added a commit to garymm/onnxruntime that referenced this issue Jun 24, 2022
skottmckay pushed a commit that referenced this issue Jun 27, 2022
* Register signal ops for op set 17

RandySheriffH pushed a commit that referenced this issue Jun 29, 2022
RandySheriffH pushed a commit that referenced this issue Jun 29, 2022
* Register signal ops for op set 17

RandySheriffH pushed a commit that referenced this issue Jun 29, 2022
RandySheriffH pushed a commit that referenced this issue Jun 29, 2022
* Register signal ops for op set 17

RandySheriffH pushed a commit that referenced this issue Jul 6, 2022
RandySheriffH pushed a commit that referenced this issue Jul 6, 2022
* Register signal ops for op set 17

RandySheriffH added a commit that referenced this issue Jul 7, 2022
* Update ONNX to 1.12 (#11924)


* Dll version fix ovep4.1 (#11953)

* Setting default version values for ovep dlls as well

* Update backend_manager.cc

Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: mohsin <mohsinx.mohammad@intel.com>

* Optimize t5 encoder in beam search (#11926)

* optimize t5 encoder

* update

* update

* update

* refactor expand impl

* cuda tests passed

* update

* alignment

* more alignments

* review comments

* Allow saving on CPU usage for infrequent inference requests by reducing thread spinning (#11841)

Introduce Start/Stop threadpool spinning switch
Add a session config option to force spinning stop at the end of the Run()

* Restructure function inliner (#11731)

* Add nested function call tests

* Add overload for Specialize

* Pass symboltable to onnx shape inference

* Avoid renaming empty names

* Enable sequence_map tests which failed before this change

* Deprecate APIs returning raw ptrs and provide replacements (#11922)

Provide better documentation

* register signal ops for opset 17 (#11778)


* Include opset 15 in Conv+BatchNormalization fusion (#11960)

* Fix WinML tests still targeting deprecated (deleted) experimental signal op definitions (#12006)

* fix winml tests

* remove legacy test

* switch idft -> dft+inverse attr

* upgrade opset 13->17 for signal ops tests

* [C# Tests] Add support for double tensor output in TestPreTrainedModels. (#12008)

Add support for double tensor output in TestPreTrainedModels.

* DML EP ResNet50 opset 15 fails in ONNX checker for FusedBatchNormalization lacking training_mode attribute (#12010)

FusedBatchNormalization include training_mode attribute

* Generalize native op creation (#11539)

* create op from ep

* read input count from context

* create holder to host nodes

* fix typo

* cast type before comparison

* throw error on API fail

* silence warning from minimal build

* switch to unique_ptr with deleter to host nodes

* fix typo

* fix build err for minimal

* fix build err for minimal

* add UT for conv

* enable test on CUDA

* add comment

* fix typo

* use gsl::span and string view for Node constructor

* Added two APIs - CopyKernelInfo and ReleaseKernelInfo

* pass gsl::span by value

* switch to span<NodeArg* const> to allow for reference to const containers

* fix typo

* fix reduced build err

* fix reduced build err

* refactoring node construction logic

* rename exceptions

* add input and output count as arguments for op creation

* refactor static member

* use ORT_CATCH instead of catch

* cancel try catch

* add static value name map

* format input definition and set err code

* fix comments

* fix typo

* [DML EP] Pad operator: Handle negative pad counts (#11974)

* Pad fallback to CPU

* Added queryPad in operatorRegistration.cpp

* Acknowledged PR comments

* Used any_of

* used none_of instead of any_of

Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>

* Add warning about future computation change for ConvTranspose with auto_pad (#11984)

* Add warning about future computation change for ConvTranspose with auto_pad

* improve msg

* update TODO to make lint happy

* update more contents for warning and add if

* VALID was not affected

* move it into kernel registration

* parse auto_pad myself

* try to use conv_transpose_attrs_.auto_pad directly

* update roialign cuda impl to onnx opset16 (#12036)

* roialign opset16

* fix

* fix

* Fix windows eager build break by pinning to torch version 1.11.0 (#12033)

Fix windows and linux eager build to torch 1.11.0.

* Skip Constant Folding for ops producing an optional type output (#11839)

* Disable sequence-type tests since C# infra doesn't support well (#12037)

* Extend lifetime of KernelDef when creating a standalone op (#12057)

place tmp kernel def as local variable to cover the lifetime of kernel creation

* Add targets files for new .net6 frameworks (#12016)

* Add net6 targets.
Remove maccatalyst as we don't have a native build targeting that.

* Set platform in macos targets

* Add targetFramework entries

* Move NativeLib.DllName definition and set using preprocessor values for simplicity. Couldn't get it to build with the preprocessor based setup when it was in a separate file.

Update the nuspec generation to set the platform version for .net6 targets. TODO: Validate versions. I copied them from the managed nuget package the packaging pipeline generated prior to adding targets. Possibly we could/should lower some of the versions.

Hopefully the need to specify a version goes away when the release version of VS2022 supports .net6.

* Try android 31.1 as https://github.com/actions/virtual-environments/blob/main/images/win/Windows2022-Readme.md suggests that should be available on the CI machines

* Fix patch version mismatch
Add some extra debug info in case it helps

* Debug nuget location in CI

* Add workspace entry back in

* Add steps

* One more attempt with hardcoded nuget.exe path and original android31.0 version

* Better fix - found explicit nuget download and updated version there.

* flake8 fixes

* Fix black complaints.

* Exit Microsoft_ML_OnnxRuntime_CheckPrerequisites for net6 iOS.

* Removed outdated comment

* Fix DML custom operators which set descriptor heap to command list (#12059)

* Make C# runtest.sh automatically set latest opset (#12039)

* Update C# runtest.sh for opset 17

Should have been part of #11924

* get appropriate opset version from onnx doc

* use absolute rather than relative path

* fix typo in var name

* Disable DML command list reuse for Xbox (#12063)

disable cl reuse for xbox

* Add data type check in ConvAddRelu fusion (#12058)

* Add undocumented attribute to disable generation of Java bindings from the Android AAR. (#12075)

The generated bindings cause C# build errors that require workaround code. Disabling generation should avoid the need for any workarounds.

As the user has the C# ORT package with the C# to C bindings, there's no need for binding generation that calls the ORT Java API (which is C# -> Java -> C).

* enable the extensions custom build for java and android (#11823)

* generate quantization parameter for outputs (#12089)

* DML EP Update to DML 1.9 (#12090)

* Update to DML 1.9

* Appease obnoxious Python formatting tool

* Fix orttraining-linux-ci-pipeline - Symbolic shape infer (#11965)

fix symbolic shape error due to upgraded numpy + legacy sympy

* check consumers of dq node before swap dq and transpose (#12099)

* check consumers of dq node before swap dq and transpose

* add unit test

Co-authored-by: Gary Miguel <garymiguel@microsoft.com>
Co-authored-by: Preetha Veeramalai <preetha.veeramalai@intel.com>
Co-authored-by: mayavijx <mayax.vijayan@intel.com>
Co-authored-by: mohsin <mohsinx.mohammad@intel.com>
Co-authored-by: Ye Wang <52801275+wangyems@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: G. Ramalingam <grama@microsoft.com>
Co-authored-by: Dwayne Robinson <dwayner@microsoft.com>
Co-authored-by: Sheil Kumar <smk2007@gmail.com>
Co-authored-by: Edward Chen <18449977+edgchen1@users.noreply.github.com>
Co-authored-by: sumitsays <sumitagarwal330@gmail.com>
Co-authored-by: Sumit Agarwal <sumitagarwal@microsoft.com>
Co-authored-by: Chun-Wei Chen <jacky82226@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Wil Brady <25513670+WilBrady@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Wei-Sheng Chin <wschin@outlook.com>
Co-authored-by: Scott McKay <skottmckay@gmail.com>
Co-authored-by: Jeff Bloomfield <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Justin Stoecker <justoeck@microsoft.com>
Co-authored-by: Wenbing Li <10278425+wenbingl@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: pengwa <pengwa@microsoft.com>
siweic0 pushed a commit to siweic0/onnxruntime-web that referenced this issue May 9, 2024
Follow-ups that need to happen after this and before the next ORT release:
* Support SequenceMap with microsoft#11731
* Support signal ops with microsoft#11778

Follow-ups that need to happen after this but don't necessarily need to happen before the release:
* Implement LayerNormalization kernel for opset version 17: microsoft#11916

Fixes microsoft#11640