Implement a more stable softmax #2715

Merged
merged 3 commits into master on Jan 6, 2020
Conversation

yufenglee (Member)

Description: Implement a more stable softmax. e^x overflows float to infinity when x is large enough (e.g. 100.f), and infinity divided by infinity is NaN, so the current softmax produces NaN whenever one or more items are large. Subtract the per-row max before exponentiation, which is mathematically equivalent:

e^xi / (e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))

Motivation and Context

  • Without this transform, the Attention kernel's softmax returns NaN for large logits; the shifted form never overflows because every exponent is <= 0.
  • For convenience, max is forced to 0.f when all xi are negative, which is safe since e^x cannot overflow for x <= 0.
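
For illustration, here is a minimal, self-contained sketch of the failure and the fix (illustrative only, not the exact kernel code in this PR):

#include <cmath>
#include <cstdio>

// Naive softmax: expf(100.f) overflows float to +inf, and inf/inf is NaN.
void softmax_naive(const float* x, float* y, int D) {
  float sum = 0.f;
  for (int i = 0; i < D; i++) { y[i] = expf(x[i]); sum += y[i]; }
  for (int i = 0; i < D; i++) y[i] /= sum;
}

// Stable softmax: subtracting the row max keeps every exponent <= 0, so
// expf never overflows. Starting max at 0.f clamps it to 0 when all inputs
// are negative, which is safe because e^x cannot overflow for x <= 0.
void softmax_stable(const float* x, float* y, int D) {
  float max = 0.f;
  for (int i = 0; i < D; i++) if (x[i] > max) max = x[i];
  float sum = 0.f;
  for (int i = 0; i < D; i++) { y[i] = expf(x[i] - max); sum += y[i]; }
  for (int i = 0; i < D; i++) y[i] /= sum;
}

int main() {
  const float x[3] = {100.f, 1.f, 0.f};
  float y[3];
  softmax_naive(x, y, 3);
  printf("naive:  %f %f %f\n", y[0], y[1], y[2]);  // nan 0 0
  softmax_stable(x, y, 3);
  printf("stable: %f %f %f\n", y[0], y[1], y[2]);  // ~1 ~0 ~0
  return 0;
}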

@yufenglee yufenglee requested a review from a team as a code owner December 21, 2019 00:14
fs-eire (Contributor) previously approved these changes Dec 21, 2019

@fs-eire left a comment


Changes in the CPU implementation look good to me.

}
for (int i = 0; i < D; i++) {
  y[i] = expf(x[i] - max);
}
Contributor

Would it be more performant to use Eigen for the reduce-max and the computation?

Member

Softmax is very cheap; it's hard to see any difference.

Member Author

Actually, it is slower with Eigen.
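
For reference, the Eigen formulation being discussed could look roughly like this (a sketch, assuming x and y each point at D contiguous floats; not the code that was actually benchmarked):

#include <Eigen/Core>

void softmax_exp_eigen(const float* x, float* y, int D) {
  Eigen::Map<const Eigen::ArrayXf> xs(x, D);
  Eigen::Map<Eigen::ArrayXf> ys(y, D);
  const float max_val = xs.maxCoeff();  // vectorized reduce-max over the row
  ys = (xs - max_val).exp();            // vectorized exp of the shifted values
}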

using BlockReduce = cub::BlockReduce<float, TPB>;
__shared__ typename BlockReduce::TempStorage tmp_storage;

__shared__ float reverse_z;
__shared__ float sum_reverse_block;
__shared__ float max_block;

float thread_data(0);
Contributor

Re: thread_data

If all inputs are negative, 0 is not a good init value; it should be something like -1e38.

Member Author

If all inputs are negative, e^x won't have an instability issue, so it is fine to subtract 0 (as the max) from the values.
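
For context, a minimal sketch of how that block-wide max could be computed and used, assuming one row of D elements per block with D <= TPB (a hypothetical kernel, simplified from the snippet under review):

#include <cub/cub.cuh>

template <int TPB>
__global__ void StableSoftmaxNumerator(const float* input, float* output, int D) {
  using BlockReduce = cub::BlockReduce<float, TPB>;
  __shared__ typename BlockReduce::TempStorage tmp_storage;
  __shared__ float max_block;

  // 0.f is a safe initial value here because the max is deliberately
  // clamped at 0: if every input is negative, e^x cannot overflow, so
  // subtracting 0 instead of the true max is still numerically stable.
  float thread_data = 0.f;
  const int idx = blockIdx.x * D + threadIdx.x;
  if (threadIdx.x < D) thread_data = input[idx];

  // Only thread 0 receives the valid reduction result, so broadcast it.
  const float max_val = BlockReduce(tmp_storage).Reduce(thread_data, cub::Max(), D);
  if (threadIdx.x == 0) max_block = max_val;
  __syncthreads();

  if (threadIdx.x < D) output[idx] = expf(input[idx] - max_block);
}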

@@ -238,8 +238,13 @@ Status Attention<T>::Compute(OpKernelContext* context) const {
  float* x = reinterpret_cast<T*>(scratch_data) + j * D;
  float* y = x;

  for (int i = 0; i < D; i++)
    y[i] = expf(x[i]);
  float max = 0.f;
Contributor

Another way to get the max (assuming D > 0):
max = x[0];
for (int i = 1; i < D; i++) {...
Otherwise, you might need to add a comment for the case where all x[i] are negative.

Member Author

If all inputs are negative, e^x won't have an instability issue, so it is fine to subtract 0 (as the max) from the values.

Contributor

Please add some comments in the code to avoid confusion.

@yufenglee yufenglee merged commit 72bdfc8 into master Jan 6, 2020
RyanUnderhill pushed a commit that referenced this pull request Jan 16, 2020
* Implement a more stable SoftMax

e^x is represented as infinity if x is large enough (e.g. 100.f). Infinity divided by infinity is NaN, so softmax produces NaN if one or more items are large enough. The following math transform is leveraged to get a stable softmax:

e^xi / (e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))

For convenience, max is forced to 0.f if all xi are negative.
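
The transform is exact because the common factor e^{-max} cancels between numerator and denominator; in LaTeX:

\frac{e^{x_i - m}}{\sum_{j=1}^{n} e^{x_j - m}}
  = \frac{e^{x_i}\, e^{-m}}{e^{-m} \sum_{j=1}^{n} e^{x_j}}
  = \frac{e^{x_i}}{\sum_{j=1}^{n} e^{x_j}},
\qquad m = \max_j x_j .
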
RyanUnderhill pushed commits that referenced this pull request Jan 16 and Jan 17, 2020
zhangxiang1993 added a commit that referenced this pull request Jan 21, 2020
* Packaging pipeline changes for VS 2019 (#2711)

* Tiny fix to codegen

* Simplify cache implementation and avoid static variables that may carry over between models

* Extend DML kernels (#2641)

* Additional DML operators

* Check unsupported attributes and inputs

* Address PR comments

* Add kernel capability function used for partitioning, and re-enable stride-based int64 support based on value range

* Fix test failures

* Build fix

* PR comments

* Update Nuphar tutorial notebook (#2721)

1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993

* Add schema for new Qops (#2611)

* Add schema for new Qops

* adding shape inference + qlinearaveragepool

* plus review comments

* plus review comments

* updates per review comments

* plus review comments

* [server] Add support for model_name and model_version as cli parameters (#2708)

* remove 64bit warning message from python validation. (#2727)

* MLAS: ARM64 build fix (#2734)

fix bad usage of vreinterpret to cast vector element types

* Fix broken python docs links (#2740)

* Fix build on Mac OS (#2731)

macOS ld doesn't support --whole-archive; the correct option is -all_load

* fix ngraph wheel (#2737)

* fix ngraph wheel

1.1.0 onnxruntime_ngraph wheel doesn't work

* remove libdnnl.so in nGraph Libs

* make it easy to compare

* Split onnxruntime server to a separated folder (#2744)

* Fix build for Python 3.8 (#2747)

* Fix build for Python 3.8

* Update protobuf to 3.11.2 (#1928)

Update protobuf to 3.11.2 (#1928)

* Change default optimization level to All (from Basic) (#2745)

* change default optimization level to All (from Basic)

* fix test

* fix c# test

* Update numpy to 1.18 (#2758)

* Update numpy to 1.18

* Pipeline changes for python 3.8 (#2753)

1. Pipeline changes for python 3.8
2. Fix a regression in setup.py which was just introduced in the previous commit.

Please notice, we still haven't made python 3.8 + Windows + CUDA work.

* Add basic stacktrace output for posix debug builds. (#2749)

* [NupharEP] fix a race condition when multiple sessions running different models concurrently (#2772)

* Revert "Change default optimization level to All (from Basic) (#2745)"

This reverts commit 56bb503.

* Fix typo in error message (#2736)

* Rename MKL-DNN to DNNL to fix broken link (#2730)

* Fix nightly build version number issue

* Pass BUILD_BUILDNUMBER to linux docker

* Disable featurizers in python packages

* Import more featurizers (#2781)

Make kernels non-template. Add input constraint for learnt data.
  Add min_max_scalar_transformer, robust_scalar_transformer,
  inputation_marker_transfomer, label_encoder_transformer,
 missing_dummies_transformer along with tests.
 Advance Featurizers library commit.

* Implement a more stable softmax (#2715)

* Implement a more stable SoftMax

e^x is represented as infinity if x is large enough (e.g. 100.f). Infinity divided by infinity is NaN, so softmax produces NaN if one or more items are large enough. The following math transform is leveraged to get a stable softmax:

e^xi / (e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))

For convenience, max is forced to 0.f if all xi are negative.

* Contributing: Fix a typo (#2784)

* ACL EP GEMM improvements (#2780)

When it is possible we use a fully connected layer instead of the GEMM implementation.
This lets the library pick the best implementation based on the input data.

* ACL EP convolution improvements (#2774)

Added the optimized implementation for depthwise convolution for both ACL v19.02 and ACL v19.05.
Also, pointwise convolution seems to be faster in the CPU implementation, so we opted for that instead.

* Add script for release Nuget validation (#2719)

* Initial commit

* Nits

* Disable a test temporarily

* Change working directory

* Test

* Add download python step

* Test update

* More changes

* Fix space issue

* Fix

* Verify nuget signing

* Fix

* Spaces

* PR feedback

* Nit

* Fix

* Fix

* Remove temporary changes

* add uint8 support to where op (#2792)

* Improve bert optimization script: (#2712)

(1) Move input int64=>int32 conversion to embed layer fusion.
(2) Output epsilon attribute for LayerNormalization fusion.

* add session creation time cost. (#2798)

* ML.NET team needs featurizers within a package (#2789)

Add auto ml featurizers to Windows, MacOS as well as to GPU  packaging-pipelines.

* Initialize max of softmax with lowest of float (#2786)

* MLAS: update SGEMM threading parameters (#2808)

* add interface to copy batch tensors. (#2807)

* add interface to copy batch tensors.

* onnxruntime

* speed up Windows TRT CI (#2811)

* don't run cuda tests if building with tensorrt

* remove unnecessary build options for win trt ci

* refactor win gpu tensorrt ci yml

* --numpy_version=1.17

* update

* update

* azcopy and cuda path

* Update test data (#2356)

* Add timeseries imputer transformer featurizer kernel (#2813)

 Make kernels non-template. Add input constraint for learnt data.
  Fixup tests.
  Add two more featurizers along with tests. Tests fail.
  min_max_scalar_transformer
  robust_scalar_transformer
  Fix tests serialized stream by prepending version bytes.
  Add inputation_marker_transfomer and the test.
  Fix up float/double type designations.
 Added label_encoder_transformer along with a test.
  string_throw case is broken at the moment.
  Fix labelencodertransfomer_test.cc string_throw case
  Rename maxabsscalertransformer_test.cc
  Add MissingDummiesTransformer along with the test.
  Update manifest.
  Add TimeSeriesImputerTransformer definition, implementation and tests

* Fix memory leak in TRT (#2815)

* fix memory leak issue

* revert EP_FAIL on enqueueV2

* Add manifest missing comma

* Run static code analyzer on most of our code (#2817)

* Scenario Test: Build Google Test and Taef Test based on preprocessor definition (#2809)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* update quantization doc (#2783)

* update documentation for quantization script

* plus some spell corrections

* Filter CPU case for IsFloat16Supported (#2802)

* update default optimization level + fix gemm_activation fusion (#2791)

* update default optimization level + fix gemm_activation fusion

* fix typo

* add unit test and incorporate review comments

* fix test comment

* Fix dnnl wheel package name (#2823)

* Append '-dnnl' to whl package name when --use_dnnl

* Update build.py

* Update Ubuntu & TensorRT version  in README (#2820)

Dockerfile.tensorrt uses nvcr.io/nvidia/tensorrt:19.09-py3 as the base image; update the Ubuntu and TensorRT versions according to
https://docs.nvidia.com/deeplearning/sdk/tensorrt-container-release-notes/rel_19-09.html#rel_19-09

* Merge fixes

* Add OneHotEncoder and HashOneHotEncoder kernels. (#2830)

 Add defs and implementation for OneHotEncoders, adjust the date_time_transformer kernel and test.
  Add OneHotEncoder kernel test.
  Add HashOneHotVectorizerTransformer unit test.
  This does not link due to multiple definitions of functions
  that are included into header from a CPP file.

* Upgrade gtest to the latest version (#2827)

WinML would like to update the googletest submodule. They want some newer features (namely GTEST_SKIP, to skip tests programmatically and to skip entire fixtures easily) and need to update the submodule version to get them.

However, the new version of the code hits a bug in gcc. The bug is fixed in the latest gcc, but we're using gcc 4.8.x, which won't be patched, so we compromise: change our code a little bit to make it work.

The gcc bug:  https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51213
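
As an illustration, GTEST_SKIP makes runtime skipping straightforward; a minimal sketch (HasGpuDevice is a hypothetical probe, not an actual helper in this repo):

#include <gtest/gtest.h>

bool HasGpuDevice();  // hypothetical environment probe

TEST(LearningModelSessionAPITests, EvaluateOnGpu) {
  if (!HasGpuDevice()) {
    GTEST_SKIP() << "No GPU available; skipping GPU test.";
  }
  // ... GPU-dependent assertions would go here ...
}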

* Add support for int64_t for topk CPU. Fixes github issue #2806. (#2833)

* Ignore allocator type in ExecutionProviders allocator map. Make default initialization of OrtMemoryInfo more clearly invalid. (#2768)

* Remove allocator type from the key comparison in ExecutionProviders.
Remove usage of DummyArena as it's no longer necessary.

* Fix x86 tests where arena allocator is disabled.
Make initialization of OrtMemoryInfo clearer by adding Invalid enum value.

* Make OrtValueNameIdxMap::MaxIdx more intuitive.

* Convert ExternalProject Featurizers into git submodule (#2834)

Add git submodule for Featurizer library.
  Update cmake to build for git submodule.

* add domain check for nodes + update documentation (#2831)

* Fix cgmanifest.json generating script (#2770)

* Fix protobuf submodule name

* Workaround pygit2 bug

* User/orilevari/32bit comparison warning (#2800)

* use correct type for for loop

* explicitly specify void for the parameters of OrtGetApiBase because the function is defined in C; when the parameter list is just (), it is interpreted as taking an unknown number of parameters, which was causing compiler warning C4276.

* CMake cross-generator fixes (#2790)

* Fix compilation w/ non-VS CMake generators

* Fix custom WINMD target in Ninja

* Remove usage of msbuild .targets file

* Fix linking using DML in Ninja

* Automate SDK kit version choice

* Cleanup DML package install

* Fix SDK version detection

* Fix comment

* Revert unittest linkage changes

* Fix latest SDK detection

* Don't link to non-uapcore libraries

* Remove MessageBoxA reference and unused link libs

* Fix Linux CUDA nuget packaging pipeline break

* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Refactor winml api tests

* Move additional gtest specific macro definition into googleTestMacros.h

* Fix test build break: winml_lib_api needs to be statically linked into the tests since winmlp::learningmodeldevice::iscpu() is used in devicehelpers.cpp (#2837)

* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)

* update optimization doc for BERT related fusions  (#2819)

* Add bert related transformers to doc
* Add execution provider and comment for bert optimizations
* Add comment about accuracy impact of approximation

* Fix warnings that cause build to fail

* MLAS: enable threading for quantized GEMMs (#2844)

* Fix test warnings and delayload linking (#2843)

* Ortmemoryinfo struct changed

* mark the camera scenario test as edgecore because it uses d3d11 (#2852)

* User/orilevari/pipeline fi breaks (#2853)

* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.

* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev

* Remove internal libs from tests (#2864)

* Support custom DML in onnxruntime_providers.cmake (#2867)

* remove old winmladapter cpp

Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
smk2007 added a commit that referenced this pull request Jan 28, 2020
* Create winml adapter c api

* fix build

* make it build

* move adapter into onnxruntime core/session

* entry point not exported

* minor changes

* make model metadata work

* make tests pass

* implement all the model reflection apis on the adapter c abi

* update the new ort interface to create a lotus environment with a logging sink

* start adding ort env

* move all winml code into adapter folder/lib to isolate it

* ensure a single logging manager at a time

* start refactoring session

* refactor session creation interface

* add cpu and dml session option methods to adapter

* finish session init

* stub out interfaces in ort lib to perform mechanics similar to those of iinference session

* enable profiling, and enable schema override

* update session register graph transformers

* turn back on custom registry for custom ops

* Add sync api

* add last c api stubs

* should build... but all feature values are broken since this is in flight to moving all implementation details into ivalue

* remove ep adapter header

* Implement DML execution provider functions from adapter (#2846)

* Implement DML execution provider functions from adapter

* Use functions in OnnxruntimeEngine.cpp

* make map/sequence type_infos freeable, and start implementing ivalue

* make it build again

* implement value methods

* implement remaining methods

* remove com adapter abi

* check dml session

* cache the allocator on ivalue

* check if resource is cpu/gpu when access its mutable data

* update tensor

* mismatched parentheses

* fix tensor base and binding obj

* it evaluates tensors! sometimes...

* minor fixes

* enable gpu evals

* wrap all existing winml adapter apis with API_IMPL to try/catch (#2854)

* update winml... tensor strings are broken, need to template tensorbase to do different things for strings

* make tensor strings work with 2 copies in/2 copies out

* Fix tensor string and allocator bug

* make maps work again... needs some fixes still

* Make it build!

* enable map inputs

* map outputs

* unbound outputs for sequences and maps

* User/xianz/merge windowsai (#2883)


* move sequence implementation into ort lib... still commented out... need to turn back on...

* begin sequence implementation

* make maps and sequences work

* fix broken tests

* remove dead code

* misc cleanup

* CR feedback

* User/xianz/winml adapter c api (#2869)

* wrap all existing winml adapter apis with API_IMPL to try/catch

* Return HR or Throw for WinML adapter APIs if failed

* undo macro wrapper for two places

* Wrap error macros around ort apis, too.

* address CR feedback #2

* add more api throw/return macros

* Revert changes no longer needed

* revert changes to cxx api

* format winml lib.ort and winml adapter

* remove static pheonix singleton

Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>
smk2007 added a commit that referenced this pull request Feb 5, 2020
* Initial Commit

* Merged PR 3985217: add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc (#2346)

add onecoreuap_apiset.lib in order to avoid linking against kernel32.lib etc. and violating our OS layering requirements.

We linked against onecoreuap_apiset.lib in VB so we will continue doing this, but I am still unsure why we don't link against onecore instead, since that is where we ship. However, since Sheil is the owner of this code we will wait to discuss with him before changing anything.

* Initial changes for layering

* more snipping to get core into ort

* update build instructions to include --build_shared_lib (#2358)

* update build instructions to include --build_shared_lib

* fix line breaks

* Task 23998197: add winml_lib_core into onnxruntime.dll (#2368)

* Task 23998197: add winml_lib_core into onnxruntime.dll

* PR feedback
build break on perf_test

* return proper error when the model path isn't found (#2391)

* LearningModelSession is cleaned up to use the adapter, and parts of b… (#2382)

This is a big PR. We are going to move it up to layer_dev, which is still an L3, so we are still safe to do work there in an agile way.

We are going to move this into the L3 so that Ryan can start doing integration testing.

We will pause for a full code review and integration test results prior to going into the L2.

>>>> raw comments from previous commits >>> 

* LearningModelSession is cleaned up to use the adapter, and parts of binding are.
* moved everything into the winmladapter
made it all nano-COM, using WRL to construct objects on the ORT side.
base interfaces for everything for winml to call
cleaned up a bunch of winml to use the base interfaces.
* more pieces
* GetData across the abi.
* renamed some namespaces
cleaned up OrtValue
cleaned up Tensor
cleaned up custom ops.
everything *but* learningmodel should be clean
* make sure it's building.   winml.dll is still a monolith.

* model moved over.
everything builds clean.
step !

* weak ref comment

* Layer dev paulm (#2408)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* Layer dev paulm (#2414)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* User/xianz/win ml telemetry (#2410)

* add option to enable winml telemetry

* add option to enable winml telemetry

* clean logs while developing

* clean the log of GUID

* compile onnxruntime_common with winml telemetry

* use option for use_telemetry

* rename option winml_use_telemetry to onnxruntime_use_telemetry

* little change

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* Layer dev paulm (#2423)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* Layer dev paulm (#2424)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* Layer dev paulm (#2425)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* fixed 2 more heap corruptions

* Layer dev paulm (#2426)

* model moved over.
everything builds clean.
step !

* weak ref comment

* added a wrapper for RoGetActivationFactory to hook back into winml for creating winml objects.
fixes model load.

* fixed some lifetime management.
fixed the debug build.
squeezenet passes using winmlrunner for CPU and GPU

* PR feedback.

* couple of fixes and coded getmutabledata()

* fixed 2 more heap corruptions

* Add opset and IR check when loading model (#2413)

* Add opset and IR check.
* Add test case for future opsets.

https://github.com/microsoft/onnxruntime/issues/2371

* fixed map and sequence when passing stl types across the ABI .
found a leak in nvidia driver, but skipped it.
all winmlapitests pass now

* Moved SessionOptions over to the abi

* WinML CI (#2412)

* Pass flags to build/test WinML in CI

* Add initial CMake config for unit tests in WinML

* Set winml_unittests standard to C++17

* Add WinML API tests and port them to googletest

* Install WinML test collateral

* Add LearningModelSessionAPITests ported to googletest

* Fix WinML test files encoding

* Add GPU tests

* Add parameterized test, skip GPU tests

* Enable precompiled header

* Remove unused code and collateral

* Remove brand images

* Add dllload.cpp

* Remove images not used in API tests

* Add LICENSE.md to image collaterals

* Add models with licenses

* Remove FNS Candy tests

* Add API test models

* Add ModelInSubdirectory

* Install collaterals post-build with copy_if_different, split common lib

* fix warnings

* Link to gtest_main

* Register WinML TraceLogging provider on Onnxruntime.dll (#2455)

* Register WinML TraceLogging provider on Onnxruntime.dll

* Add ifdef to make sure trace logging provider has telemetry option when LAYERING_DONE

* No need for ifdef for TraceLoggingOptionMicrosoftTelemetry

* PR feedback

* Move etw registration into lotus environment constructor and deresgister in lotus environment destructor

* Brianma/cpuwinml (#2466)

* allow building winml cpu without dml.

* Brianma/breaks (#2469)

* fix some more breaks

* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers

* move dml checks out of winml and into the adapter

* better error handling

* Brianma/fi (#2470)

* learning model doesn't need lotusEnvironment and CPU shouldn't include dmlEP headers

* User/xianz/win ml telemetry (#2410)

* add option to enable winml telemetry

* add option to enable winml telemetry

* clean logs while developing

* clean the log of GUID

* compile onnxruntime_common with winml telemetry

* use option for use_telemetry

* rename option winml_use_telemetry to onnxruntime_use_telemetry

* little change

* Add opset and IR check when loading model (#2413)

* Add opset and IR check.
* Add test case for future opsets.

https://github.com/microsoft/onnxruntime/issues/2371

* WinML CI (#2412)

* Pass flags to build/test WinML in CI

* Add initial CMake config for unit tests in WinML

* Set winml_unittests standard to C++17

* Add WinML API tests and port them to googletest

* Install WinML test collateral

* Add LearningModelSessionAPITests ported to googletest

* Fix WinML test files encoding

* Add GPU tests

* Add parameterized test, skip GPU tests

* Enable precompiled header

* Remove unused code and collateral

* Remove brand images

* Add dllload.cpp

* Remove images not used in API tests

* Add LICENSE.md to image collaterals

* Add models with licenses

* Remove FNS Candy tests

* Add API test models

* Add ModelInSubdirectory

* Install collaterals post-build with copy_if_different, split common lib

* fix warnings

* Link to gtest_main

* fix bad merge

* Checking in a staging checkpoint so that Ryan can work with me in parallel

* build break.

* Brianma/testfails (#2473)

* add missing ir version to dictvectorizer-string.onnx

* add missing ir version to relu.onnx

* add missing ir version to zipmap*onnx

* add IR version to manually generated models

* remove an unnecessary ifdef dml

* Brianma/windowsai fi (#2475)

* update dockerfiles/README (#2336)

* Make elementwise op run 4 items per thread (#2335)

Make each elementwise op run 4 items per thread and unroll the loop to leverage ILP; also remove an unnecessary N==0 check inside the elementwise GPU kernel. This improves the performance of GPU elementwise ops: ~2% gain on a popular NLP BERT model.
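
A rough sketch of the pattern described above (illustrative only, not the actual ONNX Runtime kernel): each thread processes four elements and the loop is unrolled so independent loads and computations can overlap (ILP):

template <typename T, typename Op>
__global__ void Elementwise4PerThread(const T* in, T* out, int N, Op op) {
  // Each thread owns four consecutive elements; #pragma unroll lets the
  // compiler schedule the four iterations' loads and ops independently.
  const int base = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
  #pragma unroll
  for (int i = 0; i < 4; i++) {
    const int idx = base + i;
    if (idx < N) out[idx] = op(in[idx]);
  }
}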

* Add CUDA GatherElements kernel (#2310)

* Updates

* Update test

* Update

* Updates

* nits

* PR feedback

* Update

* Update

* PR feedback

* PR comments

* Update

* Fix build

* Fix build

* Nits

* Fix

* Layer Normalization Fusion  (#2319)

basic layer normalization transform

* Add FastGelu Cuda Op for Gelu and Add bias fusion (#2293)

* Add FastGelu cuda op

* Add AddBiasGelu for experiment

* Revert "Add AddBiasGelu for experiment"

This reverts commit 5c1ee019858c657e6bb75887265cb85675626e5b.

* Add bias

* Add unit tests

* update comment

* update script

* fix build error

* update coding style

* update for CR feedback
Enable half2 optimization only when cuda arch >= 7.0

* move _Tanh to common.cuh

* implement CPU contrib OP Attention (#2333)

* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanUnusedInitializers. (#2320)

* Remove unused initializer from GraphProto as well as name_to_initial_tensor_ in CleanupUnusedInitializers.

This means initializers that have been replaced during graph optimizations are not left in the GraphProto when we save an optimized model.

* Handle edge case where a model has an unused initializer with matching graph input by also removing the graph input.

* Use non-const iterators in std::find_if calls to make centos build happy.

* Nuget pipeline changes (#2305)

1. refactor the pipeline, remove some duplicated code
2. Move Windows_py_GPU_Wheels job to Win-GPU-CUDA10. We'll deprecate the "Win-GPU" pool
3. Delete cpu-nocontribops-esrp-pipeline.yml and cpu-nocontribops-pipeline.yml
4. In Linux nuget jobs, run "make install" before creating the package, so that extra RPATH info will be removed

* Cuda Reverse Sequence Op, mapping types of the same size using the same template function. (#2281)

* Set ElementType to String type of node metadata, instead of byte[] (#2348)

* Set ElementType to String type of node metadata, instead of byte[]

* Fix spacing

* Introduce PrimitiveType into a Type System along with an integer constant (#2307)

Improve perf by avoiding GetType<T>() calls. Introduce MLTypeCallDispatcher to switch on Input Type. Add Tensor IsType<T>() fast method.
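
The gist can be sketched as giving each primitive type a compile-time integer tag and comparing tags instead of fetching type objects (a simplified illustration; the real PrimitiveType/MLTypeCallDispatcher machinery is more involved):

#include <cstdint>

// Each supported primitive type gets a compile-time integer constant.
template <typename T> struct TypeTag;
template <> struct TypeTag<float>   { static constexpr int32_t value = 1; };
template <> struct TypeTag<double>  { static constexpr int32_t value = 2; };
template <> struct TypeTag<int64_t> { static constexpr int32_t value = 3; };

struct TensorLike {
  int32_t type_tag;  // stored once when the tensor is created

  // Fast type test: a single integer compare, no GetType<T>() lookup.
  template <typename T>
  bool IsType() const { return type_tag == TypeTag<T>::value; }
};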

* Fix/test dim value of 0 handling in a couple of places (#2337)

* Update the CUDA Where implementation broadcasting logic to handle a dim with value of 0.
Add unit test
Also add unit test for unary op with dim value of 0

* Exclude ngraph from Where test with 0 dim.

* Openvino EP R3.1 onnxrt server (#2357)

* onnxrt server with OVEP

* onnxrt server with OVEP

* Update Dockerfile.server.openvino

* onnxrt server OVEP fix reviews

* onnxrt server OVEP fix reviews

* Implement cuda nonzero op. (#2056)

Implement cuda nonzero op.

* Direct use python numpy array's memory if already contiguous.  (#2355)

* Directly use a python numpy array's memory if it is already contiguous. This
could greatly improve performance for sessions with large inputs;
for a big 1920x1080 image with fastrcnn, a 30~40% speedup could be achieved.

* Add test cases enforcing contiguous/non-contiguous numpy arrays as inputs.

* Add helper to create output to minimize binary size. (#2365)

Add ConstEigenTensorMap typedef so we don't unnecessarily const_cast the const input Tensor.

* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS (#2369)

* fix builds enabling onnxruntime_DEBUG_NODE_INPUTS_OUTPUTS

* update

* Add Tracelogging for profiling (#1639)

Enabled only if onnxruntime_ENABLE_INSTRUMENT is ON

* test bidaf with nuphar for avx target (#2370)

increase nuphar test coverage a bit

* Fix a bug in TLS refcount that may have destabilized CUDA CI (#2374)

* update output size calculation for resize (#2366)

* change how output size is calculated for resize op

* add tests for ver 10 resize

* Extend OneHot CPU kernel to support more types (#2311)

* Extend OneHot CPU kernel to support input int64_t, depth int32_t, output float

* Skip BERT before the test data fix is picked up

* Fix bug with Slice. Need to pass in flattened input dimensions so the initial offset into the input is calculated correctly. (#2372)

* Add opset 11 version of Split to CUDA ops (#2376)

Organize the CUDA ops definitions so all the opset 10 and 11 parts are together (same setup used for CPU ops)

* Layer Norm Fusion Fix (#2379)

* layer norm fusion fix

* Add input shape check in code and unit tests

* Fuse Add + Gelu (#2360)

Implement the transformer to fuse add + gelu
Implement the accurate kernel

* Skip layer norm transform (#2350)

* skip layer normalization transformer

* Another try to stabilize CUDA CI (#2383)

The root cause seems to be a failure in CUDA dealloc at teardown. The cudaFree return code was ignored before, so the debug check should ignore it as well.

* fix BUILD.md typo (#2375)

build.py: error: argument --config: invalid choice: 'RelWithDebugInfo' (choose from 'Debug', 'MinSizeRel', 'Release', 'RelWithDebInfo')

* Fixed compilation with ngraph (#2388)

* Fix reuse logic in allocation planner. (#2393)

* Fix reuse logic in allocation planner.

* PR comments

* Add helpful comments

* Don't allow reuse across string tensors.

* [NupharEP] Multiple optimizations  (#2380)

Fuse transpose into MatMul
Implement Pow and constant scalar simplification
Vectorize ReduceMean
Improve symbolic shape inference
Minor updates for better debugging in fused function name

* Avoid using the default logger in the graph lib and optimizers (#2361)

1. Use the session logger if it is available.
2. Don't disable warning 4100 globally. We should fix the warnings instead of disabling it.

* Change CUDA implementation of Transpose to support all fixed size tensor types (#2387)

* Change CUDA implementation of Transpose to not use a typed kernel so we can support more types with minimum binary size.
Add support for 8, 16, 32 and 64 bit types.
Add unit tests.
Add method so the implementation can be called directly (will be used by CUDA Scan very soon).

* Disable TensorRT for MLFloat16 and int8 unit tests.

* Address PR comment and add support for calling cublas implementation if type is mlfloat16.
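
The binary-size saving here typically comes from dispatching on element size rather than element type, so one instantiation serves every type of that width (a sketch under that assumption, not the actual ORT kernel):

#include <cstdint>
#include <cstring>

// One path per element *size* covers many element *types*: float, int32_t
// and uint32_t all take the 4-byte path; double and int64_t the 8-byte
// path; a 16-bit float type the 2-byte path.
inline void CopyElement(void* dst, const void* src, size_t element_size) {
  switch (element_size) {
    case 1: *static_cast<uint8_t*>(dst)  = *static_cast<const uint8_t*>(src);  break;
    case 2: *static_cast<uint16_t*>(dst) = *static_cast<const uint16_t*>(src); break;
    case 4: *static_cast<uint32_t*>(dst) = *static_cast<const uint32_t*>(src); break;
    case 8: *static_cast<uint64_t*>(dst) = *static_cast<const uint64_t*>(src); break;
    default: std::memcpy(dst, src, element_size);                              break;
  }
}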

* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added. (#2398)

* Add opset 11 versions of the existing CUDA operators that had negative axis support explicitly added.

* [NupharEP] force some low/zero cost ops to be inlined (#2409)

* fix cross compile bug (#2415)

* Minor optimization: if a node has already been placed, there's no need to find a kernel for it. (#2417)

* Add Reshape Fusion (#2395)

* Add reshape fusion

* Add some comments

* update comments

* update comment format

* update according to feedback

* update for recent logger change

* fix build error

* (1) Support both input and output edges in find path in graphutils
(2) Add a test case of only one constant initializer of Concat input.
(3) Refactor ReshapeFusion class to allow add more subgraph fusion in the future.

* fix error

* (1) loose constraint on initializer: non constant is allowed for reshape fusion.
(2) Change versions type to vector.
(3) Add logging.
(4) Return false when multiple output edges matched in FindPath. Add comments.

* only allow one direction (input or output) in FindPath

* [NupharEP] Update notebook and docker image (#2416)

Add BERT squad in Nuphar tutorial
Enhance speed comparison readability

* Fix the issue in matmul_add_fusion (#2407)

Fix the issue in matmul_add_fusion

If Matmul + Add has shapes [K] * [K, N], resetting them to [1, K] * [K, N] makes the output shape [1, N], which also requires a reshape on the output.
Fix: just remove the shape reset and do not fuse it.

Add a negative test case for matmul+add fusion

* feat(treeregressor): Update TreeEnsembleRegressor for type support (#2389)

Updates the `TreeEnsembleRegressor` to allow for `double`, `float`,
`int64`, and `int32` inputs to match the upstream specification.

Signed-off-by: Nick Groszewski <nicholas.groszewski@capitalone.com>

* onnxrt server documentation update (#2396)

* Added support for Pad-2 operator in OpenVINO-EP (#2405)

* Add CUDA If operator. (#2377)

* Add CUDA If operator.
Uses CPU operator for implementation.
By adding a CUDA version the inputs/outputs (with the exception of the 'cond' input) stay on GPU, and no other logic is required to avoid a copy to CPU across the control flow node.

* Improved documentation for onnxruntime::utils::SwapByteOrderCopy(), added precondition check.

* Fix the type constraints on CUDA If operator to exclude strings. (#2431)

* add Im2col<uint8_t> (#2438)

* Adjust codegen vectorization width from target (#2439)

* Adjust codegen vectorization width from target

* Add CUDA Scan operator. (#2403)

* Add Scan CUDA op.
Uses CPU implementation for logic.
Added some device specific functors for handling when data needs to be manipulated on a different device.
Added ability to override the materialization logic in the OrtValue slicer so DML can plugin their handling.

* Fix Windows GPU C API packaging pipeline failure (#2440)

Fix Windows GPU C API packaging pipeline failure (#2440)

* Correctly handle implicit inputs for fused nodes (#2390)

* Correctly handle implicit inputs for fused nodes

Previously, nuphar's partitioning function didn't include
node's implicit inputs into the inputs list of MetaDef, and hence
a crash was triggered in the onnx graph checker.

This commit fixed the issue. Furthermore, it also fixed a related
issue where we didn't add implicit inputs into
graph_inputs_excluding_initializers_ in Graph::SetGraphInputsOutputs.

The issue was that graph_inputs_including_initializers_, populated by
SetInputs (e.g. called by FunctionImpl::FunctionImpl), may contain
implicit inputs which were not among any node's initializers in the graph.
Because they were not part of any initializers, these implicit inputs
couldn't be visited by going through all nodes' inputs.
Consequently, they would *not* be added into graph_inputs_excluding_initializers_.

We fixed the issue by first copying the populated graph_inputs_including_initializers_
into graph_inputs_excluding_initializers_, which then had both initializers and
non-initializers as its initial content. Later, we erase initializers from the
list. In this way, we ensure that all implicit inputs remain in
graph_inputs_excluding_initializers_.

* refined comments and fixed duplicates

Address CR by revisiting comments in terms of implicit inputs

Also fixed an issue by skipping duplicates while copying inputs
from graph_inputs_including_initializers_.

* address CR

explain why we need to collect nodes' implicit inputs

* don't rely on pointer values for iterating std::set

Previously, openvino relied on iterating a set of NodeArg pointers
to construct inputs and outputs for a fused graph. It could cause
non-determinism. The reason was that although iterating std::set by
itself is stable, pointer values of NodeArgs may vary. Consequently,
we could end up visiting the set's elements in different orders for
different runs for the same test, which resulted in constructing
inputs (and outputs) with different orders to the fused graph.
For example, for the same test, we may have inputs [A, B] in some
runs but inputs [B, A] in others.

Let's use std::string as the key type to avoid such nondeterminism.

This commit also added implicit inputs into meta->inputs while returning
the capability from the openvino provider.
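
The hazard is easy to reproduce in isolation: a std::set keyed on pointers iterates in address order, which can differ between runs, whereas a set keyed on names iterates in a stable lexicographic order (a standalone illustration):

#include <iostream>
#include <set>
#include <string>

struct NodeArg { std::string name; };

int main() {
  NodeArg* a = new NodeArg{"A"};
  NodeArg* b = new NodeArg{"B"};

  std::set<NodeArg*> by_ptr{a, b};   // ordered by heap address: "A B" or
  for (auto* n : by_ptr)             // "B A" depending on where the
    std::cout << n->name << ' ';     // allocator happened to place them
  std::cout << '\n';

  std::set<std::string> by_name{a->name, b->name};  // always "A B"
  for (const auto& n : by_name) std::cout << n << ' ';
  std::cout << '\n';

  delete a;
  delete b;
  return 0;
}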

* Fixed another latent issue in openvino's GetCapability function

The issue was that we couldn't simply erase fused_inputs and fused_outputs
while iterating the nodes. For example, an output NodeArg may have multiple
uses, and it's wrong if we erase it from fused_outputs when we encounter only
one of its uses as input.

* Remove DeviceAllocatorRegistry class (#2451)

Remove DeviceAllocatorRegistry class

* CSharp api and test for loading custom op shared library (#2420)

- Added C-API test for loading custom op shared lib.
- Made some changes in C++ api header and C-api implementation to get it working.
- Added C# API and corresponding test for loading custom op shared library.

* Parallel Gelu with ParallelFor (#2399)

Parallel Gelu to get better performance for Gelu

* Clean up build.py (#2446)

* Pull the latest image before running docker build

* Fuse SkipLayerNorm with Bias (#2453)

Fuse SkipLayerNorm with Bias

* Allow more than one invocation of CreateEnv in the same process. (#2467)

* Allow more than one invocation of CreateEnv in the same process.

* Fix centos build

* Symbolic shape inference improvements: (#2460)

* Symbolic shape inference improvements:
- add a mode to guess unknown ops' output rank
- add support for GatherND
- add support for If
- fix a bug in get_int_values when the tensor rank > 1, by treating it as no sympy data
- add symbol to literal merge when ONNX silently merges dims
- fix a bug in Concat when input dim is 0
- fix a bug in ConstantOfShape that computed dim is not updated
- add support for dynamic shape in ConstantOfShape
- fix a bug in Loop output shape that loop iterator dim is not inserted at dim 0
- add support for dynamic padding in Pad
- add support for dynamic shape in Reshape
- add support for Resize with opset > 10, by treating output dims as dynamic
- fix a bug in Slice when starts/ends are dynamic
- restrict input model to opset 7 and above
- make output model optional to avoid disk write when testing

Run model tests for symbolic shape inference

Reduce 2GB docker image size of nuphar

* add additional test data set for nuget pipeline (#2448)

* add SAS token to download internal test data for nuget pipeline

* update azure endpoint

* fix keyvault download step

* fix variable declaration for secret group

* fix indentation

* fix yaml syntax for variables

* fix setting secrets for script

* fix env syntax

* Fix macos pipeline

* attempt to add secrets to windows download data

* fix mac and win data download

* fix windows data download

* update test data set url and location

* Revert "Brianma/windowsai fi (#2475)"

This reverts commit 5780b864a15513fda4eadbfc2b5345fefe70b5ec.

* Add scenario tests (#2457)

* Add scenario tests

* Remove TODO from model license

* Add winml_api test dependency

* fix model load test. fi from master changed the constructor (#2483)

* make api tests all pass (#2486)

* fix bad merge

* fix bad model merge

* Layer dev paulm (#2492)

* comments for dml graph transformer
fixed ort value passing using the allocator info

* fixed and coded maps and sequences across the abi

* Rename ambiguous header (#2489)

* fix one more missing IR version model (#2500)

* add missing IR version to 4 more models used by scenario tests (#2501)

* Add CLI parameters to test runner, build WinML in ARM and x86 CI (#2479)

* Support test parameters through CLI arguments

* Add WinML to Windows x86/ARM CI builds

* Code style fixes

* Update googletest

Remove GPUTEST macros everywhere now that GTEST_SKIP is supported

* Refactor main.cpp

* Build scenario tests without DML

* Link scenario tests to DML when it's enabled (#2502)

* Layer dev release pipeline (#2488)

Adds winml binaries to existing cpu nuget package, and creates new gpu dml nuget package with winml binaries and DML EP.

* Layer dev paulm (#2506)

* comments for dml graph transformer
fixed ort value passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* Remove usage of IOBinding in WinML and use C_API Run method (#2504)

* remove usage of iobinding

* Change data structure to use vector of Ort::Values

* Polish bind input / output

* Use C API Run method

* Update providers on evaluate getresults

* Remove run and IObinding interface from WinMLAdapter

* Remove use of IObinding

* bind unbound outputs code moved to learningmodelbinding

* clean up unneeded istensor adapter function

* Fix comment

* Check if session is closed before binding and clearing

* PR feedback

* Layer dev paulm (#2507)

* comments for dml graph transformer
fixed ort value passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* Make tests dependent on winml_dll (#2509)

* add dml binaries to DirectML package and be more explicit about condition variables (#2520)

* re-enable warnings for winml builds and fix the warnings that were hiding (#2526)

* turn devmode back on for winml builds

* fix some warnings. include protobuf in a way that disables some warnings

* undo protobufhelpers changes and just ignore 4100 errors in pb code

* attempt to isolate protobufhelpers errors

* add template specialization for getting tensor proto data

* Layer dev paulm (#2533)

* comments for dml graph transformer
fixed ort value passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* moved files from inc to lib\api.core
cleaned up some of the cmake

* staged changes

* Spawn child process to run DeviceLostRecovery scenario test (#2530)

* Spawn child process to run DeviceLostRecovery scenario test

* Layer dev paulm (#2536)

ori said yes

* add missing namespace to winml_trace_logging_provider in lotusenvironment.h (#2542)

* Handle exception thrown from all apis in WinMLAdapter (#2539)

* various changes to unblock windowsai ADO build

* Fix custom ops scenario tests (#2562)

* Do not shut down protobuf after the ort environment gets destroyed. Lazy load the lotus environment the first time it is needed

* comment typo

* pr comment about calling phoenix singleton

* Make lotus_environment static in winmladapter

* Layer dev paulm (#2567)

* comments for dml graph transformer
fixed ort value passing using the allocator info

* fixed and coded maps and sequences across the abi

* cleaned up w4's
cleaned up the model info ABI
delayload directml.dll from winml

* cleaned up namespace aliases.
renamed _winmla to winmla
this was good PR feedback from tiago a while back.

* moved files from inc to lib\api.core
cleaned up some of the cmake

* staged changes

* making windowsAI azure dev ops work.

* code review comments.

* revert changes

* Cmake and preprocessor fixes that were uncovered by building on agents without DML available via SDK

* Layer dev dml delayload (#2580)

* Brianma/cpu (#2583)

* don't include dml stuff in cpu builds

* tests that link the image lib also need the telemetry lib now

* Throw Winml_err_invalid_binding if binding gpu resource on cpu device (#2589)

* Throw Winml_err_invalid_binding if binding gpu resource on cpu device

* PR comments. No need to query the execution provider to check whether it is a gpu device

* User/xianz/ortthrow (#2596)

* throw and handle onnxruntime exceptions

* handle exception thrown from ort in winmladapter

* undo changes in error.h

* add message to HRESULT

* User/xianz/ortthrow (#2599)

* throw and handle onnxruntime exceptions

* handle exception thrown from ort in winmladapter

* undo changes in error.h

* add message to HRESULT

* add status error message

* Remove uwp onsuspending winrt call because logruntimeperf is getting removed (#2630)

* User/xianz/dedup telemetry (#2631)

* investigate duplication of telemetry in winml and ort

* remove winml telemetry events

* telemetry executionProviderEvent

* remove unnecessary file and refactor code a little bit

* Revert TelemetryEvent, which sends up the ETW event.

* merge changes from layer_dev to windowsai (#2638)

* Remove underscore from googletest names (#2616)

* Fix leaking memory allocator

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24278761
and https://microsoft.visualstudio.com/OS/_workitems/edit/24330198

* Explicitly initialize Ort::Value with nullptr

* Cache WinML adapter

* bad merge

* define private version of dxcore enum that is added in 19H1 SDK. (#2654)

* add comment explaining the private definition of the dxcore d3d feature level enum value. (#2672)

* do not package directml.pdb for redist packages. (#2676)

* Fix leaking operator registry (#2645)

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24354916

* User/orilevari/windowsai master merge (#2674)

merge resolutions included pulling in telemetry logic that was merged to master but not windowsai, and dereferencing InferenceSession::sessionstate now that it is a unique pointer

* Delete Ort Allocator in LearningModelBinding (#2653)

* Delete OrtAllocator in LearningModelBinding

* PR comments to make Ort::Allocator a smart pointer

* Small comment change

* PR feedback to clean up code

* PR feedback on move semantics

* Clean up std::move

* Fix memory leaks (#2679)

Fix https://microsoft.visualstudio.com/OS/_workitems/edit/24356109,
https://microsoft.visualstudio.com/OS/_workitems/edit/24388361 and
https://microsoft.visualstudio.com/OS/_workitems/edit/24388596

* various changes to properly organize and skip GPU tests. For now, for no-DML builds we will not run GPU tests at all. In the future we should adapt the tests to expect the appropriate errors. (#2695)

* Windowsai without fi (#2701)

* Disable Attention fusion tests when DISABLE_CONTRIB_OPS is defined (#2529)

* Setup java ci (#2528)

* Add provision in ORT for session options to be parsed when available via model file  (#2449)

* Initial commit

* Fix gitmodules

* Nits

* Nits

* Updates

* Update

* More changes

* Updates

* Update

* Some updates

* More changes

* Update

* Update

* Merge

* Update

* Updates

* More changes

* Update

* Fix nits

* Updates

* Fix warning

* Fix build

* Add comment

* PR feedback

* PR feedback

* Updates

* Updates

* Update

* More changes

* Fix build break

* Comment test for now

* Updates

* Updates

* PR feedback

* Updates

* Nits

* Add tests

* Fix build

* Fix build

* Fix build

* Fix build break

* Fix build

* Nits

* PR feedback

* More change

* Expose GetSessionOptions in pybind logic and add unit test for python

* Fix build

* PR feedback

* PR feedback

* Revert "Disable thread pool creation when enabled OpenMP (#2485)" (#2535)

This reverts commit 7c7d5a149c9ed52eec67304bae5c4b132166a8a1.

* Add dynamic shape support in TensorRT execution provider (#2450)

* remove onnx-tensorrt submodule

* add new onnx-tensorrt submodule (experiment) for trt6

* update engine build for trt6

* update compile and compute for tensorrt6.0

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* switch to onnx-tensorrt master for TensorRT6

* Update tensorrt_execution_provider.cc

* Handle dynamic batch size and add memcpy in TensorRT EP

* update test cases

* Update tensorrt_execution_provider.cc

* update onnx-tensorrt submodule

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.ubuntu_tensorrt

* Update run_dockerbuild.sh

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update concat_op_test.cc

* Update tensorrt_execution_provider.cc

* Upgrade TensorRT to version 6.0.1.5

* Update onnxruntime_providers.cmake

* Update CMakeLists.txt

* Update reduction_ops_test.cc

* Update install_ubuntu.sh

* Update Dockerfile.ubuntu_tensorrt

* Update Dockerfile.tensorrt

* Update BUILD.md

* Update run_dockerbuild.sh

* Update install_ubuntu.sh

* Update onnxruntime_providers.cmake

* Update install_ubuntu.sh

* Update install_ubuntu.sh

* Update gemm_test.cc

* Update gather_op_test.cc

* Update CMakeLists.txt

* Removed submodule

* update onnx-tensorrt submodule

* update header file

* Removed submodule

* add submodule onnx-tensorrt kevin's branch shape-test

* add debugging code

* Update tensorrt_execution_provider.cc

* Update tensorrt_execution_provider.cc

* merge master

* Removed submodule

* update onnx-tensorrt submodule

* add more changes for dynamic shapes

* Update tensorrt_execution_provider.cc

* update for dynamic shape

* update dynamic shape processing

* fix logger issue

* remove submodule onnx-tensorrt

* add submodule onnx-tensorrt

* add env variable min_subgraph_size

* remove redundancy

* update document

* use onnxruntime::make_unique

* fix multi-run issue

* remove some tests to save CI build time

* Add dynamic shape test

* Update TensorRT-ExecutionProvider.md

* Add example of running Faster R-CNN model on TensorRT EP

* Add more details on env variables

* update environment variables

* Update tensorrt_basic_test.cc

* Update model tests

* Update tensor_op_test.cc

* remove --use_full_protobuf

* Update build.py

* User/xianz/telemetry (#2458)

* enable telemetry

* enable telemetry

* set enable telemetry as default

* for debugging

* remove log and restore disabled telemetry as the default

* delete private file while testing

* resolve comment: mainly add license header, rename macro and update docs

* rewording in privacy.md

* Fix integer overflow in cuda NonMaxSuppression implementation (#2540)

* add test case that should pass but fail

* fix nms

* extract int_max_output_boxes_per_class
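
A sketch of what extracting int_max_output_boxes_per_class likely amounts to (an assumption, not the actual kernel code): clamp the int64 attribute before narrowing so the cast cannot wrap.

```cpp
#include <algorithm>
#include <cstdint>
#include <limits>

// Clamp the int64 ONNX attribute to int range before handing it to CUDA
// code that works with 32-bit sizes; a raw cast could wrap to a negative.
int ClampBoxesPerClass(int64_t max_output_boxes_per_class) {
  return static_cast<int>(std::min<int64_t>(
      max_output_boxes_per_class, std::numeric_limits<int>::max()));
}
```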

* Introduce container type runtime checks and other improvements (#2522)

Rework TensorSeq in a manner consistent with Tensor and SparseTensor
  in terms of type system setup.
  Reduce templating. Introduce helpers to ensure the same
  data type.
  Make OrtValue __dtor not virtual.
  Introduce ContainerChecker

* Fix C API tests for centos and mac (#2544)

* change c++14 to c++11

* add ld lib path for centos

* enable csharp tests on macos

* fix C API test on MacOS + fix manylinux dotnet install

* fix manylinux dotnet install

* fix lib link

* Add back executable bit to build.py

* Fix a bug handling negative begin pad values in Pad op (#2550)

* Fix bug in Pad op

* Update

* DNNL CMAKE update (#2548)

* Fix android build (#2558)

* Update win-x86-ci.yml (#2557)

Fix build pipeline break

* Re-enable Windows C# tests (#2564)

* disable onnx_test_runner -x invocations for dnnl (#2568)

* Allow sequence length to be symbolic (#2559)

* setup java ci mac (#2570)

* make layernorm fusion support opset 11 (#2545)

* Fix a warning found in the latest VS release

* Add more check on SkipLayerNorm and BiasGelu fusion (#2574)

* Fix file not found error during docker build. (#2569)

* Add ConvTranspose1D (#2578)

* Ryanunderhill/packagename test (#2582)

* [Nuphar EP] fixes for some object detection models (#2581)

Update notebook tutorial with multi-threaded int8 GEMM from #2517

* EmbedLayerNormalization Fusion Improvement (#2553)

Embedding layer norm fusion improvements - add more checks

* Update version (#2584)

* Temporarily exclude vgg19 test from Python backend test

1. temporarily exclude the vgg19 test, which consumes too much memory and runs out of memory on the Upsquared device. The test passes when run alone; needs further investigation (#2588)
2. Update the docker file to decrease the docker image size

* Update docs for Android NNAPI EP (#2586)

* Fix lto bug for protobuf and ubuntu

* add path to build dir before test run (#2590)

* Add missing env variables for mac pipeline test (#2595)

* Fixed an issue in updating realized dims (#2597)

when we update realized dims for scan's output, the sliced axis also
needs to be inclusive, i.e. we should check with "dim >= insert_inclusive_axis",
because the offsets in the symbols are based on the Scan subgraph.
Otherwise, we would end up with a shape mismatch later.

* Java API for onnxruntime (#2215)

* Add support for opset 11 in reshape fusion (#2592)

Support opset version 11 in reshape fusion

* Rename automl python tools folder to featurizer_ops. (#2593)

* Support opset 11 subgraph of Squad model in Embed Layer Normalization (#2605)

Support the opset 11 Squad model that is exported from PyTorch nightly. The embed layer uses the Range op, which was missing in the transformer.

* symbolic shape inference: fix warnings in GPT-2 model (#2608)

And revise nuphar perf test on BERT squad

* Dump subgraph ID and fused graph ID (#2607)

* Dump subgraph ID and fused graph ID

Dump subgraph ID and fused graph ID for better debugging

* Remove local static fused_count

added a field global_fused_count_ to NupharExecutionProvider class

* EmbedLayerNormalization Fusion For Dynamic Squad Model Opset 10 (#2613)

Support subgraph of SQuAD model exported from pytorch with dynamic input axes

* Allow providers to be set for InferenceSession at construction (#2606)

* Remove unnecessary parameter in some places in GatherElements implementation (#2612)

* Remove unnecessary parameter in some places

* Update

* Update

* Make sure fenced tensor could not reuse other tensor. (#2561)

Fix random error caused by this.

* Improve Embed Layer Norm Fusion for SQuAD with static input shape  (#2621)

* fix float16 comparison in initializer (#2629)

* epsilon attribute for layernormalization fusion (#2639)

* removed unnecessary batch file and fix path (#2640)

* Add shape inference to ConvTransposeWithDynamicPads schema (#2632)

* Improve cuda expand() operator's performance. (#2624)

* Cuda pad optimize when no padding is needed. (#2625)

* Shortcut cuda Pad() when no padding is needed.
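
A sketch of the shortcut's shape (assumed, not the actual kernel code):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// If every pad amount is zero, the output equals the input, so a plain
// device-to-device copy beats launching the general Pad kernel.
bool NoPaddingNeeded(const std::vector<int64_t>& pads) {
  return std::all_of(pads.begin(), pads.end(),
                     [](int64_t p) { return p == 0; });
}
// e.g.:
// if (NoPaddingNeeded(pads))
//   cudaMemcpyAsync(output, input, bytes, cudaMemcpyDeviceToDevice, stream);
```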

* Optimize cuda scatter() on 2D compatible. (#2628)

* Optimize cuda scatter() on 2D compatible.

* Add some comments.

* fix build error for ARM (#2648)

* Improve performance of resize() in Nearest mode (#2626)

Special treatment for 2D: check for the same size as the input image.
And in the 2d kernel, template on use_extrapolation.

* Fix memory exception in Layer Norm Fusion (#2644)

* Windows CI changes(#2650)

* Revert "User/orilevari/windowsai master merge (#2674)"

This reverts commit fe261463112f0cf7cdef214c57eb7c70e816b616.

* Revert "Windowsai without fi (#2701)"

This reverts commit 285d4c85ff5c4e265f963208170304ef3461e684.

* Revert "User/orilevari/windowsai master merge (#2674)"

This reverts commit fe261463112f0cf7cdef214c57eb7c70e816b616.

* Deref unique pointer for session_state

* send shutdown event when dll is unloaded and EvaluationStop, SessionC… (#2704)

* send shutdown event when dll is unloaded, plus EvaluationStop and SessionCreationStart events.

* Add EvaluationStart Event

* add comment

* use correct type for for loop (#2755)

* ARM CI (#2759)

* Set ARM agent pool

* Set CMake generator to VS 2019 in ARM

* Use system-wide CMake instead of custom version

Our custom version is too old for VS 2019

* Use DML and build shared lib in ARM CI

* Restore nuget packages in ARM CI

* Disable DML

* Refactor ARM debug/release builds

* Use system packaged Python version

* Remove hardcoded Python path

* Downgrade Python to 3.7 for build

* Remove explicit CMake path

* Fix invalid JSON in cgmanifest.json (#2760)

* Fix cgmanifest.json generating script (#2770)

* Fix protobuf submodule name

* Workaround pygit2 bug

* Remove usage of WHOLEARCHIVE in WinML CMake and add WinMLAdapterFactory (#2726)

* Remove usage of WHOLEARCHIVE in WinMLAdapter CMake and add WinMLAdapterFactory

* PR feedback, no need for dll(export) since using def file

* PR comments

* Small comment in gen_def.py

* User/orilevari/32bit comparison warning (#2800)

* use correct type for for loop

* explicitly specify void for the parameters of OrtGetApiBase because the function is defined in C, so when the parameter list is just (), it is interpreted as having an unknown number of parameters. This was causing compiler warning C4276.
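
For illustration, the C rule behind the warning (the decoration macros on the real declaration are omitted; treat this as a sketch):

```cpp
// In C, "T f();" declares a function with an UNSPECIFIED argument list
// (old K&R style); only "T f(void)" says "takes no parameters". In C++
// the two forms are equivalent, which is why the warning only shows up
// for the C definition.
extern "C" {
struct OrtApiBase;
const OrtApiBase* OrtGetApiBase(void);  // explicit (void) silences C4276
}
```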

* Move winml_provider_factory.h to proper location (#2801)

* Scenario Test: Build Google Test and Taef Test based on preprocessor definition (#2809)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Filter CPU case for IsFloat16Supported (#2802)

* Merge fixes

* CMake cross-generator fixes (#2790)

* Fix compilation w/ non-VS CMake generators

* Fix custom WINMD target in Ninja

* Remove usage of msbuild .targets file

* Fix linking using DML in Ninja

* Automate SDK kit version choice

* Cleanup DML package install

* Fix SDK version detection

* Fix comment

* Revert unittest linkage changes

* Fix latest SDK detection

* Don't link to non-uapcore libraries

* Remove MessageBoxA reference and unused link libs

* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Refactor winml api tests

* Move additional gtest specific macro definition into googleTestMacros.h

* Fix test build break: winml_lib_api needs to be statically linked into the tests since winmlp::learningmodeldevice::iscpu() is used in devicehelpers.cpp (#2837)

* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)

* Fix warnings that cause build to fail

* Fix test warnings and delayload linking (#2843)

* OrtMemoryInfo struct changed

* mark the camera scenario test as edgecore because it uses d3d11 (#2852)

* User/orilevari/pipeline fi breaks (#2853)

* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.

* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev

* Remove internal libs from tests (#2864)

* Support custom DML in onnxruntime_providers.cmake (#2867)

* Make DML include path global (#2882)

* Make DML include path global

* Add generated cppwinrt headers to winml_lib_common

* Integrate changes to WindowsAI to make ADO Build (#2886)

* Revert "CMake cross-generator fixes (#2790)"

This reverts commit dbe7d97fa1ab155f1309bced87199527e8f35bd2.

* add additional warning suppression in onnx_proto

* ignore /wd4996 warning

* DML execution provider fixes

* Revert "Revert "CMake cross-generator fixes (#2790)""

This reverts commit 1ae7b4bcbc02edc881ad28685da98e095dfceb17.

* Update func signature of custom op function overloads

* common devicehelpers fixes

* Add pch.h for winml_lib_common

* re-add winml_lib_common_dir/inc to include path for winml_adapter

* User/orilevari/dml redist shared folder (#2890)

* move dml nuget package directory up one level to make it shared between build flavors

* Merge conflict fix

* Revert "Merge conflict fix"

This reverts commit 142fa72cf9ce4344ad717b50b7ea2b8582aadc7c.

* Revert "Merge remote-tracking branch 'origin/master' into windowsai"

This reverts commit 6e2126d46e5e5f564d65da37dd4f70c93dd81165, reversing
changes made to b3f5583dc9249834b947c8ea905f6a98060d5bd6.

* Make winml_test_common free of test macros (#2902)

* Add option to build winml_test_common without googletest specifics

* remove test macros from squeezenet

* comment change

* Make cmake functions to get scenario and api source

* PR comments about hresult

* Build errors fixed

* Fix cmake variable

* Make winml_google_test_lib build main.cpp once

* PR comments

* Don't generate files outside the build root (#2914)

* Don't generate files outside the build root

* Add onnxruntime_EXTERNAL_DEPENDENCIES to WinML

* Add DML dependency on RESTORE_PACKAGES

* User/orilevari/fix yaml merge bugs (#2918)

* Add winml test source parameter into cmake function (#2919)

* Add option to build winml_test_common without googletest specifics

* remove test macros from squeezenet

* comment change

* Make cmake functions to get scenario and api source

* PR comments about hresult

* Build errors fixed

* Fix cmake variable

* Make winml_google_test_lib build main.cpp once

* PR comments

* Add arguments to unittest cmake functions

* remove comment

* Revert "Revert "Merge remote-tracking branch 'origin/master' into windowsai""

This reverts commit ade5abe72a4234fdbc3623093c61c02c6b0bdc26.

* Fix breaks from merge with ORT master

* Brianma/linux (#2917)

* don't include windows.h in cross-plat header

* add default case for switch statement

* signed/unsigned mismatch fix

Co-authored-by: Brian Martin <42186431+martinb35@users.noreply.github.com>

* User/sheilk/winml adapter c api (#2891)

* Create winml adapter c api

* fix build

* make it build

* move adapter into onnxruntime core/session

* entry point not exported

* minor changes

* make model metadata work

* make tests pass

* implement all the model reflection apis on the adapter c abi

* update the new ort interface to create a lotus environment with a logging sink

* start adding ort env

* move all winml code into adapter folder/lib to isolate it

* ensure a single logging manager at a time

* start refactoring session

* refactor session creation interface

* add cpu and dml session option methods to adapter

* finish session init

* stub out interfaces in ort lib to perform similar mechanics of iinference session

* enable profiling, and enable schema override

* update session register graph transformers

* turn back on custom registry for custom ops

* Add sync api

* add last c api stubs

* should build... but all feature values are broken since this is mid-flight toward moving all implementation details into ivalue

* remove ep adapter header

* Implement DML execution provider functions from adapter (#2846)

* Implement DML execution provider functions from adapter

* Use functions in OnnxruntimeEngine.cpp

* make map/sequence type_infos freeable, and start implementing ivalue

* make it build again

* implement value methods

* implement remaining methods

* remove com adapter abi

* check dml session

* cache the allocator on ivalue

* check if resource is cpu/gpu when accessing its mutable data

* update tensor

* mismatched parentheses

* fix tensor base and binding obj

* it evaluates tensors! sometimes...

* minor fixes

* enable gpu evals

* wrap all existing winml adapter apis with API_IMPL to try/catch (#2854)

* update winml... tensor strings are broken, need to template tensorbase to do different things for strings

* make tensor strings work with 2 copies in/2 copies out

* Fix tensor string and allocator bug

* make maps work again... needs some fixes still

* Make it build!

* enable map inputs

* map outputs

* unbound outputs for sequences and maps

* User/xianz/merge windowsai (#2883)

* Packaging pipeline changes for VS 2019 (#2711)

* Tiny fix to codegen

* Simplify cache implementation and avoid static variables that may carry over between models

* Extend DML kernels (#2641)

* Additional DML operators

* Check unsupported attributes and inputs

* Address PR comments

* Add kernel capability function used for partitioning, and re-enable stride-based int64 support based on value range

* Fix test failures

* Build fix

* PR comments

* Update Nuphar tutorial notebook (#2721)

1. Reflect int8 GEMV improvements for multi-threading from #2696
2. Add notes on multi-threading control using OpenMP
3. Add samples of running multi-isa AOT, and show int8 GEMM differences between AVX and AVX2
4. Add rnn_benchmark example to resolve #1993

* Add schema for new Qops (#2611)

* Add schema for new Qops

* adding shape inference + qlinearaveragepool

* plus review comments

* plus review comments

* updates per review comments

* plus review comments

* [server] Add support for model_name and model_version as cli parameters (#2708)

* remove 64bit warning message from python validation. (#2727)

* MLAS: ARM64 build fix (#2734)

fix bad usage of vreinterpret to cast vector element types

* Fix broken python docs links (#2740)

* Fix build on Mac OS (#2731)

mac os ld doesn't support --whole-archive; the correct option is -all_load

* fix ngraph wheel (#2737)

* fix ngraph wheel

1.1.0 onnxruntime_ngraph wheel doesn't work

* remove libdnnl.so in nGraph Libs

* make it easy to compare

* Split onnxruntime server into a separate folder (#2744)

* Fix build for Python 3.8 (#2747)

* Fix build for Python 3.8

* Update protobuf to 3.11.2 (#1928)

Update protobuf to 3.11.2 (#1928)

* Change default optimization level to All (from Basic) (#2745)

* change default optimization level to All (from Basic)

* fix test

* fix c# test
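
For context, a hedged sketch of pinning the optimization level explicitly instead of relying on the default (which this change flips and a later commit below reverts); this assumes the current C++ API shape, and the model path is a placeholder:

```cpp
#include <onnxruntime_cxx_api.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "opt-level-demo");
  Ort::SessionOptions options;
  // Request the full optimization pipeline regardless of the built-in default.
  options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
  // A session created with these options then runs the full optimizer:
  // Ort::Session session(env, ORT_TSTR("model.onnx"), options);
  return 0;
}
```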

* Update numpy to 1.18 (#2758)

* Update numpy to 1.18

* Pipeline changes for python 3.8 (#2753)

1. Pipeline changes for python 3.8
2. Fix a regression in setup.py which was just introduced in the previous commit.

Please notice, we still haven't made python 3.8 + Windows + CUDA work.

* Add basic stacktrace output for posix debug builds. (#2749)

* [NupharEP] fix a race condition when multiple sessions run different models concurrently (#2772)

* Revert "Change default optimization level to All (from Basic) (#2745)"

This reverts commit 56bb503c2f26474b6613bcb2a198691a11dcef38.

* Fix typo in error message (#2736)

* Rename MKL-DNN to DNNL to fix broken link (#2730)

* Fix nightly build version number issue

* Pass BUILD_BUILDNUMBER to linux docker

* Disable featurizers in python packages

* Import more featurizers (#2781)

Make kernels non-template. Add input constraint for learnt data.
  Add min_max_scalar_transformer, robust_scalar_transformer,
  inputation_marker_transfomer, label_encoder_transformer,
 missing_dummies_transformer along with tests.
 Advance Featurizers library commit.

* Implement a more stable softmax (#2715)

* Implement a more stable SoftMax
e^x is represented as infinity if x is large enough, like 100.f. Infinity divided by infinity is a NaN. Thus, softmax gets a NaN if one or more items are large enough.
A math transform as below is leveraged to get a stable softmax:
e^xi / (e^x1 + ... + e^xn) = e^(xi - max) / (e^(x1 - max) + ... + e^(xn - max))

And for convenience, force max to 0.f if all xi are negative
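
A minimal standalone sketch of the transform (illustrative, not the kernel itself; note that a later commit below, #2786, switches the max's initial value from 0.f to the lowest float):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

void StableSoftmax(std::vector<float>& x) {
  // Start the running max at 0.f: if every x[i] is negative, e^x is already
  // in (0, 1] and cannot overflow, so subtracting 0 is fine.
  float max = 0.0f;
  for (float v : x) max = std::max(max, v);

  float sum = 0.0f;
  for (float& v : x) {
    v = std::exp(v - max);  // exponent is <= 0, so exp() can't produce inf
    sum += v;
  }
  for (float& v : x) v /= sum;  // no more inf/inf -> NaN
}
```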

* Contributing: Fix a typo (#2784)

* ACL EP GEMM improvements (#2780)

When it is possible, we use a fully connected layer instead of the gemm implementation.
This will let the library use the best implementation based on the input data.

* ACL EP convolution improvements (#2774)

Added the optimized implementation of depthwise convolution for both ACL v19.02 and ACL 19.05.
Also, the pointwise convolution seems to be more efficient in the CPU implementation, so we opted for that instead.

* Add script for release Nuget validation (#2719)

* Initial commit

* Nits

* Disable a test temporarily

* Change working directory

* Test

* Add download python step

* Test update

* More changes

* Fix space issue

* Fix

* Verify nuget signing

* Fix

* Spaces

* PR feedback

* Nit

* Fix

* Fix

* Remove temporary changes

* add uint8 support to where op (#2792)

* Improve bert optimization script: (#2712)

(1) Move input int64=>int32 conversion to embed layer fusion.
(2) Output epsilon attribute for LayerNormalization fusion.

* add session creation time cost. (#2798)

* ML.NET team needs featurizers within a package (#2789)

Add auto ml featurizers to Windows and MacOS as well as to GPU packaging pipelines.

* Initialize max of softmax with the lowest float (#2786)

* MLAS: update SGEMM threading parameters (#2808)

* add interface to copy batch tensors. (#2807)

* add interface to copy batch tensors.

* onnxruntime

* speed up Windows TRT CI (#2811)

* don't run cuda tests if building with tensorrt

* remove unnecessary build options for win trt ci

* refactor win gpu tensorrt ci yml

* --numpy_version=1.17

* update

* update

* azcopy and cuda path

* Update test data (#2356)

* Add timeseries imputer transformer featurizer kernel (#2813)

 Make kernels non-template. Add input constraint for learnt data.
  Fixup tests.
  Add two more featurizers along with tests. Tests fail.
  min_max_scalar_transformer
  robust_scalar_transformer
  Fix tests serialized stream by prepending version bytes.
  Add inputation_marker_transfomer and the test.
  Fix up float/double type designations.
 Added label_encoder_transformer along with a test.
  string_throw case is broken at the moment.
  Fix labelencodertransfomer_test.cc string_throw case
  Rename maxabsscalertransformer_test.cc
  Add MissingDummiesTransformer along with the test.
  Update manifest.
  Add TimeSeriesImputerTransformer definition, implementation and tests

* Fix memory leak in TRT (#2815)

* fix memory leak issue

* revert EP_FAIL on enueueV2

* Add manifest missing comma

* Run static code analyzer on most of our code (#2817)

* Scenario Test: Build Google Test and Taef Test based on preprocessor definition (#2809)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* update quantization doc (#2783)

* update documentation for quantization script

* plus some spell corrections

* Filter CPU case for IsFloat16Supported (#2802)

* update default optimization level + fix gemm_activation fusion (#2791)

* update default optimization level + fix gemm_activation fusion

* fix typo

* add unit test and incorporate review comments

* fix test comment

* Fix dnnl wheel package name (#2823)

* Append '-dnnl' to whl package name when --use_dnnl

* Update build.py

* Update Ubuntu & TensorRT version in README (#2820)

Dockerfile.tensorrt is using nvcr.io/nvidia/tensorrt:19.09-py3 as the base image; update the Ubuntu and TensorRT versions according to
https://docs.nvidia.com/deeplearning/sdk/tensorrt-container-release-notes/rel_19-09.html#rel_19-09

* Merge fixes

* Add OneHotEncoder and HashOneHotEncoder kernels. (#2830)

 Add defs and implementation for OneHotEncoders, adjust date_time_transformer kernel and test.
  Add OneHotEncoder kernel test.
  Add HashOneHotVectorizerTransformer unit test.
  This does not link due to multiple definitions of functions
  that are included into header from a CPP file.

* Upgrade gtest to the latest version (#2827)

WinML would like to update the googletest submodule. They want some newer features (namely GTEST_SKIP, to skip tests programmatically and entire fixtures easily), which requires updating the submodule version.

However, the new version of the code hits a bug in gcc. The bug is already fixed in the latest gcc, but we're using gcc 4.8.x, which won't get patched, so we compromise and change our code a little bit to make it work.

The gcc bug: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=51213
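
An example of the GTEST_SKIP usage the upgrade enables (the test and helper names are made up):

```cpp
#include <gtest/gtest.h>

// Stand-in probe; a real suite would query the device.
static bool GpuAvailable() { return false; }

TEST(ScenarioTests, RequiresGpu) {
  if (!GpuAvailable())
    GTEST_SKIP() << "no GPU on this agent";  // reported as skipped, not failed
  // ... GPU-dependent assertions would go here ...
}
```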

* Add support for int64_t for topk CPU. Fixes github issue #2806. (#2833)

* Ignore allocator type in ExecutionProviders allocator map. Make default initialization of OrtMemoryInfo more clearly invalid. (#2768)

* Remove allocator type from the key comparison in ExecutionProviders.
Remove usage of DummyArena as it's no longer necessary.

* Fix x86 tests where arena allocator is disabled.
Make initialization of OrtMemoryInfo clearer by adding Invalid enum value.

* Make OrtValueNameIdxMap::MaxIdx more intuitive.

* Convert ExternalProject Featurizers into git submodule (#2834)

Add git submodule for Featurizer library.
  Update cmake to build for git submodule.

* add domain check for nodes + update documentation (#2831)

* Fix cgmanifest.json generating script (#2770)

* Fix protobuf submodule name

* Workaround pygit2 bug

* User/orilevari/32bit comparison warning (#2800)

* use correct type for for loop

* explicitly specify void for the parameters of OrtGetApiBase because the function is defined in C, so when the parameter list is just (), it is interpreted as having an unknown number of parameters. This was causing compiler warning C4276.

* CMake cross-generator fixes (#2790)

* Fix compilation w/ non-VS CMake generators

* Fix custom WINMD target in Ninja

* Remove usage of msbuild .targets file

* Fix linking using DML in Ninja

* Automate SDK kit version choice

* Cleanup DML package install

* Fix SDK version detection

* Fix comment

* Revert unittest linkage changes

* Fix latest SDK detection

* Don't link to non-uapcore libraries

* Remove MessageBoxA reference and unused link libs

* Fix Linux CUDA nuget packaging pipeline break

* Refactor WinMLAPI Tests to build both google and taef test based on preprocessor definition (#2829)

* Add winml macro wrappers on top of google test macros

* change test methods to disabled

* Add custom winml macros for both taef and google tests

* PR comments

* Refactor winml api tests

* Move additional gtest specific macro definition into googleTestMacros.h

* Fix test build break: winml_lib_api needs to be statically linked into the tests since winmlp::learningmodeldevice::iscpu() is used in devicehelpers.cpp (#2837)

* Enforce WINML_TEST_CLASS_BEGIN_* matches w/ a WINML_TEST_CLASS_END (#2841)

* update optimization doc for BERT related fusions  (#2819)

* Add bert related transformers to doc
* Add execution provider and comment for bert optimizations
* Add comment about accuracy impact of approximation

* Fix warnings that cause build to fail

* MLAS: enable threading for quantized GEMMs (#2844)

* Fix test warnings and delayload linking (#2843)

* OrtMemoryInfo struct changed

* mark the camera scenario test as edgecore because it uses d3d11 (#2852)

* User/orilevari/pipeline fi breaks (#2853)

* remove conflicting artifact names. Decided to stop using drop-nuget-cuda since this may have implications on other dependent pipelines.

* change job name in gpu.yml back to Windows_CI_GPU_CUDA_Dev

* Remove internal libs from tests (#2864)

* Support custom DML in onnxruntime_providers.cmake (#2867)

* remove old winmladapter cpp

Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>

* move sequence implementation into ort lib... still commented out... need to turn back on...

* begin sequence implementation

* make maps and sequences work

* fix broken tests

* remove dead code

* misc cleanup

* CR feedback

* User/xianz/winml adapter c api (#2869)

* wrap all existing winml adapter apis with API_IMPL to try/catch

* Return HR or Throw for WinML adapter APIs if failed

* undo macro wrapper for two places

* Wrap error macros around ort apis, too.

* address CR feedback #2

* add more api throw/return macros

* Revert changes no longer needed

* revert changes to cxx api

* format winml lib.ort and winml adapter

* remove static phoenix singleton

Co-authored-by: Ryan Lai <ryalai96@gmail.com>
Co-authored-by: Xiang Zhang <xianz@microsoft.com>
Co-authored-by: Changming Sun <chasun@microsoft.com>
Co-authored-by: KeDengMS <kedeng@microsoft.com>
Co-authored-by: Jeff <38966965+jeffbloo@users.noreply.github.com>
Co-authored-by: Ashwini Khade <askhade@microsoft.com>
Co-authored-by: Andrey <andrey.lompart@gmail.com>
Co-authored-by: George Wu <jywu@microsoft.com>
Co-authored-by: Tracy Sharpe <42477615+tracysh@users.noreply.github.com>
Co-authored-by: Faith Xu <txsafx@gmail.com>
Co-authored-by: zhanyi-ms <zhanyi@microsoft.com>
Co-authored-by: Changyoung Koh <gkcy1019@gmail.com>
Co-authored-by: Scott McKay <Scott.McKay@microsoft.com>
Co-authored-by: Takeshi Watanabe <take-cheeze@users.noreply.github.com>
Co-authored-by: Dmitri Smirnov <yuslepukhin@users.noreply.github.com>
Co-authored-by: Yufeng Li <liyufeng1987@gmail.com>
Co-authored-by: Maher Jendoubi <maher.jendoubi@gmail.com>
Co-authored-by: Andrews548 <32704142+Andrews548@users.noreply.github.com>
Co-authored-by: Hariharan Seshadri <shariharan91@gmail.com>
Co-authored-by: Nathan <7902510+ybrnathan@users.noreply.github.com>
Co-authored-by: Tianlei Wu <tlwu@microsoft.com>
Co-authored-by: Ke Zhang <kezhan@microsoft.com>
Co-authored-by: stevenlix <38092805+stevenlix@users.noreply.github.com>
Co-authored-by: Ori Levari <ori.levari@microsoft.com>
Co-authored-by: Yingge WAN <y-wan@users.noreply.github.com>
Co-authored-by: Qing <cwq1913@gmail.com>
Co-authored-by: Pranav Sharma <emailpranav@gmail.com>
Co-authored-by: Tiago Koji Castro Shibata <tiago.shibata@gmail.com>

* missing use_dml check in winml_adapter_session (#2930)

* --use_dnnl flag was mangled in merge (#2931)

* use dml macro not wrapping custom registry code (#2934)

* Disable LNK4199 winml_dll to enable cuda builds (#2936)

* Disable LNK4199 in winml_dll

* linkler->linker

* LearningModelSessionAPITestGpu.CreateSessionWithCastToFloat16InModel should return DXGI_ERROR_UNSUPPORTED when FP16 not supported (#2937)

* Disable LNK4199 in winml_dll

* linkler->linker

* Need to return DXGI_ERROR_UNSUPPORTED when Model does not support fp16

* Publish build symbols (#2939)

* Publish build symbols

* Don't upload PDBs for .exe files

* Make x86 build (#2943)

* fix last remaining size_t/int64_t warnings->errors (#2948)

* TensorString, Sequences and Maps use the first allocator, but should use the cpu default allocator. (#2952)

* fix tensor string allocator

* clean up default allocator usage for strings in winml lib/api.ort

Co-authored-by: Ryan Lai <ryalai96@gmail.com>

* Handle tensor shape of ze…