
Bug: Converting from ONNX to ORT fails when setting Device=Direct ML [C++] [ONNX2ORT converter] [Direct ML] #8440

Closed
gineshidalgo99 opened this issue Jul 20, 2021 · 28 comments
Labels
ep:DML issues related to the DirectML execution provider

Comments

@gineshidalgo99
Contributor

gineshidalgo99 commented Jul 20, 2021

Describe the bug
ONNX to ORT conversion works when Device=CPU, but not when Device=DirectML (exact same code)

Low level details:

  • I tried with 7 networks and it happens in all of them (including MNIST and ResNet)
  • I tried both v1.7.1 (the one we are using) and the very latest GitHub code; both are affected.
  • ORT doesn't crash during conversion but rather later, when loading/using the new ORT models. Checking the ORT files, the CPU one is similar in size to the ONNX file (~MBs), while the GPU one is only a few KBs, so this is clearly a bug in the converter. The crash message says something about operators not being implemented, but the ORT file is simply too small (same error as in Modifying ORT to load 3rd Party Model #7931)

Also, all models run (and we checked they match the original PyTorch model accuracies) if loaded from ONNX and set to DML:
What works:

  • ONNX file loaded, set to CPU and running inference on it
  • ONNX file loaded, set to GPU and running inference on it
  • ONNX file loaded, set to CPU, converted to ORT, loaded as ORT file, set to CPU, and running inference on it

What does not work:

  • ONNX file loaded, set to GPU, converted to ORT, loaded as ORT file, set to GPU, and running inference on it
  • ONNX file loaded, set to CPU, converted to ORT, loaded as ORT file, set to GPU, and running inference on it --> This one does not crash, but it is clearly running on CPU because its runtime timings match the CPU version (not the GPU version). So it seems that whatever execution provider was configured when the ORT file was generated is what gets used at runtime, regardless of me trying to set it to another kind of device

Urgency
Urgent --> It blocks ORT file deployment on DirectML networks. We have an internal deadline in August to release this project

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • ONNX Runtime installed from (source or binary): Tried both, but we care mostly about the source one
  • ONNX Runtime version: v1.7.1 and also tested in latest GitHub code
  • Python version: None, using C++
  • Visual Studio version (if applicable): VS 2019 Professional
  • GCC/Compiler version (if compiling from source): VS 2019 Professional
  • CUDA/cuDNN version: None (DirectML)
  • GPU model and memory: Nvidia 3080

To Reproduce

  • Describe steps/code to reproduce the behavior.
// Conversion step
{
    // Set up ORT and create an environment
    Ort::InitApi();
    const char* const ModelRelativeFilePathCharPtr = TCHAR_TO_ANSI(*InModelRelativeFilePath);
    Environment = MakeUnique<Ort::Env>(ORT_LOGGING_LEVEL_WARNING, ModelRelativeFilePathCharPtr);
    Allocator = MakeUnique<Ort::AllocatorWithDefaultOptions>();
    SessionOptions = MakeUnique<Ort::SessionOptions>();
    if (Device == GPU)
    {
        SessionOptions->SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
        OrtSessionOptionsAppendExecutionProvider_DML(*SessionOptions, 0);
    }
    else
    {
        SessionOptions->SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
    }

    // Generate ORT file
    SessionOptions->SetOptimizedModelFilePath(*OutputORTOptimizedModelPath);
    Session = MakeUnique<Ort::Session>(*Environment, *FullModelFilePath, *SessionOptions);

    // Result --> ORT file on disk on OutputORTOptimizedModelPath, which is good if Device = CPU, but smaller than it should be if Device = GPU
}

// Running step
{
    // Same setting code

    // Load/run ORT file
    // Note the lack of "SetOptimizedModelFilePath()"
    Session = MakeUnique<Ort::Session>(*Environment, *FullModelFilePath, *SessionOptions);

    // Result --> ORT file working fine as long as it's on CPU, but crashing when it's DirectML, giving the error shown in https://github.com/microsoft/onnxruntime/discussions/7931
}

Expected behavior
I expect both ORT files to have approximately the same size, and the DirectML one not to crash when used later

@faxu
Contributor

faxu commented Jul 20, 2021

Did you check if the original model (non-ORT format) runs on DML?

@gineshidalgo99
Contributor Author

gineshidalgo99 commented Jul 20, 2021

Yes, I forgot to mention that: all models run (and we verified they match the original PyTorch model accuracies) when loaded from ONNX and set to DML

What works:

  • ONNX file loaded, set to CPU and running inference on it
  • ONNX file loaded, set to GPU and running inference on it
  • ONNX file loaded, set to CPU, converted to ORT, loaded as ORT file, set to CPU, and running inference on it

What does not work:

  • ONNX file loaded, set to GPU, converted to ORT, loaded as ORT file, set to GPU, and running inference on it

We also tried this:

  • ONNX file loaded, set to CPU, converted to ORT, loaded as ORT file, set to GPU, and running inference on it --> This one does not crash, but it is clearly running on CPU because its runtime timings match the CPU version (not the GPU version). So it seems that whatever execution provider was configured when the ORT file was generated is what gets used at runtime, regardless of me trying to set it to another kind of device

@pranavsharma
Contributor

Did you try saving the optimized ONNX model as foo.onnx (where foo is the name of your model) without making a call to session_options.AddConfigEntry("session.save_model_format", "ORT"); and then running the saved model with DML?
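
For illustration, a minimal sketch of the test being suggested, reusing the same ONNX Runtime C++ API calls that appear in the report above; the file names, the helper function name, and the DML device id 0 are placeholders, and the DML provider factory header is assumed to come from the DirectML-enabled package:

#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

void SaveOptimizedOnnxWithDml()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "foo");
    Ort::SessionOptions so;
    so.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);
    OrtSessionOptionsAppendExecutionProvider_DML(so, 0);  // DML device id 0

    // No so.AddConfigEntry("session.save_model_format", "ORT"), so the optimized model is
    // written back out in ONNX format rather than ORT format.
    so.SetOptimizedModelFilePath(L"foo_optimized.onnx");  // wide-char path on Windows

    // Creating the session runs the optimizations and writes foo_optimized.onnx as a side effect.
    Ort::Session convert_session(env, L"foo.onnx", so);

    // foo_optimized.onnx can then be loaded in a second session with the same
    // DML-enabled options to check whether it still runs on DML.
}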

@guoyu-wang
Contributor

Please set the logging level to ORT_LOGGING_LEVEL_VERBOSE in
Environment = MakeUnique<Ort::Env>(ORT_LOGGING_LEVEL_WARNING, ModelRelativeFilePathCharPtr); and attach the logs.

@guoyu-wang
Contributor

guoyu-wang commented Jul 20, 2021

From the ORT files uploaded, the one converted with DML has no initializers in the graph (the model has no weights at all). It seems the initializers of the graph are cleared out by the DML EP before the save to ORT format happens, which is probably the main reason why the execution fails.

@skottmckay
Contributor

What's the reason for attempting to use the ORT file format in the GPU scenarios?

ORT format is targeting mobile/edge scenarios where binary size is critical, so the current expected usage is with CPU kernels and optionally things like the NNAPI or CoreML EP to utilize the NPU on a device. CUDA kernels are massive, so any binary size saving from using the ORT format is meaningless. Not sure how large the DML kernels are, although I know there's no infrastructure set up to exclude them in a minimal build, so a build with DML enabled would include all the kernels and not just the required ones. Based on that, there doesn't seem to be a binary size benefit, so it's not clear why you'd want/need to use an ORT format model.

ONNX file loaded, set to CPU, converted to ORT, loaded as ORT file, set to GPU, and running inference on it --> This one does not crash, but it is clearly running on CPU because its runtime timings match the CPU version (not the GPU version). So it seems that whatever execution provider was configured when the ORT file was generated is what gets used at runtime, regardless of me trying to set it to another kind of device

ORT format doesn't support changing the static kernel assigned to a node at runtime. If you generated the ORT format model with CPU enabled, it will only use CPU at runtime. It does allow dynamic kernels (e.g. NNAPI and CoreML) to take nodes at runtime (the node is executed as a CoreML or NNAPI model, so the statically assigned kernel is ignored), but that doesn't seem to be applicable to your usage.

@gineshidalgo99
Contributor Author

PS: I will reply tomorrow with the results of the tests suggested by @pranavsharma and @gwang-msft (foo.onnx and ORT_LOGGING_LEVEL_VERBOSE)

Answering @skottmckay:
It is critical for us to be able to use a single and unified ORT API:

  • Using ORT files is important for us because we need to support Android, iOS, and CPU.
  • Supporting DML is also crucial because we need to support Windows/XBox machines.

Another hard requirement we have is that we cannot let the file sit on the hard disk; we have to feed it to ORT at runtime. And ORT files/FlatBuffers are way simpler to serialize than protobuf/ONNX ones.

  • I.e., it's extremely easy to send a std::vector to the ORT API and hack it to read that instead of a foo.ort file (we already have this working).
  • ONNX/Protobuf: We tried doing this with the ONNX file, but feeding a buffer of Protobuf data to the ORT API is not easy at all, and the ORT API seems to open the ONNX file in many places; it does not just read it as a vector the way it does with the ORT file.

Given these 2 reasons, having ORT files working with DML is very important for us in the short term.

@skottmckay
Contributor

skottmckay commented Jul 21, 2021

ONNX format files are supported on all platforms. It's just that the binary size of the ORT library will be bigger vs. a minimal build that only supports ORT format models (by a few MB). For that you get a lot more flexibility though, such as the ability to use CPU or GPU depending on what's available at runtime.

Can you provide more details on how you were trying to feed the ONNX format file at runtime? InferenceSession has an API where raw bytes can be provided, which can be used for both ONNX and ORT format models. Given that, I'm not quite following how 'the ORT API seems to open the onnx file in many places' given it's only seeing bytes and not a filename if that API is used.

common::Status InferenceSession::Load(const void* model_data, int model_data_len) {

I did a quick test using the python API and it seemed to work fine with the ONNX format model being provided as bytes.

import onnxruntime as ort
import numpy as np

model_path = r'my_test_model.onnx'

so = ort.SessionOptions()
s = ort.InferenceSession(model_path, so)

# random input matching what the model requires
input_data = np.zeros((1, 5, 512, 867), dtype=np.float32)
inputs = { 'input': input_data }

# run with filename
o1 = s.run(None, inputs)

with open(model_path, 'rb') as infile:
    bytes = infile.read()
    # run with bytes
    s2 = ort.InferenceSession(bytes, so)
    o2 = s2.run(None, inputs)

    # this model produces a single output so compare the run via filename with the run with bytes
    print(np.array_equal(o1[0],o2[0]))

@pranavsharma
Contributor

@gineshidalgo99 Our public C API already provides a unified way to create sessions by passing the bytes associated with both ORT and ONNX models. Take a look at this function. This way you can use the ORT format models on iOS and Android, and ONNX format on desktop/server.
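
A minimal sketch of creating a session from an in-memory buffer via the C API; it assumes the function being referred to above is OrtApi::CreateSessionFromArray, and the helper function name is hypothetical:

#include <onnxruntime_c_api.h>
#include <vector>

// Returns nullptr on failure; `buffer` can hold either ONNX or ORT format bytes.
OrtSession* CreateSessionFromBytes(const OrtEnv* env, const OrtSessionOptions* options,
                                   const std::vector<char>& buffer)
{
    const OrtApi* api = OrtGetApiBase()->GetApi(ORT_API_VERSION);
    OrtSession* session = nullptr;
    OrtStatus* status = api->CreateSessionFromArray(env, buffer.data(), buffer.size(),
                                                    options, &session);
    if (status != nullptr)
    {
        // In real code, log api->GetErrorMessage(status) before releasing the status.
        api->ReleaseStatus(status);
        return nullptr;
    }
    return session;
}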

@gineshidalgo99
Contributor Author

gineshidalgo99 commented Jul 21, 2021

We are happy to try this solution; it'd solve the problem for us in the short term (getting Windows fully working)!

But we are working in C++, and I could not find any C++ example of this InferenceSession::Load(const void* model_data, int model_data_len). How can it be used with an ONNX file in C++? Do I read it as a vector? As std::string? Or what exactly does that void* take? Any minimal C++ code snippet showing how to turn the ONNX file into that void* would help a lot here!

(Less important in the short term) Also, about why we cared about ORT files and DML: we need a solution that also works for our custom GPU EP (for platforms like Nintendo Switch and PlayStation 5), where we also need to minimize build size on e.g. PS5. Given the ORT file issue with DML, we are concerned this might also occur if we create our own GPU EP for PS5/Nintendo; is this the case?

@skottmckay
Contributor

Example of reading bytes from file:

const char* model_path = "testdata/matmul_1.onnx";
std::vector<char> buffer;
{
  std::ifstream file(model_path, std::ios::binary | std::ios::ate);
  if (!file)
    ORT_THROW("Error reading model");
  buffer.resize(file.tellg());
  file.seekg(0, std::ios::beg);
  if (!file.read(buffer.data(), buffer.size()))
    ORT_THROW("Error reading model");
}

The bytes are just passed directly when creating the inference session.

Ort::Session session(*ort_env.get(), buffer.data(), buffer.size(), so);

We'll look into the DML issue as it should be possible to use that with an ORT format model.

@pranavsharma
Contributor

One example in our repo is here.

@guoyu-wang
Contributor

Or you can look at this past issue, #6475 (comment)

@yuslepukhin yuslepukhin added the ep:DML issues related to the DirectML execution provider label Jul 21, 2021
@gineshidalgo99
Contributor Author

Thanks to those last answers we were able to feed the ONNX buffer into ORT directly, which is a working workaround for us!

We will keep an eye on this thread to know when the DML-ORT file issue is solved, as we'd need to switch to it once it's working, but we are no longer blocked.

Thanks for the quick answers and the great work!

@skottmckay
Contributor

Regarding the DML support, the DML EP has two different ways of handling parts of the graph. One is with statically registered kernels, and one is with dynamically created kernels. The static ones should work out-of-the-box with the ORT format. The dynamically registered ones however are making some changes to the graph earlier than expected, so parts of the graph aren't available to be saved in the ORT format model. As that's done somewhat unofficially (there's a const_cast to get access to initializers) we'd need to look into restructuring that to make sure that when we're creating the ORT format model that doesn't happen.

@skottmckay
Contributor

POC for adding support for DML when using an ORT format model: https://github.com/microsoft/onnxruntime/compare/skottmckay/ORT_model_support_with_DML_EP

Technically we could create the ORT format model with just basic optimizations and DML disabled to not require the changes in the DML graph partitioning. At runtime, if DML was enabled it could still execute the same nodes.
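
A rough sketch of that two-step flow with the C++ API, under the assumption (per the POC above) that ORT format models become loadable with the DML EP; file names, the helper function name, and the device id are placeholders:

#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

void ConvertThenRunWithDml()
{
    Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ort_dml");

    // Step 1 (offline): create the ORT format model with only basic optimizations and no DML EP,
    // so the DML graph partitioning changes are never involved.
    Ort::SessionOptions convert_options;
    convert_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_BASIC);
    convert_options.AddConfigEntry("session.save_model_format", "ORT");
    convert_options.SetOptimizedModelFilePath(L"model.ort");
    Ort::Session convert_session(env, L"model.onnx", convert_options);  // writes model.ort

    // Step 2 (runtime): load the ORT format model with DML enabled; DML could still take the
    // same nodes at runtime.
    Ort::SessionOptions run_options;
    OrtSessionOptionsAppendExecutionProvider_DML(run_options, 0);
    Ort::Session run_session(env, L"model.ort", run_options);
}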

@diablodale
Contributor

I think I have the same or a highly related issue.

  1. onnx runtime 1.12 with DML ep
  2. squeezenet1.0-7.onnx from Microsoft git repo; filesize = 4,952,222 bytes
  3. SetOptimizedModelFilePath(thepath)
  4. session optimizes model and saves it to thepath with a file size = 3,756 bytes
  5. inference runs correctly
  6. shutdown

then

  1. onnx runtime 1.12 with DML ep
  2. the optimized squeezenet1.0-7.onnx from above step 4
  3. session fails with Load model from C:\**redacted**\squeezenet1.0-7.onnx failed:D:\a\_work\1\s\onnxruntime\core\graph\graph.cc:1203 onnxruntime::Graph::Graph This is an invalid model. Tensor does not have type information.

If not the same issue, then please tell me and I'll open a new issue

@gedoensmax
Contributor

gedoensmax commented Sep 9, 2022

@diablodale I am in the same boat on this one. I am getting the same error on multiple models, and the resulting ONNX files are not viewable in Netron.
I also tried setting ORT_DISABLE_ALL optimizations in case ops are fused for DML, but the model is still broken.

@skottmckay
Contributor

The DML EP makes some changes to the model during partitioning that are not really expected by ORT. Essentially it does a const_cast and steals initializers for memory usage reasons, but that means ORT doesn't have the initializers to write to the optimized file. @fdwr would your PR (still open I note) help with that?

@diablodale @gedoensmax can you elaborate on your use case where you want/need DML to be enabled when creating an optimized model vs. doing that at runtime?

@gedoensmax
Contributor

I have been looking into session creation time in ORT. For some models it is quite drastically decreased if the shape is known for each tensor. With a fixed-size input model and simplification, these shapes are usually saved, but sometimes only up to some stage within the model. If I understand ORT correctly, it runs the model to "really" know all shapes if the input has a fixed size.

I am aware that these models might get complete shape inference with some graph-surgeon magic.
Nonetheless, some applications either have fixed-size engines that are used on demand but hit this problem (it would be great to cache this to disk for later use), or use a dynamic-size model where, once one size is used, it is used multiple times, so you might want to save this fixed-shape ONNX file after first use. Something like TensorRT engine caching, but for DML. Or would the better way be to save to ORT format?

@skottmckay
Contributor

@gedoensmax If you have a model with dynamic dimensions and want to make them fixed, you could use this tool: https://onnxruntime.ai/docs/reference/mobile/make-dynamic-shape-fixed.html

I don't quite understand how model load time would be affected by having fixed shapes. If anything, I would expect more optimizations to be possible when shapes are fixed.

I would suggest running the 'basic' level optimizations on the model with just the CPU EP enabled to do those optimizations ahead of time. They are not specific to any EP, only use official ONNX operators, and cover things like constant folding and common subexpression elimination.

Beyond the 'basic' level you get into EP specific optimizations which may involve compiling nodes or fusing nodes that will use a custom operator. Currently there's no general purpose way to save a compiled node like TensorRT engine caching does. An inference session is intended to be re-used though, so this cost during loading is not per-inference.

@fdwr
Contributor

fdwr commented Sep 12, 2022

@skottmckay 🤔 I should abandon that PR, as @sumitsays is working on a more complete solution after discussing with Cheng Tang about the EP interface refactor. Currently the DML EP fuses partitions of DML nodes into a single DML_GRAPH node, which is an IDMLOperator that contains all the operators for that partition, but if you attempt to reload the .ort graph containing a "DmlFusedGraph" node, ORT won't know how to map that to any operator because context is lost (there is no such ONNX operator with that name, and the internal subgraph only existed in memory).

However, beware that even after Sumit's changes, it will generally not be robust to optimize the graph with one GPU and run the same graph on a different GPU, as differences between GPUs (e.g. which data types are supported) could actually make a difference in the optimized graph. Replaying on the same machine, or on a specific device (e.g. a gaming console), would be more robust.

@diablodale
Contributor

@diablodale @gedoensmax can you elaborate on your use case where you want/need DML to be enabled when creating an optimized model vs. doing that at runtime?

I create a DLL plugin for the Cycling74 Max runtime patching system. My customers are educators, researchers, artists, musicians, etc. I provide one onnx model for a specific use case plus the ability to run any onnx model. My DLL transforms in/outs between native Max data. My plugin allows running the model on the cpu, directml, cuda, or tensorRT providers with a single setting change. I hide all the technical complexities so my customers can focus on their art/research/education.

The Max environment is always running, it is a graphical hack/patch environment where nodes are connected by patchcords. Patchcords and nodes are reshaped/connected hundreds of times a day as customers experiment and try ideas. This realtime iteration necessitates caching and reuse. The time burden of running the onnx optimization process every time they connect a patchcord or click "go" hampers their creativity and kills their "flow".

I know when hardware, models, or settings change...therefore I can cache models after they go through the optimization process. I already do this successfully with the TensorRT provider. A similar ability with DirectML is desired, and I attempted it with SetOptimizedModelFilePath() but ran into the same issue as this OP: the saved DirectML model is unusable.

@skottmckay
Contributor

Unfortunately ORT doesn't have a general way to save a compiled node. The TensorRT EP is doing that via TensorRT's ability to save, but AFAIK that is the only place where it's possible. For CPU and CUDA you could save the fully optimized model, as neither of those compiles nodes. The saved model would contain internal operators that are specific to the CPU/CUDA EPs, but that should be fine for local caching.

@fdwr
Contributor

fdwr commented Dec 9, 2022

@diablodale / @gedoensmax:

  • This pending change allows exporting/reimporting the optimized model (recently enabled after Sumit's refactoring): [DML EP] Disable DML Graph Fusion for lower graph optimization level OR setOptimizedFilePath true #13913.
  • It will be in ORT 1.14.
  • Note the caveat remains that the same .ort file cannot reliably be replayed on different execution providers, and that the same file may not be replayable on the same execution provider across different GPUs, due to different graph partitioning assignments made based on GPU data type support.

@diablodale
Contributor

diablodale commented Dec 9, 2022

Got it, I already have code in place to invalidate a persisted optimized model if any config changes.

A question: in #13913 I saw the comment "This transformer applies DML-specific fusions that go beyond what ORT offers by default." The following is some guessing...
When we persist with setOptFilePath=true, it will not do the fusing of partitions of DML nodes into a single DML_GRAPH.
It will instead persist a slightly less optimized model lacking that fuse.
When this persisted model is loaded, will Ort do that final fuse optimization?
Or, is this the tradeoff to have a faster load?

sumitsays added a commit that referenced this issue Dec 12, 2022
…OR setOptimizedFilePath true (#13913)

### Description
DML EP won't fuse the ONNX Graph if ORT Graph optimization level is <= 1
or `SessionOption::SetOptimizedFilePath` is passed.

This is the successor of
#11346.

### Motivation and Context
- **Why is this change required? What problem does it solve?**  
Requested by a few users (issues below) and also helps in debugging.
- **If it fixes an open issue, please link to the issue here:**
  - #13535
  - #8440
baijumeswani pushed a commit that referenced this issue Dec 13, 2022
natke added a commit to natke/onnxruntime that referenced this issue Dec 14, 2022
henrywu2019 pushed a commit to henrywu2019/onnxruntime that referenced this issue Dec 26, 2022
@fdwr
Contributor

fdwr commented Jan 11, 2023

When we persist with setOptFilePath=true, it will not do the fusing of partitions of DML nodes into a single DML_GRAPH.

@diablodale Correct, nodes will remain distinct operators (or fused operators).

It will instead persist a slightly less optimized model lacking that fuse.

Yes, it will have operator fusions (e.g. Conv + Relu -> ConvRelu), but not whole-graph-fusion.

When this persisted model is loaded, will Ort do that final fuse optimization?

Yes, that final whole-graph-fusion will be done upon reload.

Or, is this the tradeoff to have a faster load?

That final fusion happens in either case, loading the original model or loading the pre-operator-fused model. Exporting to a .onnx file and reloading, I noticed a time saving during session load of about 5-15% depending on the model, and run time is the same. Exporting to the .ort file format and reloading, I noticed a substantial time saving in session load, from 2-7x depending on the model, but as enticing as that is, beware that .ort was just recently enabled by #13913, and I can't yet vouch for its robustness without further, more exhaustive testing (I just tried it with a few models), because interaction with the DML EP might hit new code paths. Also, we should verify whether this issue still applies: #13535.

Got it, I've already code in place to invalidate a persisted optimized model if any config changes.

Great. I'd also include the driver version in your hash, just in case updating the driver changes registered data type support.

@nums11
Contributor

nums11 commented Jul 27, 2023

Closing as resolved.

@nums11 nums11 closed this as completed Jul 27, 2023