Added code to support Softmaxgrad for DNNL EP #9022
Conversation
Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
Maintainer, please remember to run:

/azp run Linux DNNL CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).
I am looking into the failure. I had disabled a few checks in Softmax, which I suspect is causing this model failure.
This will fix the test failures from the ONNX repo. Signed-off-by: Chethan Palangotu Keshava <chethan.palangotu.keshava@intel.com>
Can you please run this again? I have pushed a fix for these failures.

/azp run Linux DNNL CI Pipeline

Azure Pipelines successfully started running 1 pipeline(s).

/azp run MacOS NoContribops CI Pipeline, Windows CPU CI Pipeline, Windows GPU CI Pipeline, Windows GPU TensorRT CI Pipeline, Windows WebAssembly CI Pipeline, orttraining-amd-gpu-ci-pipeline, orttraining-linux-ci-pipeline, orttraining-linux-gpu-ci-pipeline, orttraining-ortmodule-distributed

/azp run Linux CPU CI Pipeline, Linux CPU Minimal Build E2E CI Pipeline, Linux CPU x64 NoContribops CI Pipeline, Linux GPU CI Pipeline, Linux GPU TensorRT CI Pipeline, Linux Nuphar CI Pipeline, Linux OpenVINO CI Pipeline, MacOS CI Pipeline

Azure Pipelines successfully started running 8 pipeline(s).
/azp run onnxruntime-python-checks-ci-pipeline

Azure Pipelines successfully started running 1 pipeline(s).
Signed-off-by: Chethan Palangotu Keshava chethan.palangotu.keshava@intel.com
Description: Added code to support the SoftmaxGrad operator for the DNNL EP and broadened the supported configurations of Softmax.
Motivation and Context
Necessary operator for complete execution of transformer model graphs.
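For reference, the backward computation a SoftmaxGrad operator performs can be sketched in NumPy. This is an illustrative sketch only: the function names are assumptions, and the actual DNNL EP implementation dispatches to oneDNN softmax primitives rather than hand-written code like this.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the per-axis max before exponentiating.
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def softmax_grad(dy, y, axis=-1):
    # Standard softmax backward pass, given the upstream gradient dY and the
    # forward output Y:  dX = Y * (dY - sum(dY * Y))  along the softmax axis.
    # (Illustrative; the DNNL EP uses oneDNN's softmax backward primitive.)
    return y * (dy - np.sum(dy * y, axis=axis, keepdims=True))
```

A useful sanity check: because softmax outputs sum to 1 along the reduction axis, the gradient dX sums to zero along that axis, and the result matches a finite-difference estimate of the loss `sum(dY * softmax(X))`.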