
Add opset 15 kernels for Pow, BatchNorm, and Shape #8442

Merged
merged 41 commits into master from hari/pow15 on Aug 25, 2021

Conversation


@hariharans29 hariharans29 commented Jul 20, 2021

Description:

Grouping together some simple opset-15 implementations:

Pow had an opset bump with the inclusion of bfloat16 in the list of supported types for the exponent input. (onnx/onnx#3412)

BatchNorm had an opset bump with separate type constraints being introduced for scale and bias. (onnx/onnx#3545)

Shape supports slicing specific indices (onnx/onnx#3580)
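The new Shape slicing can be sketched roughly as follows. This is a hypothetical helper, not ORT's kernel: per the ONNX spec, negative `start`/`end` values are offset by the input rank and both bounds are clamped to `[0, rank]` before taking the `[start, end)` slice of the shape.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical helper (not ORT's implementation) sketching the start/end
// handling that opset-15 Shape defines: negative indices are offset by the
// rank, and both bounds are clamped to [0, rank] before slicing.
std::vector<int64_t> SliceShape(const std::vector<int64_t>& shape,
                                int64_t start, int64_t end) {
  const int64_t rank = static_cast<int64_t>(shape.size());
  if (start < 0) start += rank;
  if (end < 0) end += rank;
  start = std::clamp<int64_t>(start, 0, rank);
  end = std::clamp<int64_t>(end, 0, rank);
  if (end <= start) return {};  // empty output for an empty range
  return {shape.begin() + start, shape.begin() + end};
}
```

For example, an input of shape `{2, 3, 5, 7}` with `start = 1` and `end = -1` would yield `{3, 5}`.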

Motivation and Context
ONNX compliance

@hariharans29 hariharans29 requested a review from a team as a code owner July 20, 2021 19:10
@hariharans29 hariharans29 changed the title Add opset 15 kernels for Pow WIP: Add opset 15 kernels for Pow Jul 20, 2021
@hariharans29 hariharans29 changed the title WIP: Add opset 15 kernels for Pow WIP: Add opset 15 kernels for Pow and BatchNorm Jul 20, 2021
.Alias(3, 1)
.Alias(4, 2)
.TypeConstraint("T", DataTypeImpl::GetTensorType<double>())
.TypeConstraint("U", DataTypeImpl::GetTensorType<double>()),
Member Author

Bug fix: "U" was missing for opset-14. Kept the kernel hash backwards compatible by ensuring the hash is computed only using T.

kCudaExecutionProvider, \
(*KernelDefBuilder::Create()) \
.TypeConstraint("T", DataTypeImpl::GetTensorType<T>()) \
.TypeConstraint("U", DataTypeImpl::GetTensorType<T>()), \

@hariharans29 hariharans29 changed the title WIP: Add opset 15 kernels for Pow and BatchNorm WIP: Add opset 15 kernels for Pow, BatchNorm, and Shape Jul 21, 2021

Status ShapeToInitializer::Apply(Graph& graph, Node& node, RewriteRuleEffect& rule_effect, const logging::Logger&) const {
// Store the statically inferred shape of the input to the Shape operator.
const ONNX_NAMESPACE::TensorShapeProto* input_shape_proto = node.InputDefs()[0]->Shape();
Member Author

Retiring this RewriteRule, as its functionality is already offered by the ConstantFolding GraphTransformer at the same optimization level (Level 1). Updating it would have meant copy/pasting the new Shape processing logic added to ConstantFolding.
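The precondition both the retired rule and ConstantFolding share can be sketched as follows, using a hypothetical stand-in where `std::optional` models symbolic dimensions the way a `TensorShapeProto` `dim_param` would:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical model of a statically inferred shape: nullopt marks a
// symbolic/dynamic dimension, as a TensorShapeProto dim_param would.
using InferredShape = std::vector<std::optional<int64_t>>;

// A Shape node can be folded into an initializer only when every
// dimension of its input is statically known.
std::optional<std::vector<int64_t>> TryFoldShape(const InferredShape& shape) {
  std::vector<int64_t> values;
  values.reserve(shape.size());
  for (const auto& dim : shape) {
    if (!dim) return std::nullopt;  // dynamic dim: cannot fold
    values.push_back(*dim);
  }
  return values;
}
```

If any dimension is symbolic, the node must stay in the graph and be evaluated at run time.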

@hariharans29 hariharans29 changed the title WIP: Add opset 15 kernels for Pow, BatchNorm, and Shape Add opset 15 kernels for Pow, BatchNorm, and Shape Aug 13, 2021
OpTester test("BatchNormalization", 15);
float epsilon = 1e-05f;
float momentum = 0.1f;
int64_t training_mode = 1;
Member Author

Training mode is enabled, but the test still verifies the opset 15 kernel's availability.
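For context, the `momentum` attribute only matters when `training_mode` is enabled; per the ONNX BatchNormalization spec, the running statistics are blended with the current mini-batch statistics. A minimal sketch with a hypothetical helper, not the actual kernel:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical helper sketching the running-statistics update performed in
// training mode, per the ONNX BatchNormalization spec:
//   new_running = running * momentum + batch_stat * (1 - momentum)
float UpdateRunningStat(float running, float batch_stat, float momentum) {
  return running * momentum + batch_stat * (1.0f - momentum);
}
```

With `momentum = 0.1f` as in the test above, the updated running stats lean heavily toward the current batch.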

@@ -67,7 +67,6 @@
 "^test_add_uint8_cpu",
 "^test_div_uint8_cpu",
 // Following tests are for opset 15 ops and are not yet implemented in ORT
-"^test_shape_*",
Member Author

Pow and BatchNorm came with no ONNX tests. Only Shape came with tests.

@pranavsharma
Contributor

LGTM 👍 . Can you resolve the pipelines?

@hariharans29
Member Author

hariharans29 commented Aug 24, 2021

LGTM 👍 . Can you resolve the pipelines?

Yeah, it just needs the newer OperatorKernels.md file checked in (the current version is missing BatchNorm). The other failures are just flakiness. I'll check the file in while resolving any remaining comments from others.

@hariharans29
Member Author

/azp run Windows GPU CI Pipeline

@azure-pipelines

Azure Pipelines successfully started running 1 pipeline(s).

@hariharans29
Member Author

LGTM 👍 . Can you resolve the pipelines?

Fixed the pipelines.

Contributor
@askhade askhade left a comment

LGTM

Contributor
@edgchen1 edgchen1 left a comment

:shipit:

@hariharans29 hariharans29 merged commit cee7952 into master Aug 25, 2021
@hariharans29 hariharans29 deleted the hari/pow15 branch August 25, 2021 19:04