
implement per-channel for quantizelinear and dequantizelinear #4759

Merged
merged 8 commits into from Aug 21, 2020

Conversation

yufenglee
Member

@yufenglee yufenglee commented Aug 11, 2020

Description:

  1. Update ONNX to the latest master.
  2. Implement per-channel support for QuantizeLinear and DequantizeLinear.
  3. Fix a bug in IsScalarOr1ElementVector.

@yufenglee yufenglee requested a review from a team as a code owner August 11, 2020 22:04
@yufenglee yufenglee changed the title implement per-channel for quantizelinear and dequantizelinear Implement per-channel for quantizelinear and dequantizelinear Aug 11, 2020
@yufenglee yufenglee force-pushed the roli/quantize_per_channel branch 2 times, most recently from 4936b47 to 7561a50 Compare August 13, 2020 07:53
@yufenglee yufenglee changed the title Implement per-channel for quantizelinear and dequantizelinear Update onnx and implement per-channel for quantizelinear and dequantizelinear Aug 13, 2020
@yufenglee yufenglee force-pushed the roli/quantize_per_channel branch 2 times, most recently from 57c19b9 to 31394c3 Compare August 17, 2020 14:59
@yufenglee yufenglee changed the title Update onnx and implement per-channel for quantizelinear and dequantizelinear implement per-channel for quantizelinear and dequantizelinear Aug 17, 2020
@yufenglee yufenglee force-pushed the roli/quantize_per_channel branch 4 times, most recently from bcdd2f4 to 0066c24 Compare August 18, 2020 20:47
@@ -47,17 +63,9 @@ Status DequantizeLinear<T>::Compute(OpKernelContext* ctx) const {
   ORT_ENFORCE(x_scale.Shape().NumDimensions() == 1 && x_scale.Shape().Size() == broadcast_dim,
               "x_scale must be 1D tensor with size ",
               broadcast_dim);
-  ORT_ENFORCE(x_zero_point != nullptr && x_zero_point->Shape().NumDimensions() == 1 && x_zero_point->Shape().Size() == broadcast_dim,
-              "x_zero_point must be 1D tensor with size ",
+  ORT_ENFORCE(x_zero_point == nullptr || (x_zero_point->Shape().NumDimensions() == 1 && x_zero_point->Shape().Size() == broadcast_dim),
Contributor

could this just be x_zero_point->Shape()[0] == broadcast_dim?

Contributor

This would apply to the other cases where Shape().Size() is calculated too. Mostly caring about code size here.

Member Author

@yufenglee yufenglee Aug 20, 2020

x_zero_point->Shape()[0] may be out of range even when x_zero_point is not null (e.g. a 0-D scalar tensor has no dimension to index), so the NumDimensions() == 1 check is still needed before indexing.

@askhade
Contributor

askhade commented Aug 21, 2020

Looks good to me... there is just one thing I am trying to understand: how will ONNX testing be handled in CI?
I believe the CI base image already includes all ONNX versions for testing, but does it build ONNX at the latest commit?

Earlier the suggestion was to update the install_onnx.sh file with the latest ONNX commit, but that file no longer exists, so I am wondering how this will be handled. @snnn
(https://github.com/microsoft/onnxruntime/blob/master/docs/How_To_Update_ONNX_Dev_Notes.md)

Contributor

@tracysh tracysh left a comment

The operator changes look good. As discussed offline, I can't tell if bumping up the ONNX commit hash needs any other changes to occur too.

4 participants