
Quantize nearest_interp and nearest_interp_v2 #38622

Merged
merged 3 commits into PaddlePaddle:develop from the quant_nearest_interp branch on Jan 5, 2022

Conversation

@wozna (Contributor) commented on Dec 30, 2021

PR types

Performance optimization

PR changes

OPs

Describe

This PR adds:

  • quantization of the nearest_interp and nearest_interp_v2 operators for the QAT and PTQ methods,
  • unit tests,
  • a check in cpu_quantize_placement_pass.cc that verifies the list of operators entered by the user against the set of operators supported by quantization (a minimal sketch of such a check is shown below).

Quantizing nearest_interp improved performance by 2% in the "faster_rcnn" model and by almost 11% in the "ocr_det" model.
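A minimal sketch of what such a validation could look like. The names SupportedQuantizeOpTypes and ValidateQuantizeEnabledOpTypes are illustrative assumptions, not the actual identifiers used in cpu_quantize_placement_pass.cc; only the operator list itself is taken from the diff reviewed below.

#include <stdexcept>
#include <string>
#include <unordered_set>

// Illustrative only: the set of operators supported by INT8 quantization,
// matching the list added in this PR.
static const std::unordered_set<std::string>& SupportedQuantizeOpTypes() {
  static const std::unordered_set<std::string> supported{
      "concat", "conv2d", "depthwise_conv2d", "elementwise_add", "fc",
      "matmul", "nearest_interp", "nearest_interp_v2", "pool2d",
      "prior_box", "reshape2", "transpose2", "fusion_gru", "fusion_lstm"};
  return supported;
}

// Hypothetical helper: reject any user-provided op type that the
// quantization pass does not support.
void ValidateQuantizeEnabledOpTypes(
    const std::unordered_set<std::string>& user_op_types) {
  for (const auto& op_type : user_op_types) {
    if (SupportedQuantizeOpTypes().count(op_type) == 0) {
      throw std::invalid_argument("Operator '" + op_type +
                                  "' is not supported by INT8 quantization.");
    }
  }
}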

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

std::unordered_set<std::string>(
{"concat", "conv2d", "depthwise_conv2d", "elementwise_add", "fc",
"matmul", "nearest_interp", "nearest_interp_v2", "pool2d",
"prior_box", "reshape2", "transpose2", "fusion_gru", "fusion_lstm",
Contributor:

Just for my curiosity, what is prior_box? Do we support that operation?

Contributor:

I have found it already; I think it is the only op here that doesn't end with "mkldnn_op.cc".

wozna (PR author):

Yes, you are right. We are using the native version here, which supports all data types, including int8.

@jakpiase (Contributor) left a comment

LGTM

wozna requested a review from Aganlengzi on January 3, 2022, 17:06
@lidanqing-intel (Contributor) left a comment

LGTM.
This PR adds nearest_interp int8 support and improves the "faster_rcnn" int8 model by 2%, the "ocr_det" int8 model by 11%, and the "Retinanet" int8 model by 2%.

@lidanqing-intel (Contributor)

@baoachun Can I merge this PR?

wozna requested a review from lidanqing-intel and removed the review requests for Aganlengzi and sfraczek on January 4, 2022, 08:09
@baoachun (Contributor) left a comment

LGTM

@lidanqing-intel (Contributor) left a comment

LGTM

@lidanqing-intel (Contributor)

@Aganlengzi Hi, baoachun has approved. Please merge this PR, thanks!

Aganlengzi merged commit 1456b02 into PaddlePaddle:develop on Jan 5, 2022
wozna deleted the quant_nearest_interp branch on February 24, 2023, 16:06