@lgeiger lgeiger commented May 20, 2021

What do these changes do?

This PR is a bit subtle: it changes the input quantization ranges to None, which is consistent with the float input type we use. This prevents the TF quantization pass from adding unnecessary Quantize/Dequantize ops at the beginning of the network, which would otherwise not be removed later.
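The effect can be sketched with a toy model of the converter's boundary handling (the names and data structures below are hypothetical illustrations, not the actual LCE or TFLite converter internals): a float input tensor that still carries a leftover quantization range triggers insertion of a Quantize/Dequantize pair, while a range of None does not.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class InputTensor:
    # "float32" here matches the float input type the converter uses.
    dtype: str
    # Leftover (min, max) quantization range, or None.
    quantization: Optional[Tuple[float, float]]

def boundary_ops(tensor: InputTensor) -> List[str]:
    """Toy version of the quantization pass at the network boundary:
    a float input that still carries a quantization range gets a
    Quantize/Dequantize pair inserted; with the range set to None,
    nothing is added."""
    if tensor.dtype == "float32" and tensor.quantization is not None:
        return ["Quantize", "Dequantize"]
    return []

# Before this change: float input with a lingering (min, max) range.
assert boundary_ops(InputTensor("float32", (-3.0, 3.0))) == ["Quantize", "Dequantize"]
# After this change: range set to None, consistent with the float input type.
assert boundary_ops(InputTensor("float32", None)) == []
```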

How Has This Been Tested?

CI, plus manual verification of the example from #637.

Related issue number

Fixes #637

@lgeiger lgeiger added the bug Something isn't working label May 20, 2021
@lgeiger lgeiger requested a review from a team May 20, 2021 14:42
@AdamHillier AdamHillier merged commit 02ea45b into main May 20, 2021
@AdamHillier AdamHillier deleted the fix-lingering-quantize branch May 20, 2021 15:07

Successfully merging this pull request may close these issues.

Spurious Quantize/Dequantize on int8 quantized model with LCE converter
