Quantize_first_last_layer #2

Open
mmmiiinnnggg opened this issue Jul 1, 2022 · 1 comment

@mmmiiinnnggg

Hi! I noticed that in your code the first layer defaults to bits_weights=8 and bits_activations=32, which does not match the claim in your paper: "For the first and last layers of all quantized models, we quantize both weights and activations to 8-bit." I also see an accuracy drop when I set bits_activations to 8 for the first layer. Could you please explain the reason? Thanks!
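
For concreteness, the two settings being compared might look like this; the parameter names bits_weights and bits_activations come from the question above, while everything else is assumed for illustration:

```python
# Hypothetical per-layer bit-width settings (illustration only, not the
# repository's actual configuration code).
first_layer_default = {"bits_weights": 8, "bits_activations": 32}  # code default
first_layer_paper = {"bits_weights": 8, "bits_activations": 8}     # paper's stated setting
```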

@liujingcs
Collaborator

We do not apply quantization to the input images since they have already been quantized to 8-bit during image preprocessing. The activations entering the first layer are the input images themselves, so setting bits_activations=32 for that layer simply leaves those already-8-bit inputs untouched rather than quantizing them a second time.
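
For intuition, here is a minimal sketch (assuming a standard uint8 image-decoding pipeline, not this repository's actual preprocessing code) of why the first layer's float inputs already carry at most 8 bits of information per channel:

```python
import numpy as np

# Images are decoded to uint8, so each channel value is one of 256 levels.
pixels = np.random.randint(0, 256, size=(224, 224, 3), dtype=np.uint8)

# Typical preprocessing casts to float32 and rescales. The dtype widens,
# but the set of distinct values per channel does not grow.
x = pixels.astype(np.float32) / 255.0

print(np.unique(pixels).size)  # at most 256 distinct integer levels
print(np.unique(x).size)       # same count after rescaling: still at most 256
```

One plausible reading of the observed accuracy drop is that forcing bits_activations=8 re-discretizes inputs that are already confined to 256 levels, and the quantizer's grid need not align with the original pixel levels.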
