
[DynamicQuant] Questions about pretrained weight + Some inconsistency #49

Closed
jyeah05 opened this issue May 9, 2023 · 1 comment

Comments


jyeah05 commented May 9, 2023

Hello,

I found your CVPR paper to be a very interesting study.
I'm currently trying to reproduce your work using your repository, and I would be grateful if you could give me some advice.

I have a question about the first and last layers in ResNet.
It seems the code keeps both layers in full precision (FP), but don't they need to be quantized to 8 bits?

Secondly, I was wondering whether FP pretrained weights are used for DQnet training or if DQnet is trained from scratch without using any pretrained weights.

Thank you for your time and assistance!

@liu-zhenhua
Collaborator

  1. Full precision is used for the first and last layers. You can quantize them to 8 bits as well, but as verified in previous works this only decreases performance. (A sketch of how these two layers are typically kept in FP is shown below.)
  2. The FP pretrained weights are used for DQNet training; it is not trained from scratch.
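
Below is a minimal PyTorch sketch of both points, assuming a ResNet-style model: the first convolution (`conv1`) and the final classifier (`fc`) are left in full precision while the remaining convolutions are swapped for a quantized variant, and the ImageNet-pretrained FP weights are loaded before the swap. `QuantConv2d` and `quantize_resnet` are hypothetical illustrations, not the modules actually used in this repository.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class QuantConv2d(nn.Conv2d):
    """Hypothetical quantized conv: fake-quantizes its weights to `bits` bits on the fly."""

    def __init__(self, *args, bits=4, **kwargs):
        super().__init__(*args, **kwargs)
        self.bits = bits

    def forward(self, x):
        # Symmetric per-tensor fake quantization of the weights (illustrative only).
        qmax = 2 ** (self.bits - 1) - 1
        scale = self.weight.detach().abs().max() / qmax
        w_q = torch.clamp(torch.round(self.weight / scale), -qmax, qmax) * scale
        return F.conv2d(x, w_q, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)


def quantize_resnet(model, bits=4):
    """Swap conv layers for quantized ones, skipping the first conv; `fc` is never touched."""
    targets = [(n, m) for n, m in model.named_modules()
               if isinstance(m, nn.Conv2d) and n != "conv1"]
    for name, module in targets:
        parent = model.get_submodule(name.rsplit(".", 1)[0]) if "." in name else model
        child = name.rsplit(".", 1)[-1]
        q = QuantConv2d(module.in_channels, module.out_channels, module.kernel_size,
                        stride=module.stride, padding=module.padding,
                        bias=module.bias is not None, bits=bits)
        q.load_state_dict(module.state_dict())  # keep the pretrained FP weights
        setattr(parent, child, q)
    return model


# Start from ImageNet-pretrained FP weights, then swap in the quantized layers;
# model.conv1 and model.fc remain full precision.
model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
model = quantize_resnet(model, bits=4)
```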
