Hello,
I found your CVPR paper to be a very interesting study.
I'm currently trying to reproduce your work using your repository, and I would be grateful if you could give me some advice.
I have a question about the first and last layers in ResNet.
It seems like the code uses full-precision (FP) for both layers, but don't they need to be quantized to 8 bits?
Secondly, I was wondering whether FP pretrained weights are used for DQnet training or if DQnet is trained from scratch without using any pretrained weights.
Thank you for your time and assistance!
Full precision is used for the first and last layers. You can also quantize them to 8 bits, but as verified in previous works, doing so only decreases performance.
FP pretrained weights are used for DQNet training.
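For illustration, here is a minimal sketch (not the repository's actual implementation) of how the first conv of a ResNet can be left in full precision while the remaining conv layers are fake-quantized to 8 bits, starting from FP pretrained weights. `QuantConv2d` and `quantize_convs` are hypothetical names, and the simple per-tensor weight quantization is only a stand-in for whatever scheme DQNet actually uses; the final fc layer is untouched because only conv layers are replaced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class QuantConv2d(nn.Module):
    """Illustrative wrapper that fake-quantizes conv weights to n_bits (hypothetical)."""
    def __init__(self, conv, n_bits=8):
        super().__init__()
        self.conv = conv
        self.n_bits = n_bits

    def forward(self, x):
        w = self.conv.weight
        qmax = 2 ** (self.n_bits - 1) - 1
        scale = w.abs().max() / qmax                      # per-tensor scale
        w_q = torch.round(w / scale).clamp(-qmax - 1, qmax) * scale
        return F.conv2d(x, w_q, self.conv.bias, self.conv.stride,
                        self.conv.padding, self.conv.dilation, self.conv.groups)

def quantize_convs(module, skip=("conv1",)):
    """Recursively replace Conv2d layers with QuantConv2d,
    skipping the names in `skip` (here: the ResNet stem conv)."""
    for name, child in module.named_children():
        if name in skip:
            continue                                      # keep the first layer in FP
        if isinstance(child, nn.Conv2d):
            setattr(module, name, QuantConv2d(child))
        else:
            quantize_convs(child, skip=())                # blocks' inner convs are quantized

# Start from FP pretrained weights, then quantize all convs except the stem;
# the final fc layer stays in full precision since Linear layers are not replaced.
model = resnet18(pretrained=True)
quantize_convs(model)
print(model.conv1)  # still nn.Conv2d (full precision)
print(model.fc)     # still nn.Linear (full precision)
```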