Accuracy Problem #2
Hi, I tried the SqueezeNet example. Unfortunately, I found that quantizing only the activations of SqueezeNet v1.0 drops accuracy by 0.006 (0.57058 vs. 0.5768), and SqueezeNet v1.1 drops by 0.004 (0.5839 vs. 0.5793). But in your paper you report only a 0.01% drop. It looks like you just load the approximate float weights into Caffe and measure the accuracy, or do you do something else? This has been troubling me a lot; any suggestions? @gudovskiy
Another question: in your script you divide each value by a maximum value. How do you handle this normalization at inference time? @gudovskiy
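For context, here is a minimal sketch of what such max-normalized power-of-two (shift-based) weight quantization might look like. The function name, bit width, and rounding details are assumptions for illustration, not the repository's actual code; the point is that the max scale is retained and multiplied back, so inference can either fold it into the layer or keep it as a per-tensor float scale:

```python
import numpy as np

def quantize_shift(w, bits=4):
    """Hypothetical sketch: normalize weights by their max magnitude,
    then round each normalized value to the nearest power of two.
    The scale factor is kept and re-applied afterwards."""
    scale = np.max(np.abs(w))            # per-tensor max used for normalization
    if scale == 0:
        return w.copy(), 1.0
    wn = w / scale                       # values now lie in [-1, 1]
    sign = np.sign(wn)
    with np.errstate(divide="ignore"):   # log2(0) -> -inf, patched below
        exp = np.round(np.log2(np.abs(wn)))
    # clip exponents to the range representable with the given bit width
    exp = np.clip(exp, -(2 ** (bits - 1)), 0)
    q = sign * np.exp2(exp)
    q[wn == 0] = 0.0                     # zeros stay exactly zero
    # multiplying by scale restores the original dynamic range,
    # so the "approximate float weights" can be loaded into Caffe directly
    return q * scale, scale
```

With this shape of scheme, the division by the max is undone by the returned scale, which is why the emulated weights can be dropped into a float framework unchanged.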
Hi, I want to know: when you re-implement this on a computer, is SqueezeNet significantly accelerated? I am curious about the speed-up. Thank you very much. @wonderzy Also, the speed-up results on PC are not reflected in the ImageNet results table. Can you give a rough figure? Thank you very much. @gudovskiy
@lawrencewxj It is not accelerated, since this is an emulation. For actual acceleration you need specialized hardware.
OK, I get it, thank you anyway. |
In your paper you say: "In our architecture, inputs and feature maps are in the dynamic fixed-point format."
Do you use this in your ImageNet results, or do you only quantize the weights and keep float inputs? From your Python script, it seems you just use the approximate float weights to produce the shift loss. @gudovskiy
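To make the distinction concrete, here is a minimal sketch of dynamic fixed-point quantization for activations. The function name, 8-bit width, and per-tensor exponent choice are assumptions for illustration, not the paper's exact scheme: the shared exponent is derived from the tensor's range at runtime, which is what makes the format "dynamic":

```python
import numpy as np

def quantize_activations(x, bits=8):
    """Hypothetical sketch of dynamic fixed-point quantization:
    pick a per-tensor exponent from the data range, then round
    all values onto a fixed-point grid with that shared exponent."""
    max_abs = np.max(np.abs(x))
    if max_abs == 0:
        return x.copy()
    # smallest integer exponent such that max_abs <= 2**exp
    exp = int(np.ceil(np.log2(max_abs)))
    step = 2.0 ** (exp - (bits - 1))     # grid spacing for this tensor
    # round to the grid and clip to the signed integer range
    q = np.clip(np.round(x / step), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * step
```

If only the weights were quantized in the reported results, a step like this would simply be absent from the activation path, which is the crux of the question above.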