Some results about resnet20 on cifar10 #21

Open
xiaolonghao opened this issue Sep 18, 2021 · 0 comments
xiaolonghao commented Sep 18, 2021

Hello, I am working on quantization and tried to reproduce your results, but unfortunately I get different numbers. The full-precision accuracy reported in your paper is 91.6, while the code produces 92.9, which is noticeably higher; yet for 4-bit quantization the paper reports 92.3. Comparing quantized results against a lower full-precision baseline seems unfair, so could you please confirm whether the numbers reported in the paper are correct?

Also, does the paper report the test accuracy at the end of training, or the best test accuracy obtained during training? Since the paper's numbers differ from what the code produces, an explanation would be appreciated.

Finally, my repeated runs give different results. With all other hyperparameters kept the same, should I expect a large gap between multi-GPU and single-GPU training results? Thank you.
[Attachment: Image 1]
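
To make the "final vs. best test accuracy" question concrete, here is a minimal, self-contained sketch; it is not taken from this repository, and `train_one_epoch` and `evaluate` are placeholder stubs standing in for the repo's actual training and evaluation code. It only illustrates how the two reported numbers can diverge over a training run:

```python
# Minimal sketch (not from this repo): tracking best vs. final-epoch test accuracy.
# `train_one_epoch` and `evaluate` are hypothetical placeholders.

import random

def train_one_epoch():
    pass  # stand-in for the real training step

def evaluate():
    # stand-in: returns a fake test accuracy in [91.0, 93.0)
    return 91.0 + random.random() * 2.0

num_epochs = 5
best_acc = 0.0
final_acc = 0.0
for epoch in range(num_epochs):
    train_one_epoch()
    final_acc = evaluate()               # accuracy after this epoch
    best_acc = max(best_acc, final_acc)  # best accuracy seen so far

print(f"final-epoch accuracy: {final_acc:.2f}, best accuracy: {best_acc:.2f}")
# A paper may report best_acc while a rerun of the code logs final_acc,
# which could explain part of a gap such as 91.6 vs. 92.9.
```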
