
Imagenet Accuracy quickly dropping #6

Closed
mostafaelhoushi opened this issue Jun 10, 2020 · 4 comments

@mostafaelhoushi

When I try this command in the ImageNet folder:

python main.py -a resnet18 -b 5 --data <path to imagenet directory>

I get this log. Is that expected?

[screenshot of the training log showing accuracy quickly dropping]

@yhhhli (Owner) commented Jun 10, 2020

Hi, have you tried using a lower learning rate?

@yhhhli (Owner) commented Jun 10, 2020

Hello @mostafaelhoushi,

I trained our model with our internal framework, so this released training code may have some bugs. Thanks for discovering this problem; here are some suggestions, and maybe we can find the cause together:

1. Try a full-precision model by setting bit=32. If that model can be trained, the problem must be in the quantization.
2. Try not learning the clipping threshold: set the LR of alpha to 0 and see if the model can be trained (see the sketch after this list).
3. If the full-precision model cannot be trained either, the problem must be in the hyper-parameters; try a lower LR.
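For reference, a minimal PyTorch sketch of suggestion 2: put the clipping thresholds in their own optimizer parameter group so their learning rate can be zeroed out independently of the weights. The assumption that the thresholds are parameters whose names contain `alpha` is illustrative only; adapt the filter to the repo's actual quantizer attribute names.

```python
import torch

def build_optimizer(model, lr=0.1, alpha_lr=0.0):
    # Split parameters into the clipping thresholds (assumed here to have
    # "alpha" in their name) and everything else, each with its own LR.
    alpha_params = [p for n, p in model.named_parameters() if 'alpha' in n]
    other_params = [p for n, p in model.named_parameters() if 'alpha' not in n]
    return torch.optim.SGD(
        [{'params': other_params, 'lr': lr},
         {'params': alpha_params, 'lr': alpha_lr}],  # alpha_lr=0 freezes alpha
        momentum=0.9, weight_decay=1e-4)
```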

@mostafaelhoushi (Author)

Thanks @yhhhli. I played around with the learning rate and batch size. With a batch size of 128 and a learning rate of 0.001, the training accuracy starts at around 70% and soon reaches 89%. There may be an even better combination of learning rate and batch size.

Just a side note: looking at the code, the default batch size seems to be 1024. However, when we run main.py without setting the batch size, the log in the screenshot shows 256,234 batches per epoch. Since the ImageNet training set has roughly 1.28 million images, that implies a batch size of about 5 (see the check below), not the supposed default of 1024. A batch size that small would be expected to cause this accuracy degradation with the default learning rate.
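A quick sanity check of that arithmetic (the ILSVRC-2012 training-set size is a fixed constant; the batch count is read off the screenshot):

```python
# Implied batch size from the number of batches logged per epoch.
imagenet_train_images = 1_281_167   # ILSVRC-2012 training set size
batches_per_epoch = 256_234         # count shown in the log screenshot
print(imagenet_train_images / batches_per_epoch)  # ~5.0 -> batch size of 5
```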

@mostafaelhoushi (Author)

I found the cause of the problem: I mistakenly used -b 5 to try to set the bit-width to 5, while -b actually sets the batch size. Sorry for that!
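For anyone hitting the same confusion: -b is the conventional short form of --batch-size in the stock PyTorch ImageNet example that scripts like this main.py follow, as the thread above confirms. A rough sketch of the distinction; the --bit flag below is an assumption for illustration, not necessarily this repo's exact option name:

```python
import argparse

parser = argparse.ArgumentParser()
# -b is the short form of --batch-size, as in the PyTorch ImageNet example.
parser.add_argument('-b', '--batch-size', default=256, type=int,
                    help='mini-batch size')
# A bit-width needs its own flag; "--bit" here is a hypothetical name.
parser.add_argument('--bit', default=5, type=int,
                    help='quantization bit-width (hypothetical flag)')

args = parser.parse_args(['-b', '5'])
print(args.batch_size)  # 5 -- so "-b 5" shrinks the batch, not the bit-width
```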
