
Acc drops significantly during the last epoch of stage1 #16

Closed
FANG-Xiaolin opened this issue Apr 16, 2018 · 18 comments

Comments

@FANG-Xiaolin

Hi Xingyi,
After training the 2D hourglass component for 50+ epochs, the accuracy is approximately 83%, but after the 60th epoch the accuracy suddenly drops to 43%.

Here's the log.
[screenshot of the training log]

@xingyizhou
Owner

Hi,
As far as I know, it should be caused by a PyTorch internal bug in BN. You can comment out model.eval() in testing to see if the validation acc gets better (but it still won't match the desired performance). The bug does not seem to be deterministically reproducible: re-training the network once more (ideally on another machine) should give different results. Or you can downgrade your pytorch version to 0.1.12, a version where I haven't met or heard about this bug (but still not guaranteed). Please let me know if the above solutions help. Thanks!
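
For reference, a minimal sketch of what the first suggestion looks like (the toy model and data below are placeholders, not this repo's code): run the validation pass without switching to eval(), so BN normalizes with the current batch's statistics instead of the suspect running stats.

```python
import torch
import torch.nn as nn

# Toy stand-in for the network; any model containing BN layers behaves the same way.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU())
val_batch = torch.randn(6, 3, 64, 64)

# model.eval()   # <- the call to comment out while debugging
model.train()    # BN keeps using per-batch statistics, as it does during training
output = model(val_batch)
print(output.size())
```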

@FANG-Xiaolin
Author

I tried another 3 times. The training acc is approximately 0.87 during the last epoch (the 60th), but the validation acc changes every time and is always lower than 0.50. The validation acc is around 0.80 at the 55th epoch, so it seems there is a sudden drop during the last epoch; I also notice that the training loss gets slightly higher during the last epoch.

@xingyizhou
Owner

Hi,
Thanks for reporting the problem. However, I don't have another solution yet and will keep looking into it. It might not be a bug in this code, since an independent implementation of HourglassNet also has this problem (bearpaw/pytorch-pose#33), though I am not sure whether the bug comes from the network architecture. People there suggest using a learning rate of 1e-4; you can give that a try and see if the bug still appears.
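
For reference, a minimal sketch of the suggested change, assuming a standard torch.optim setup (the optimizer type and the toy model here are illustrative, not necessarily what this repo uses):

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, 3)  # stand-in for the hourglass network
# Lower learning rate suggested in bearpaw/pytorch-pose#33:
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
```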

@FANG-Xiaolin
Author

Hi,
Thanks for your advice. Yes, it works when using LR 1e-4; the val acc is 0.80+ this way.

@xingyizhou
Owner

Hi,
I have investigated this problem (on another project; I cannot reproduce the bug on this project). It seems to be caused by very large intermediate features (e.g. > 10000) before batch normalization. When train() mode is on, the batch is normalized by its own statistics, so training is OK. But when eval() mode is on, a slight difference between the intermediate features and the BN mean/std recorded during training results in large offsets in the output. I don't know the root cause of the problem, but it looks mathematically reasonable. However, downgrading the PyTorch version to 0.1.12 eliminates the problem. Please notify me if you have any other observations on this bug. Thanks!
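
A small, self-contained sketch (not from this repo) of the effect described above: when the pre-BN features are very large, a drift of only ~1% between the validation features and the running mean learned in training shows up as an offset of about two standard deviations in eval() mode.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1)

# "Training": fit running_mean / running_var to huge intermediate features.
bn.train()
for _ in range(200):
    x = 10000.0 + 50.0 * torch.randn(6, 1)   # features around 10000, batch size 6
    bn(x)

# "Validation": same scale of features, shifted by only ~1% of their magnitude.
bn.eval()
x_val = 10100.0 + 50.0 * torch.randn(6, 1)
print(bn(x_val).abs().mean())                 # roughly 2.0 instead of roughly 0.8
```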

@FANG-Xiaolin
Author

Hi,
Yes I think it is reasonable. Sure I will notify you if I observe something new. Thanks for your reply!

@ssnl

ssnl commented Jun 15, 2018

IIRC, your repo sets batch size to 1. If that is the case, it's not really a PyTorch bug. Running stats with batch size = 1 are unstable by themselves.

@xingyizhou
Owner

Thanks for the suggestion! The training batch size is 6 and the testing batch size is 1. When testing, eval() mode is on, so the batch size does not affect the computation.

@ssnl

ssnl commented Jun 15, 2018 via email

@xingyizhou
Owner

Hi all,
As pointed out by @leoxiaobin, turning off cudnn for the BN layers resolves the issue. This can be done either by setting torch.backends.cudnn.enabled = False in main.py, which disables cudnn for all layers and slows down training by about 1.5x, or by re-building pytorch from source with cudnn hacked out of the BN layers: https://github.com/pytorch/pytorch/blob/e8536c08a16b533fe0a9d645dd4255513f9f4fdd/aten/src/ATen/native/Normalization.cpp#L46 .
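
A minimal sketch of the first option (the only requirement is that the flag is set before the model is built and training starts):

```python
# main.py, near the top:
import torch

torch.backends.cudnn.enabled = False   # disables cuDNN for all layers (~1.5x slower training)
```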

@FANG-Xiaolin
Author

Got it. Thanks.

@xingyizhou
Owner

Oh, I still want this issue to stay open while we wait for better solutions...

@FANG-Xiaolin
Author

Sure! My bad.

@wangg12

wangg12 commented Jul 28, 2019

@ssnl @xingyizhou Does this bug still exist with pytorch >= 1.0?

@ujsyehao

@wangg12 I am doing experiments to observe if the bug exists in pytorch >= 1.0.

@qiangruoyu

@wangg12 I am doing experiments to see whether the bug exists in pytorch >= 1.0.

Did you encounter this error with pytorch >= 1.0?

@ygean

ygean commented May 26, 2020

@ujsyehao Hi, may I ask how your experiments turned out?

@sisrfeng

Hi all,
As pointed out by @leoxiaobin, turning off cudnn for the BN layers resolves the issue. This can be done either by setting torch.backends.cudnn.enabled = False in main.py, which disables cudnn for all layers and slows down training by about 1.5x, or by re-building pytorch from source with cudnn hacked out of the BN layers: https://github.com/pytorch/pytorch/blob/e8536c08a16b533fe0a9d645dd4255513f9f4fdd/aten/src/ATen/native/Normalization.cpp#L46 .

torch.backends.cudnn.enabled = False in main.py
Should it be "torch.backends.cudnn.benchmark = False"?

If I have followed this step, I don't need to modify main.py, right? :
For other pytorch versions, you can manually open torch/nn/functional.py, find the line with torch.batch_norm, and replace torch.backends.cudnn.enabled with False
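
For reference, a sketch of the same idea without editing the installed torch/nn/functional.py by hand; it assumes F.batch_norm reads torch.backends.cudnn.enabled at call time, which is what the quoted instruction relies on (the wrapper name is illustrative):

```python
import torch
import torch.nn.functional as F

_orig_batch_norm = F.batch_norm

def _batch_norm_without_cudnn(*args, **kwargs):
    # Temporarily force the cuDNN flag off for this one BN call, then restore it.
    prev = torch.backends.cudnn.enabled
    torch.backends.cudnn.enabled = False
    try:
        return _orig_batch_norm(*args, **kwargs)
    finally:
        torch.backends.cudnn.enabled = prev

F.batch_norm = _batch_norm_without_cudnn   # install the patch before building the model
```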
