
training loss is Nan #49

Closed
daigang896 opened this issue Dec 13, 2021 · 9 comments

Comments

@daigang896

Hello, when the training loss is NaN, what should I do?

@voldemortX
Owner

@daigang896 Hi! What is your exact training script? As far as I know, with this repo's default lr settings, only LSTR sometimes produces NaN. SCNN/RESA only do that when the lr is too high.

@daigang896
Author

Training script: python main_landec.py --epochs=200 --lr=0.15 --batch-size=16 --dataset=tusimple --method=scnn --backbone=vgg16 --mixed-precision --exp-name=vgg16_scnn_tusimple. However, the data used is not TuSimple; it only follows the TuSimple format. I'll set a smaller learning rate first and see how it goes.

@daigang896
Author

@voldemortX

@voldemortX
Owner

@daigang896 FYI, a segmentation method's learning rate should be adjusted according to the total number of pixels in a batch (i.e., not only the batch size but also the training resolution); the relationship is mostly linear or sqrt, unless the exploded loss is the existence loss. Other than smaller learning rates, a longer warmup can sometimes bring better performance, and in rare cases simply re-running the experiment is enough (VGG-SCNN does have a small failure rate in training).
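
For illustration, a minimal sketch of that scaling rule (the numbers and resolution below are hypothetical, not this repo's defaults):

```python
# Hypothetical illustration of the lr scaling rule above, not code from this repo.
def scale_lr(base_lr, base_batch, base_h, base_w, new_batch, new_h, new_w, rule="linear"):
    """Scale a reference lr by the change in total pixels per batch (batch size x resolution)."""
    ratio = (new_batch * new_h * new_w) / (base_batch * base_h * base_w)
    return base_lr * ratio if rule == "linear" else base_lr * ratio ** 0.5

# Halving both training dimensions at the same batch size quarters the pixel count,
# so the linear rule scales lr by 0.25: 0.15 -> 0.0375.
print(scale_lr(0.15, 16, 360, 640, 16, 180, 320, rule="linear"))
```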

Note that other than typical gradient explosion caused by large learning rates, irregular labels (labels with NaN values, for instance) can also cause this issue.
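
A minimal sketch for scanning a dataset for such labels (this assumes a PyTorch Dataset yielding (image, label) tensors; the names are illustrative, not from this repo):

```python
import torch

def find_irregular_labels(dataset):
    """Return indices of samples whose labels contain NaN or Inf values."""
    bad = []
    for i in range(len(dataset)):
        _, label = dataset[i]
        label = label.float()  # isnan/isinf require a floating-point tensor
        if torch.isnan(label).any() or torch.isinf(label).any():
            bad.append(i)
    return bad
```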

@daigang896
Author

@voldemortX OK, I see. Thank you very much. I'll check it carefully.

@daigang896
Author

@voldemortX

I checked the data carefully and there was no NaN in it. However, in the middle of training, the training loss becomes NaN. The data is in TuSimple format. How should I check this situation?

The training command is:
python main_landec.py --epochs=240 --lr=0.12 --batch-size=16 --dataset=tusimple --method=scnn --backbone=erfnet --mixed-precision --exp-name=erfnet_scnn_tusimple

@voldemortX
Owner

voldemortX commented Dec 21, 2021

@daigang896 What is the size of your dataset? 240 epochs seems long. Theoretically, the learning rate decays in proportion to the training length, so with more epochs you'll have a higher lr in the early stages, which makes it easier to explode. You might consider a longer warmup via --warmup-steps.

Or just try lr=0.01 and see if it still produces NaN. For sanity checks, remove --mixed-precision.
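
For reference, here is a sketch of how a longer warmup interacts with the schedule (linear warmup followed by polynomial decay is a common scheme for segmentation training; this is illustrative, not necessarily this repo's exact implementation):

```python
def lr_at_step(step, total_steps, base_lr, warmup_steps, power=0.9):
    """Linear warmup to base_lr, then polynomial decay to zero (illustrative schedule)."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps  # ramp up linearly during warmup
    progress = (step - warmup_steps) / max(total_steps - warmup_steps, 1)
    return base_lr * (1.0 - progress) ** power  # longer training keeps a high lr for more steps
```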

@daigang896
Author

@voldemortX OK, thank you. I'll try it according to your suggestion.

@voldemortX
Owner

voldemortX commented Mar 14, 2022

@daigang896 It seems the problem is mostly resolved, and this issue was opened a long time ago. We also never encountered a similar issue when refactoring the whole codebase, so it is probably not a bug. If you still can't make it work with the new master branch, feel free to reopen!
