Freeze the statistics of BN layers? #17
I have discussed this with isht7, and I found that the train.py in his code first sets the model to evaluate mode, whereas your code sets the model to train mode. This setting keeps changing the statistics of the BN layers during training. I don't know whether I have misunderstood your code.
Yes, I keep updating the statistics of the BN layers during training. This is very old code for semantic segmentation, and some details may not be maintained correctly. In fact, it is helpful to update the parameters and statistics of the BN layers during training with a large input size and batch size. Without sync BN, this code cannot achieve higher performance. You could run the code in both eval mode and train mode and compare the performance. P.S. I have implemented PSPNet, DeepLabv3, and DeepLabv3+ with high performance; I plan to release the code in November.
OK, thanks for your reply!
For small batch sizes, is it possible to freeze the statistics of the BN layers? Which command makes this possible?
Hello, did you implement "frozen BN layers"? I find that the running means and running vars still keep changing during training.
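As a follow-up to the question above: a common way to freeze BN statistics in PyTorch is to switch the BatchNorm modules back to eval mode after calling `model.train()`, and optionally stop gradients on their affine parameters. The snippet below is a sketch, not code from this repository; the helper name `freeze_bn` is my own.

```python
import torch
import torch.nn as nn

def freeze_bn(model: nn.Module) -> None:
    """Freeze all BatchNorm layers in `model`: put them in eval mode so
    running_mean / running_var stop updating, and detach their affine
    parameters (gamma, beta) from gradient updates."""
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.eval()  # eval mode: forward pass uses (and no longer updates) running stats
            if m.affine:
                m.weight.requires_grad = False
                m.bias.requires_grad = False

# Usage sketch: freeze BN after putting the rest of the model in train mode.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8))
model.train()
freeze_bn(model)
```

Note that any later call to `model.train()` flips the BN modules back into training mode, so `freeze_bn` must be re-applied after every such call (for example, at the start of each epoch).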