
Torch version affects the network's training performance #8

Closed
mli0603 opened this issue Dec 23, 2020 · 7 comments · Fixed by #49
Labels
wontfix This will not be worked on

Comments

mli0603 (Owner) commented Dec 23, 2020

I am opening this issue because, depending on which version of PyTorch you are using, the training result will apparently be different. Here are the 3px error evaluation curves for a minimal example: overfitting the network on a single image for 300 epochs:

[Screenshot from 2020-12-22 20-06-19: 3px error evaluation curves]

The purple line is trained with PyTorch 1.7.0 and the orange line with PyTorch 1.5.1. As you can see, with version 1.7.0 the error rate stays flat at 100%, while with version 1.5.1 the error rate drops. The reason is that the BatchNorm behavior changed between version 1.5.1 and version 1.7.0. In version 1.5.1, if I disable track_running_stats here, both evaluation and training use batch statistics. In version 1.7.0, however, evaluation is forced to use running_mean and running_var, while training still uses batch statistics. With track_running_stats disabled, running_mean is 0 and running_var is 1, which is clearly different from the batch statistics.
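
Not code from this repo, just a minimal sketch (assuming a recent PyTorch build, >= 1.6) that illustrates the behavior difference described above when track_running_stats is flipped off after construction; the input shape and seed are arbitrary:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    x = torch.randn(4, 8, 16, 16) * 5 + 3   # input whose stats differ from N(0, 1)

    bn = nn.BatchNorm2d(8)
    bn.track_running_stats = False           # flipped after construction, buffers stay at 0 / 1

    bn.train()
    out_train = bn(x)                        # batch statistics -> roughly zero mean, unit std

    bn.eval()
    out_eval = bn(x)                         # on >= 1.6, eval falls back to running_mean=0, running_var=1

    print(out_train.mean().item(), out_train.std().item())   # ~0, ~1
    print(out_eval.mean().item(), out_eval.std().item())      # ~3, ~5 (input effectively passed through)

On 1.5.1 both calls use batch statistics, so train and eval outputs match; on newer versions only the training call does, which matches the flat 100% error curve seen in evaluation.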

Therefore, instead of trying to work against torch's implementation, I recommend using PyTorch 1.5.1 if you want to retrain from scratch. Otherwise, if you want to use another PyTorch version, you can replace all BatchNorm layers with InstanceNorm and port the learnt values from BatchNorm (i.e. weight and bias). This is a wontfix problem because it is quite hard to accommodate all torch versions.
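
A rough sketch of the BatchNorm-to-InstanceNorm replacement suggested above; swap_bn_for_in is a hypothetical helper (not part of this repo) and only ports the affine weight and bias:

    import torch.nn as nn

    def swap_bn_for_in(module):
        # Hypothetical helper: recursively replace BatchNorm2d with affine
        # InstanceNorm2d and port the learnt weight/bias, as suggested above.
        for name, child in module.named_children():
            if isinstance(child, nn.BatchNorm2d):
                inorm = nn.InstanceNorm2d(child.num_features, affine=True,
                                          track_running_stats=False)
                inorm.weight.data.copy_(child.weight.data)
                inorm.bias.data.copy_(child.bias.data)
                setattr(module, name, inorm)
            else:
                swap_bn_for_in(child)

With batch size 1, InstanceNorm with affine parameters computes the same per-channel statistics that BatchNorm would compute from the batch, so the ported weight and bias behave as before.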

VitorGuizilini-TRI commented
Hi, do you know if this is still an issue in PyTorch 1.8? Thank you!

mli0603 (Owner, Author) commented Aug 26, 2021

Hi @VitorGuizilini-TRI

Based on my experiments, yes, this is still an issue with PyTorch 1.8.

If you can only use PyTorch 1.8 due to hardware restrictions (e.g. CUDA version), you can replace all BatchNorm layers with InstanceNorm, which should avoid this.

EhrazImam commented
Will it work with PyTorch 1.6?

mli0603 (Owner, Author) commented Sep 13, 2021

Hi @EhrazImam

It looks like the answer is no. Please find the implementation of BN from 1.5.1 (which is the one I was using) here and BN from 1.6.0 here. You will see the change in function signature I mentioned above.

ynjiun commented Dec 27, 2021

Hi, thank you for bringing this issue up front.
First attempt: running your inference code with the prescribed installation (torch 1.5.1) on a 1080 Ti GPU with 11 GB ==> out of memory.
Second attempt: using another machine with an RTX A6000 with 48 GB ==> cannot use torch 1.5.1 because:

RTX A6000 with CUDA capability sm_86 is not compatible with the PyTorch v1.5.1 installation.
The PyTorch v1.5.1 install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the RTX A6000 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

So I am in a dilemma: use torch 1.5.1 and run out of memory, vs. use the A6000 with enough memory but unable to run torch 1.5.1.

For your information, the new generation of GPUs such as the RTX 3090, A6000, etc. will run on torch 1.10.0 with CUDA 11.2 or later (which supports sm_86).

I understand that it is almost impossible to support every version of PyTorch, but how about selectively supporting at least one version compatible with the "future" generation of GPUs, such as PyTorch 1.10 with CUDA 11.2 or later? What do you think?

Thanks a lot for your help in advance!

DeH40 (Contributor) commented Jan 14, 2022

Hi @mli0603 @ynjiun, I found a way to resolve this problem. Following pytorch/pytorch#37823 (comment) and https://discuss.pytorch.org/t/performance-highly-degraded-when-eval-is-activated-in-the-test-phase/3323/66, I modified the code in _disable_batchnorm_tracking to set the running mean and var buffers of the batch norm layers to None, which resolves the problem.

    def _disable_batchnorm_tracking(self):
        """
        Disable BatchNorm tracking stats to reduce dependency on the dataset
        (this acts as InstanceNorm with affine when batch size is 1).
        """
        for m in self.modules():
            if isinstance(m, nn.BatchNorm2d):
                m.track_running_stats = False
                m.running_mean = None
                m.running_var = None
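
A quick sanity check (my own sketch, assuming a recent PyTorch; not part of the patch) that a layer treated this way uses batch statistics even in eval mode, since the forward pass falls back to batch stats when the running buffers are None:

    import torch
    import torch.nn as nn

    bn = nn.BatchNorm2d(8)
    bn.track_running_stats = False
    bn.running_mean = None                   # as in the patch above
    bn.running_var = None

    bn.eval()
    x = torch.randn(1, 8, 16, 16) * 5 + 3
    y = bn(x)
    print(y.mean().item(), y.std().item())   # ~0 and ~1: batch stats are used despite eval mode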

DeH40 added a commit to DeH40/stereo-transformer that referenced this issue Jan 14, 2022
mli0603 (Owner, Author) commented Jan 14, 2022

@DeH40

Oh nice! Thank you very much for this patch. Let me test it on my end too.

mli0603 linked a pull request Jan 15, 2022 that will close this issue
mli0603 pushed a commit that referenced this issue Jan 16, 2022
fix BN issue due to torch version #8