
EPE in sintel clean lower than the original #23

Closed
jeffbaena opened this issue May 22, 2019 · 12 comments

Comments

@jeffbaena

Dear @sniklaus, thanks for this nice implementation. I find it really useful that I don't have to worry about CUDA versioning.
I am writing a paper that uses this implementation, and to be sure of it I checked the EPE on Sintel training clean. I got 1.81, which is much lower than what is stated by the authors.
Is this network fine-tuned on Sintel? Does my result make sense?

Thanks,
Stefano

@sniklaus
Owner

Thank you for your feedback! What is the EPE when you use the official repository? Specifically, the pwc_net.pth.tar and the pwc_net_chairs.pth.tar models. Thanks!

@jeffbaena
Author

Unfortunately I cannot run the official network. It requires CUDA 8.0, but on my cluster I am forced to use CUDA 10 (if you have any hint on how to make it work, it would be greatly appreciated).
So I ran your net on the 1041 samples of "Sintel training clean" and got 1.81 EPE.
According to the paper, the closest EPE value to the one I got is the one circled in red, meaning this could be the version fine-tuned on Sintel.

Could you help me clarify which version we are using?

[Screenshot: table of EPE values from the paper, with the matching value circled in red]

NOTE: my result of 1.81 actually seems to match the value reported in your link, so I think your network is working properly: https://github.com/NVlabs/PWC-Net/tree/master/PyTorch
But the link does not specify which version it is.
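For reference, the average EPE figure discussed here is just the mean L2 distance between predicted and ground-truth flow vectors over all pixels of the dataset. A minimal sketch (the helper name is illustrative, not from the repository):

```python
import torch

def average_epe(flow_pred, flow_gt):
    # flow tensors shaped [B, 2, H, W]; the endpoint error is the per-pixel
    # L2 distance between predicted and ground-truth flow vectors, averaged
    # over every pixel of every sample
    return torch.norm(flow_pred - flow_gt, p=2, dim=1).mean()
```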

Thanks for your help,
Stefano

@sniklaus
Owner

This comment might be useful: NVlabs/PWC-Net#26 (comment)

However, I do not think that there is a corresponding entry in the paper. But please correct me if I am wrong, I am sure others would be happy to get an answer to this question as well!

For a related discussion, please see: #9

@jeffbaena
Author

Thanks for the high-quality references! I can confirm that your default version of PWC-Net corresponds to PWC-Net_ROB from the paper "Models Matter, So Does Training" (https://arxiv.org/pdf/1809.05571.pdf), trained on Sintel and scoring exactly 1.81 px EPE on Sintel training clean.
Congratulations on your great work!
Stefano
[Screenshot: PWC-Net_ROB entry from the paper's results table]

@sniklaus
Owner

Awesome, thank you for the clarification!

@Etienne66
Contributor

I managed to install cudatoolkit 9.2, but my card was too new for cudatoolkit 8.0, and I was having to change my code too much for the older versions of PyTorch.

I still couldn't reproduce the same numbers as in the paper, but at least I seem to be in the ballpark. It could be because I have CUDA 11.6.1 installed; I don't feel like downgrading that installation too, and there is no longer a Miniconda for Python 2.7 available.

Older versions of nn.functional.grid_sample don't support the align_corners argument, but supposedly it behaved as align_corners=True before 1.3.0, and it is set to False in the latest version of the code. When I tried PyTorch 1.4.0 with cudatoolkit 9.2/10.0, I got the same results as with PyTorch 1.10.2.
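The behavioral change in `grid_sample` is easy to see directly: with `align_corners=True`, the normalized coordinates -1/1 land on the *centers* of the border pixels, so an identity grid reproduces the input exactly, while with `align_corners=False` the same grid samples shifted locations. A self-contained sketch:

```python
import torch
import torch.nn.functional as F

# one row of pixels: [0, 1, 2, 3]
inp = torch.arange(4.0).view(1, 1, 1, 4)

# an "identity" sampling grid spanning [-1, 1] across the width
xs = torch.linspace(-1.0, 1.0, 4).view(1, 1, 4, 1)
grid = torch.cat([xs, torch.zeros_like(xs)], dim=3)  # [N, H_out, W_out, 2]

# align_corners=True: -1/1 hit the corner pixel centers -> exact identity
# align_corners=False: -1/1 hit the outer pixel edges -> shifted samples
out_true = F.grid_sample(inp, grid, mode='bilinear', padding_mode='zeros', align_corners=True)
out_false = F.grid_sample(inp, grid, mode='bilinear', padding_mode='zeros', align_corners=False)
```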

| Cudatools / PyTorch / Conda | Sintel Clean EPE (training) | Sintel Final EPE (training) |
| --- | --- | --- |
| 8.0 / 0.2 / 2.7 | (1.81) | (2.29) |
| 9.2 / 1.1.0 / 37_4.11 | 2.80980 | 3.28290 |
| 10.2 / 1.10.2 / 39_4.11 | 1.49756 | 1.94308 |

Note: Numbers in parentheses are from the paper for PWC-Net_ROB.

@sniklaus
Owner

Thanks for sharing your findings! So it seems like you got better results than the paper claimed as long as you use a recent PyTorch version? I wonder why this is the case, should you find any issues in my implementation then please don't hesitate to let me know.

@Etienne66
Contributor

> Thanks for sharing your findings! So it seems like you got better results than the paper claimed as long as you use a recent PyTorch version? I wonder why this is the case, should you find any issues in my implementation then please don't hesitate to let me know.

The difference between PyTorch versions seemed to come down to the align_corners setting. Not matching the paper was probably due to my NVIDIA CUDA Toolkit version, which I didn't change for these tests; I'm sure NVIDIA has made improvements in the past few years. Perhaps if I had used CUDA Toolkit 9.2 along with cudatoolkit=9.2 and PyTorch 1.4.0, I could have matched the paper.

I just do this as a hobby; it's not like any of my results will be published in a paper, but I will put them out here on GitHub. My goal is to get much better performance than the paper by changing up the training even more. I've learned a lot in the past year, and I still find this stuff interesting. If I find any more issues, I'll be happy to share.

@sniklaus
Owner

You can try the following backwarp when using an older PyTorch that doesn't support align_corners=False.

```python
import torch

# caches for the sampling grid and the ones-channel, keyed by flow shape
backwarp_tenGrid = {}
backwarp_tenPartial = {}

def backwarp(tenInput, tenFlow):
    if str(tenFlow.shape) not in backwarp_tenGrid:
        # normalized coordinate grid in [-1, 1] (the align_corners=True convention)
        tenHor = torch.linspace(-1.0, 1.0, tenFlow.shape[3]).view(1, 1, 1, -1).repeat(1, 1, tenFlow.shape[2], 1)
        tenVer = torch.linspace(-1.0, 1.0, tenFlow.shape[2]).view(1, 1, -1, 1).repeat(1, 1, 1, tenFlow.shape[3])

        backwarp_tenGrid[str(tenFlow.shape)] = torch.cat([ tenHor, tenVer ], 1).cuda()
    # end

    if str(tenFlow.shape) not in backwarp_tenPartial:
        # extra channel of ones, used below to detect out-of-bounds samples
        backwarp_tenPartial[str(tenFlow.shape)] = tenFlow.new_ones([ tenFlow.shape[0], 1, tenFlow.shape[2], tenFlow.shape[3] ])
    # end

    # scale the flow from pixels to normalized coordinates
    tenFlow = torch.cat([ tenFlow[:, 0:1, :, :] / ((tenInput.shape[3] - 1.0) / 2.0), tenFlow[:, 1:2, :, :] / ((tenInput.shape[2] - 1.0) / 2.0) ], 1)
    tenInput = torch.cat([ tenInput, backwarp_tenPartial[str(tenFlow.shape)] ], 1)

    # no align_corners argument: older PyTorch always behaves like align_corners=True
    tenOutput = torch.nn.functional.grid_sample(input=tenInput, grid=(backwarp_tenGrid[str(tenFlow.shape)] + tenFlow).permute(0, 2, 3, 1), mode='bilinear', padding_mode='zeros')

    # binarize the warped ones-channel into a validity mask
    tenMask = tenOutput[:, -1:, :, :]; tenMask[tenMask > 0.999] = 1.0; tenMask[tenMask < 1.0] = 0.0

    return tenOutput[:, :-1, :, :] * tenMask
# end
```
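To see why the appended ones-channel yields a validity mask, here is a minimal, self-contained CPU sketch of the same trick (the shift grid below is made up for illustration and is not from the repository):

```python
import torch
import torch.nn.functional as F

# a 4x4 image of ones, plus a channel of ones for out-of-bounds detection
img = torch.ones(1, 1, 4, 4)
inp = torch.cat([img, torch.ones_like(img)], 1)

# a sampling grid shifted half the image to the right, so the right half
# of each row reads from outside the input
xs = (torch.linspace(-1.0, 1.0, 4) + 1.0).view(1, 1, 4, 1).repeat(1, 4, 1, 1)
ys = torch.linspace(-1.0, 1.0, 4).view(1, 4, 1, 1).repeat(1, 1, 4, 1)
grid = torch.cat([xs, ys], 3)

out = F.grid_sample(inp, grid, mode='bilinear', padding_mode='zeros', align_corners=True)

# wherever the warped ones-channel dropped below 1.0, the sample came (at
# least partly) from the zero padding, so the pixel is marked invalid
mask = (out[:, -1:, :, :] > 0.999).float()
```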

@Etienne66
Contributor

Cool. You have piqued my curiosity, so I'll try that.

@Etienne66
Contributor

Etienne66 commented Mar 10, 2022

I was wrong about the NVIDIA CUDA Toolkit... it doesn't even need to be installed unless you want to compile the code. It looks like PyTorch provides the runtime libraries it needs.

It seems they center-cropped the images for training and didn't bother to test them without cropping.

| Cudatools / PyTorch / Conda | Sintel Clean EPE (training) | Sintel Final EPE (training) |
| --- | --- | --- |
| 8.0 / 0.2 / 2.7 | (1.81)¹ | (2.29) |
| 9.0 / 1.1.0 / 37_4.11 | 1.47732 | 1.93140 |
| 9.0 / 1.1.0 / 37_4.11 | 1.84321² | 2.27769² |
| 10.2 / 1.10.2 / 39_4.11 | 1.49756 | 1.94308 |
| 10.2 / 1.10.2 / 39_4.11 | 1.86661² | 2.29435² |
| 10.2 / 1.10.2 / 39_4.11³ | 1.84321² | 2.27769² |

Notes:
  1. Numbers in parentheses are from their paper for PWC-Net_ROB (Table 1).
  2. Cropped to 768x320 per their paper (Section 4.1.3).
  3. align_corners=True and the same linspace as was used for PyTorch 1.1.0.

I used the following to toggle between the two versions of linspace and grid_sample:

```python
from distutils.version import LooseVersion
# grid_sample's default changed to align_corners=False in PyTorch 1.3.0
if LooseVersion(torch.__version__) >= LooseVersion('1.3'):
```
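For reference, the two linspace conventions such a toggle switches between can be sketched as follows (the helper name is mine, not from the repository; the align_corners=False variant assumes the pixel-center grid that the current code appears to use):

```python
import torch

def make_grid_1d(width, align_corners):
    # align_corners=True:  -1 and 1 are the *centers* of the border pixels
    # align_corners=False: -1 and 1 are the *outer edges* of the border
    # pixels, so the border pixel centers sit at -1 + 1/W and 1 - 1/W
    if align_corners:
        return torch.linspace(-1.0, 1.0, width)
    else:
        return torch.linspace(-1.0 + 1.0 / width, 1.0 - 1.0 / width, width)
```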

@sniklaus
Owner

Oh wow, nice find and thanks for sharing this information!
