EPE in Sintel clean lower than the original #23
Thank you for your feedback! What is the EPE when you use the official repository? Specifically, the
Unfortunately I cannot run the official network. It requires CUDA 8.0, but on my cluster I am forced to use CUDA 10 (if you have any hint on how to make it work, it would be greatly appreciated). Could you help me clarify which version we are using? NOTE: my result of 1.81 actually seems to match the value reported in your link, so I think your network is working properly: https://github.com/NVlabs/PWC-Net/tree/master/PyTorch Thanks for your help,
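For readers checking this themselves, below is a minimal sketch of how the average end-point error (EPE) over a set of frame pairs is typically computed; the tensor layout and the averaging of per-frame means are assumptions made for illustration, not code from this repository.

```python
import torch

def average_epe(flow_preds, flow_gts):
    # flow_preds / flow_gts: lists of [2, H, W] tensors holding the u and v flow components
    # per-pixel end-point error is the L2 distance between predicted and ground-truth vectors
    per_frame = [torch.norm(pred - gt, p=2, dim=0).mean() for pred, gt in zip(flow_preds, flow_gts)]
    # the single reported number (e.g. 1.81) is the mean over all evaluated frame pairs
    return torch.stack(per_frame).mean()
```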
This comment might be useful: NVlabs/PWC-Net#26 (comment). However, I do not think that there is a corresponding entry in the paper, but please correct me if I am wrong; I am sure others would be happy to get an answer to this question as well! For a related discussion, please see: #9
Thanks for the high-quality references! I can confirm that your default version of PWC-Net corresponds to PWC-Net_ROB from the paper "Models Matter, So Does Training..." (https://arxiv.org/pdf/1809.05571.pdf).
Awesome, thank you for the clarification!
I managed to install cudatoolkit 9.2, but my card was too new for cudatoolkit 8.0, plus I was having to change my code too much for the older versions of PyTorch. I still couldn't reproduce the same numbers as in the paper, but at least I seem to be in the ballpark. It could be because I have CUDA 11.6.1 installed. I don't feel like downgrading that installation too, and there is no longer a Miniconda for Python 2.7 available. Older versions of
Note: numbers in parentheses are from the paper for PWC-Net_ROB.
Thanks for sharing your findings! So it seems like you got better results than the paper claims as long as you use a recent PyTorch version? I wonder why this is the case; should you find any issues in my implementation, please don't hesitate to let me know.
The difference in PyTorch versions seemed to be the

I just do this as a hobby. It's not like any of my results will be published in a paper, but I will put them out here on GitHub. My goal is to get a lot better performance than the paper by changing up the learning even more. I've learned a lot in the past year, though, and I still find this stuff interesting. If I find any more issues, I'll be happy to share.
You can try the following:
Cool. You have my curiosity, so I'll try that.
I was wrong about the NVIDIA CUDA Toolkit... it doesn't even need to be installed unless you want to compile the code; it looks like PyTorch provides the runtime libraries it needs.

It also seems they center-cropped the images for training and didn't bother to test them without cropping.
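As a side note on the center cropping mentioned above, here is a minimal sketch of what such a crop looks like; the 384x768 crop size and the 436x1024 Sintel frame size are assumptions used purely for illustration, not values taken from this thread.

```python
import torch

def center_crop(image, crop_height, crop_width):
    # image: [C, H, W] tensor; returns the centered crop_height x crop_width region
    _, height, width = image.shape
    top = (height - crop_height) // 2
    left = (width - crop_width) // 2
    return image[:, top:top + crop_height, left:left + crop_width]

# hypothetical usage on a Sintel-sized frame
frame = torch.rand(3, 436, 1024)
cropped = center_crop(frame, 384, 768)  # -> [3, 384, 768]
```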
Notes: I used the following to toggle between the two versions:

```python
import torch
from distutils.version import LooseVersion
if LooseVersion(torch.__version__) >= LooseVersion('1.3'):
    pass  # code path for PyTorch >= 1.3 goes here
```
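For context, one well-known behavioral difference that such a version check is often used to guard is torch.nn.functional.grid_sample, which gained an explicit align_corners argument in PyTorch 1.3 along with a new default. Whether this is the specific difference meant here is not stated in the thread, so the sketch below is only an illustration.

```python
import torch
import torch.nn.functional as F
from distutils.version import LooseVersion

def warp(features, grid):
    # features: [N, C, H, W], grid: [N, H, W, 2] sampling grid in [-1, 1]
    if LooseVersion(torch.__version__) >= LooseVersion('1.3'):
        # newer PyTorch: pass align_corners explicitly to keep the pre-1.3 behavior
        return F.grid_sample(features, grid, mode='bilinear',
                             padding_mode='border', align_corners=True)
    # older PyTorch: align_corners did not exist and behaved as if it were True
    return F.grid_sample(features, grid, mode='bilinear', padding_mode='border')
```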
Oh wow, nice find and thanks for sharing this information!
Dear @sniklaus, thanks for this nice implementation. I find it really useful that I don't have to worry about CUDA versioning.
I am writing a paper which uses this implementation, and to be sure of it I have checked the EPE on Sintel training clean. I got 1.81, which is much lower than what is stated by the authors.
Is this network fine-tuned on Sintel? Does my result make sense?
Thanks,
Stefano