
Where is the code for Global-to-Global matching? / Why not use pretrained model for TinyFlowNet? #4

Closed
shoutOutYangJie opened this issue Apr 12, 2021 · 3 comments
Labels: good first issue

Comments


shoutOutYangJie commented Apr 12, 2021

I want to reproduce your results. Could you tell me how many GPUs you trained the model on?
Thank you.

shoutOutYangJie (Author)

Another question: your TinyFlowNet is also trained end-to-end. Why not use a pretrained model? I mean, first train a tiny FlowNet, then freeze it while training RMNet.

shoutOutYangJie (Author)

A third question: in your code, you still use global-to-global memory matching, which seems strange.

@shoutOutYangJie shoutOutYangJie changed the title nice work. Can you tell me about how many GPUs you train the model on? some questions about RMnet. Apr 12, 2021
@hzxie hzxie changed the title some questions about RMnet. Where is the code for Global-to-Global matching? / Why not use pretrained model for TinyFlowNet? Apr 12, 2021
@hzxie hzxie added the good first issue Good for newcomers label Apr 12, 2021
hzxie (Owner) commented Apr 12, 2021

Thanks for your interest in our work.

  1. At first, we used four 1080 Ti GPUs. As memory usage increased (because our method became more complex), we switched to two V100 (32 GB) GPUs.
  2. Actually, the weights of TinyFlowNet are partially borrowed from FlowNet-S and then fine-tuned on the precomputed optical flow (see also issue #3, "The problem of the precomputed optical flow").
  3. Matrix multiplication is highly optimized in PyTorch. We implemented an extension for local-to-local matching with the PyTorch API and cuBLAS, but it turned out slower than PyTorch's built-in matrix multiplication. Moreover, we found that PyTorch handles our situation well, where the matrices contain many zeros.
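Point 2 (initializing TinyFlowNet from FlowNet-S weights) can be sketched as a partial state-dict copy; the toy networks and layer names below are stand-ins for illustration, not the actual RMNet code:

```python
import torch
import torch.nn as nn

def load_partial_weights(model, pretrained_state):
    """Copy into `model` only those pretrained tensors whose names
    and shapes match; leave the rest at their fresh initialization."""
    own = model.state_dict()
    matched = {k: v for k, v in pretrained_state.items()
               if k in own and v.shape == own[k].shape}
    own.update(matched)
    model.load_state_dict(own)
    return sorted(matched)  # names of the tensors actually copied

# Toy stand-ins: a "big" net and a "tiny" net sharing one layer name.
big = nn.Sequential()
big.add_module("conv1", nn.Conv2d(3, 8, 3))
big.add_module("conv2", nn.Conv2d(8, 16, 3))
tiny = nn.Sequential()
tiny.add_module("conv1", nn.Conv2d(3, 8, 3))

copied = load_partial_weights(tiny, big.state_dict())
print(copied)  # ['conv1.bias', 'conv1.weight']
```

After this partial initialization, the small network is fine-tuned on the target data (here, the precomputed optical flow) rather than trained from scratch.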
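Point 3 can be illustrated with a minimal sketch of global-to-global matching expressed as one dense batched matmul (an STM-style affinity between every memory location and every query location); the tensor names and shapes are illustrative assumptions, not the repository's actual implementation:

```python
import torch

def global_matching(memory_key, query_key):
    """Global-to-global matching as a single batched matrix multiply.

    memory_key: (B, C, T*H*W) -- keys of all memory frames, flattened
    query_key:  (B, C, H*W)   -- key of the query frame, flattened

    Returns a (B, T*H*W, H*W) affinity in which every memory location
    is compared against every query location, softmax-normalized over
    the memory dimension.
    """
    # One dense matmul; torch.bmm is cuBLAS-backed and highly
    # optimized, which is why it can beat a hand-written kernel.
    affinity = torch.bmm(memory_key.transpose(1, 2), query_key)
    return torch.softmax(affinity, dim=1)

B, C, T, H, W = 1, 8, 2, 4, 4
mem = torch.randn(B, C, T * H * W)
qry = torch.randn(B, C, H * W)
w = global_matching(mem, qry)
print(w.shape)  # torch.Size([1, 32, 16])
```

A restricted local-to-local variant would only compare each query location with a small memory neighborhood; as the reply notes, a custom kernel for that can still lose to the single dense matmul above.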

@hzxie hzxie closed this as completed Apr 12, 2021