Another question: your TinyFlowNet is also trained from scratch. Why not use a pretrained model? That is, first train TinyFlowNet, then freeze it while training RMNet.
Third question: in your code, you still use global-to-global memory matching. That seems strange.
shoutOutYangJie changed the title from "nice work. Can you tell me about how many GPUs you train the model on?" to "some questions about RMnet." on Apr 12, 2021
hzxie changed the title from "some questions about RMnet." to "Where is the code for Global-to-Global matching? / Why not use pretrained model for TinyFlowNet?" on Apr 12, 2021
Matrix multiplication is highly optimized in PyTorch. We implemented an extension for local-to-local matching using the PyTorch API and cuBLAS, but it ran slower than PyTorch's built-in matrix multiplication. Moreover, we found that PyTorch handles our situation well, where the matrices contain many zeros.
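For reference, global-to-global matching is just a dense similarity between every memory location and every query location, which reduces to a single matrix multiplication (in PyTorch this would be one `torch.bmm`/`torch.matmul` over flattened key features). Below is a minimal pure-Python sketch of that idea; the function name and toy shapes are illustrative, not from the repository:

```python
def global_matching(mem_keys, query_keys):
    """Dense (global-to-global) matching: dot-product similarity between
    every memory location and every query location.

    mem_keys:   list of feature vectors (each of length C), one per
                memory location (e.g. T*H*W of them)
    query_keys: list of feature vectors (each of length C), one per
                query location (e.g. H*W of them)
    Returns an affinity matrix of shape (len(mem_keys), len(query_keys)).
    """
    return [
        [sum(m * q for m, q in zip(mk, qk)) for qk in query_keys]
        for mk in mem_keys
    ]

# Toy example: 2 memory locations, 2 query locations, C = 2.
affinity = global_matching([[1.0, 0.0], [0.0, 1.0]],
                           [[2.0, 3.0], [4.0, 5.0]])
print(affinity)  # [[2.0, 4.0], [3.0, 5.0]]
```

Because the whole operation is one dense matmul, it maps directly onto the heavily tuned GEMM kernels PyTorch dispatches to, which is why a hand-written local-matching extension can end up slower despite doing less arithmetic.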
I want to reproduce your results. Could you tell me how many GPUs you trained the model on?
Thank you.