add warp layer #1452
Note that this is related to #789, #189 and #95. There is already an initial implementation of higher-order resampling, but it needs more testing and use-case examples.
Thank you for the useful information. I will try to figure out the best way to integrate both the pytorch and MONAI implementations into the layer.
It would be super helpful if anyone could answer the following questions about the MONAI implementation of resampling:
Thanks in advance. @tvercaut
@brudfors is probably the best person to advise on this implementation, as it stemmed from https://github.com/balbasty/nitorch
I am happy to try to answer your questions. The grid is defined in the target/output space and should specify the coordinates to sample in the input image. The grid coordinates should be defined in the voxel space of the input image. Here is some sample code that may, or may not, shed some more light on how it's used: https://github.com/balbasty/nitorch/blob/master/demo/demo_spatial.ipynb
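For illustration, this convention can be sketched on the CPU with `scipy.ndimage.map_coordinates` (used here as a stand-in for nitorch's resampling, since both take coordinates in the input image's voxel space): the grid has the shape of the output, and each output voxel stores the input-voxel coordinate it should sample.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Input image: a 4x4 ramp.
img = np.arange(16, dtype=float).reshape(4, 4)

# Grid for a 2x2 output: each output voxel holds the input-voxel
# coordinate to sample (here the top-left 2x2 block of the input).
ys, xs = np.meshgrid([0.0, 1.0], [0.0, 1.0], indexing="ij")
coords = np.stack([ys, xs])  # shape (ndim, H_out, W_out)

out = map_coordinates(img, coords, order=1)
print(out)  # samples img at (0,0), (0,1), (1,0), (1,1) -> [[0. 1.] [4. 5.]]
```

Note the output shape follows the grid, not the input; this is the same "grid defined in output space" convention described above (pytorch's `grid_sample` differs only in that it expects normalized coordinates).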
@brudfors Thank you for the information, that completely answers my questions.
By the way, @brudfors, is there a computational time comparison somewhere between pytorch and nitorch?
Hi @tvercaut, the Colab notebook below shows the speed-up of GPU resampling compared to CPU: https://colab.research.google.com/drive/1qICvEDn-p8RnmaG-9v0sZu__6OQafM4A?usp=sharing It is quite substantial.
Thanks. I was more interested in a GPU comparison of pytorch vs nitorch.
Oh yes, sorry, I added that comparison to the notebook as well: vanilla pytorch and nitorch seem to perform similarly, but there is probably a more thorough validation that could be done. |
I am reopening this issue following the comment above and the note from @wyli here: "I would suggest only using …". @kate-sann5100: can you adapt your warp layer accordingly?
nitorch does differentiable splatting, not sure if pytorch does? |
@tvercaut That sounds promising. I will implement that. Thank you for the advice. |
@brudfors I think there is no splatting function in pytorch. But in the Warp layer, the warp output should have the same shape as the input image, so splatting is not required. |
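To clarify the distinction being discussed, here is a purely illustrative 1-D sketch in plain NumPy with integer (nearest-neighbour) locations: sampling ("pull", what `grid_sample` does) gathers input values at given coordinates, while splatting ("push") scatter-adds them into the output, the adjoint operation that core pytorch does not provide.

```python
import numpy as np

src = np.array([1.0, 2.0, 3.0])  # input values
idx = np.array([0, 0, 2])        # sampling/splatting locations

# Sampling ("pull"): output[i] = src[idx[i]] -- a gather.
pulled = src[idx]

# Splatting ("push"): output[idx[i]] += src[i] -- a scatter-add.
pushed = np.zeros(3)
np.add.at(pushed, idx, src)

print(pulled)  # [1. 1. 3.]
print(pushed)  # [3. 0. 3.]
```

Since the Warp layer only needs the pull direction (output on the same lattice as the grid), the absence of splatting in pytorch is not a blocker here, as noted above.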
You need to ensure that the environment variable USE_COMPILED is set to true in whatever terminal/process is compiling your MONAI, so it depends on how you are compiling MONAI, I guess. I use PyCharm and its built-in terminal, so I just set the environment variable within my PyCharm MONAI project.
the flag is defined here: MONAI/monai/config/deviceconfig.py, line 34 (commit 91cb8cd)
it'll be `True` when the env variable `BUILD_MONAI=1` is set for both installing and running MONAI; for example, with MONAI installed via `BUILD_MONAI=1 python setup.py develop --user` and then …
@brudfors I was expecting to get …
Sorry for the slow reply. For interpolation order greater than 1, a pre-filtering step is needed to determine the b-spline coefficients. This pre-filtering is not yet implemented in nitorch. For certain applications, these coefficients are not needed (e.g., when the resampling is part of some forward model, and the coefficients are implicitly found when inverting the model). So depending on your particular use-case, interpolation > 1 might not work. |
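The effect of the missing pre-filtering step can be demonstrated with `scipy.ndimage.map_coordinates`, which does implement it (via its `prefilter` flag): without first computing the b-spline coefficients, cubic interpolation does not even reproduce the data at the voxel centres.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# A 1-D image with a unit impulse at voxel 2.
img = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
coords = np.array([[2.0]])  # sample exactly at that voxel centre

# With pre-filtering, cubic (order-3) interpolation reproduces the data.
with_pf = map_coordinates(img, coords, order=3, mode="mirror", prefilter=True)

# Without it, the cubic b-spline kernel merely smooths the data:
# the sample is B3(0)*1 = 2/3 instead of 1.
without_pf = map_coordinates(img, coords, order=3, mode="mirror", prefilter=False)

print(with_pf, without_pf)  # ~1.0 vs ~0.667
```

This is the pre-filtering that, per the comment above, is not yet implemented in nitorch, which is why interpolation orders above 1 may not behave as expected there.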
For the record, in case we revisit higher-order interpolation: as the nitorch-based implementation only partially addresses the need, we have two options:
I think it makes sense to keep this issue closed and follow up on higher-order interpolation modes in #789.
Is your feature request related to a problem? Please describe.
The current repository does not support warping based on a dense displacement field (DDF), which is indispensable for registration.
Describe the solution you'd like
Add a Warp layer.
Describe alternatives you've considered
N/A
Additional context
DeepReg implementation of Warp layer
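For reference, the requested behaviour can be sketched on the CPU with `scipy.ndimage.map_coordinates`: add the DDF (in voxel units) to an identity grid and sample the input image there. The function name `warp` and the `(ndim, *spatial)` DDF layout are illustrative assumptions here, not MONAI's or DeepReg's actual API.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(img, ddf, order=1):
    """Warp img by a dense displacement field (in voxel units).

    ddf has shape (ndim, *img.shape): per-voxel displacement added
    to the identity sampling grid (illustrative layout).
    """
    grid = np.stack(np.meshgrid(
        *[np.arange(s, dtype=float) for s in img.shape], indexing="ij"))
    return map_coordinates(img, grid + ddf, order=order)

img = np.arange(16.0).reshape(4, 4)
ddf = np.zeros((2, 4, 4))
ddf[1] = 1.0  # shift the sampling location one voxel along x

out = warp(img, ddf)
print(out[0])  # row 0 sampled at x+1 (out-of-bounds -> 0): [1. 2. 3. 0.]
```

Note the output has the same shape as the input image, matching the Warp-layer requirement discussed in the comments above; a torch version would use `grid_sample` with the DDF converted to normalized coordinates.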