Aggregator smaller model outputs - Ready to be reviewed #1002
base: main
Conversation
- check patch_size v patch_size in init
- hann
- check _crop_patch for Volume_padded and if overlap is bigger than patch_diff
Codecov Report
```
@@            Coverage Diff             @@
##             main    #1002      +/-   ##
==========================================
+ Coverage   86.47%   86.51%   +0.03%
==========================================
  Files          91       91
  Lines        5774     5798      +24
==========================================
+ Hits         4993     5016      +23
- Misses        781      782       +1
```
Commits:

- Fixed for average but other crop and hann broken
- check patch_size v patch_size in init
- hann
- check _crop_patch for Volume_padded and if overlap is bigger than patch_diff
- changed crop_patch and hann window to depend on patch_diffs and model_output_size
- Expanded test_inference for different overlap modes
- Tests now working except for hann
- add docstring
- …ahabk/torchio into 1001-aggregator-smaller-output
This fixes #1001. I believe I have described the issue in detail in #1001. I think this PR is ready to be merged except for the final questions. I'm still getting my feet wet with PRs, so I would appreciate some guidance.

TLDR for why this is important: I believe tio grid sampling and aggregating should be able to handle model outputs that are smaller than the input. My model predictions are terrible even with averaging or Hann windowing. Unfortunately, most popular model libraries (such as the great MONAI) only provide models whose output size matches the input size. But it is crucial in my application to let the model see a bigger input ROI than the semantic label output, by using unpadded (valid) convolutions, as this gives context for the prediction. The original U-Net paper uses unpadded convolutions, so its outputs are smaller than its inputs.

Description

Summary of changes:
Final questions before you agree:
Checklist
@fepegar This is ready now, whenever you have time to review. Please read the second comment; the first was the draft PR. I tried squashing my commits using the tutorial: did it work correctly? If you have any edits or requests in mind, let me know and I will update the PR.
Hi, @wahabk. Thanks for the PR. Apologies for the delay. I will take a look at all this as soon as possible.
```python
from ...utils import TorchioTestCase


class TestInference(TorchioTestCase):
```
Maybe we can add a test in the other file instead of creating this file which looks very similar.
```python
for overlap_mode in [
    'average',
    'crop',
]:  # not checking for 'hann' since the assertion fails
```
Do we know why the assertion fails?
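One plausible cause, sketched here in 1-D with plain NumPy (a hypothetical illustration, not the torchio implementation): a Hann window is zero at its endpoints, so the accumulated weights can be exactly zero at the volume borders, and the reconstructed values there cannot match the raw predictions under an exact-equality assertion.

```python
import numpy as np

# Hann-weighted blending in 1-D: each patch prediction is multiplied by a
# window, accumulated, and finally divided by the accumulated weights.
length, patch, step = 12, 6, 3
window = np.hanning(patch)          # zero at both endpoints

values = np.zeros(length)
weights = np.zeros(length)
for ini in range(0, length - patch + 1, step):
    prediction = np.ones(patch)     # pretend the model predicts all ones
    values[ini:ini + patch] += prediction * window
    weights[ini:ini + patch] += window

# Where the weights are zero (window endpoints that fall on the volume
# border), the division is undefined and the output stays at the fill
# value -- so the reconstruction differs from the prediction there.
result = np.divide(values, weights, out=np.zeros(length), where=weights > 0)
```

In this sketch `result` is exactly 1 in the interior but 0 at the two border voxels, which would make a whole-volume equality check fail.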
```python
self.padding_mode = 'constant'
self.patch_diffs = (
    np.array(patch_size) - np.array(self.model_output_size)
) // 2
```
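To illustrate the question above with hypothetical numbers: if the input/output difference is odd, the floor division silently drops a voxel, so the two halved borders no longer reconstruct the patch size.

```python
import numpy as np

patch_size = np.array([64, 64, 64])
model_output_size = np.array([61, 61, 61])  # difference of 3: not even

diff = patch_size - model_output_size
patch_diffs = diff // 2                     # floor division: 3 // 2 == 1

# The two halves no longer add up to the original difference, so patch
# placement would silently drift by one voxel per axis.
reconstructed = model_output_size + 2 * patch_diffs
print(reconstructed.tolist())               # [63, 63, 63] instead of [64, 64, 64]

# One possible guard (hypothetical, not part of the PR):
if np.any(diff % 2):
    print('warning: input/output size difference is not divisible by 2')
```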
What happens if the differences are not divisible by 2?
```python
# but remove input and output patch_diffs
# If patch_overlap is not bigger than patch_diffs.
# No need for cropping if output size is smaller
crop_ini = half_overlap.copy() - patch_diffs
```
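A numerical sketch of the subtraction, with assumed values (`patch_overlap = 8` and per-side `patch_diffs = 2`; these numbers are for illustration, not taken from the PR): since the smaller model output has already lost `patch_diffs` voxels per border, only the remainder of the half-overlap still needs cropping.

```python
import numpy as np

patch_overlap = np.array([8, 8, 8])
half_overlap = patch_overlap // 2   # 4 voxels to trim per border
patch_diffs = np.array([2, 2, 2])   # per-side voxels the model already discarded

# The model output is smaller than the input patch, so patch_diffs voxels
# per border are gone before cropping; only the remainder must be cropped.
crop_ini = half_overlap - patch_diffs
print(crop_ini.tolist())            # [2, 2, 2]
```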
Shouldn't this add instead of subtract? Maybe I've misunderstood.
```python
target_spatial_shape = tuple(patch_sizes[0])
if input_spatial_shape != target_spatial_shape:
    # Should target size be self.patch_size?
    target_spatial_shape = tuple(
```
Why is it a tuple with only one element?
This references #1001.

I believe I have described the issue and the planned fix in detail; this is a draft PR to ask for some help.

I have made most of the changes, I believe, but this is a broken draft to show you my idea. Let me know if the assertions are insufficient or if argument/attribute names are unclear.

The bit I'm confused about is the subject padding: the subject is now forced to be padded, the padding is set to be at least the difference between input and output, yet the final aggregator output tensor isn't padded.
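As a reference for the sizes discussed here, the mapping between an input patch and a smaller model output can be sketched with the example numbers that follow (a hypothetical illustration, not PR code):

```python
import numpy as np

patch_size = np.array([64, 64, 64])
model_output_size = np.array([60, 60, 60])

# Total shrinkage per axis, and the per-side border the model discards.
patch_difference = patch_size - model_output_size  # (4, 4, 4) in total
per_side = patch_difference // 2                   # (2, 2, 2) per border

# A prediction therefore corresponds to the centre of its input patch:
# the two discarded borders plus the output reconstruct the patch size.
assert (2 * per_side + model_output_size == patch_size).all()
```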
So say I have an input tensor with shape `(128, 128, 128)`, a patch_size of `(64, 64, 64)`, and a model output size of `(60, 60, 60)`. The patch_difference is set to be `(4, 4, 4)` and the overlap is made to be at least equal to this. When the subject is padded in the `GridSampler`, does that mean its shape is now `(136, 136, 136)`?

Description
#1001
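The padding question above reduces to quick arithmetic; this sketch assumes, as the question does, that the sampler pads by the full input/output difference on each side:

```python
import numpy as np

input_shape = np.array([128, 128, 128])
patch_size = np.array([64, 64, 64])
model_output_size = np.array([60, 60, 60])

patch_difference = patch_size - model_output_size  # (4, 4, 4)

# Padding by the full difference on every side grows the subject by
# twice that amount along each axis.
padded_shape = input_shape + 2 * patch_difference
print(padded_shape.tolist())  # [136, 136, 136]
```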
Checklist
- I have read the CONTRIBUTING docs and have a developer setup (especially important are `pre-commit` and `pytest`)
- Tests pass locally with `pytest`
- Docs render correctly after running `make html` inside the `docs/` folder