
For a number of subjects there are ZERO segmentations #4

Closed
kondratevakate opened this issue Jul 25, 2022 · 1 comment

kondratevakate commented Jul 25, 2022

Dear all,

Thank you for the dataset and code, I could imagine what a hard job it was to deliver that.

We are reproducing your pipeline and found that some subjects get a DICE of zero at inference.
We have been trying to track down the failing segmentations for a while; maybe you have already faced this. Our approach is to (1) reproduce your pipeline and then (2) train nnU-Net with custom data preprocessing.

(1) While reproducing your pipeline on T1 subjects, we found that the network does not predict any tumour in 103 subjects. The overall inference quality is around 0.3 DICE, while during training the quality reaches 0.9 DICE.

By inference, I mean predicting the whole dataset after the network is trained.
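For reference, the zero scores correspond to the Dice overlap dropping to 0 whenever the network predicts no tumour voxels at all while the ground truth is non-empty. A minimal pure-Python sketch (the `dice` helper is our own illustration, not code from the repository):

```python
def dice(pred, gt):
    """Dice overlap between two binary masks given as sets of voxel indices."""
    if not pred and not gt:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2 * len(pred & gt) / (len(pred) + len(gt))

gt = {(1, 2, 3), (1, 2, 4), (1, 3, 3)}       # ground-truth tumour voxels
print(dice(set(), gt))                       # 0.0: the "zero DICE" failure case
print(dice({(1, 2, 3), (1, 2, 4)}, gt))      # partial overlap: 0.8
```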

(2) When we trained nnU-Net with preprocessing similar to yours on T1 and T2, we got DICE 0.83, with ~9 subjects predicted at DICE 0.

We also trained nnU-Net from MONAI on T1 data after your preprocessing, and again 103 subjects were poorly predicted (not DICE 0, but around DICE 0.4).

During inference we reduced the sliding window size to

```python
self.sliding_window_inferer_roi_size = [128, 128, 32]
```

This was reduced to fit into GPU memory.
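For context, sliding-window inference tiles the volume with overlapping patches of the ROI size. A rough stdlib-only sketch of how window start positions could be computed along one axis (the helper name and the 25% overlap default are our assumptions, not the actual MONAI implementation):

```python
import math

def window_starts(dim_size, roi, overlap=0.25):
    """Start indices of sliding windows along one dimension."""
    step = max(1, int(roi * (1 - overlap)))
    if dim_size <= roi:
        return [0]  # a single window covers the whole axis
    n = math.ceil((dim_size - roi) / step) + 1
    # clamp the last window so it ends exactly at the volume edge
    starts = [min(i * step, dim_size - roi) for i in range(n)]
    return sorted(set(starts))

# e.g. a 256-voxel axis covered by 128-wide windows with 25% overlap
print(window_starts(256, 128))  # [0, 96, 128]
```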

Maybe you can guide us: why do some subjects get zero DICE at inference? Could it be because you were training and predicting on the whole-size image?
@aaronkujawa
Collaborator

Hi,

thanks for your interest and questions!
I'm not sure if you changed anything else in the code, but if you change the sliding_window_inferer_roi_size, you should change the patch size (pad_crop_shape) at training time accordingly. Otherwise, you will feed much smaller patches to the network at inference time than at training time, and that can lead to bad performance.
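As a concrete illustration of that consistency check, one could assert that the inference ROI matches the training patch size before running inference. The attribute names follow `sliding_window_inferer_roi_size` and `pad_crop_shape` from this thread, but the config class itself is hypothetical:

```python
class Config:
    """Hypothetical training/inference settings, for illustration only."""

    def __init__(self, pad_crop_shape, sliding_window_inferer_roi_size):
        self.pad_crop_shape = pad_crop_shape
        self.sliding_window_inferer_roi_size = sliding_window_inferer_roi_size

    def check_consistent(self):
        # Inference patches should match the size seen during training;
        # otherwise the network receives inputs it was never trained on.
        if list(self.pad_crop_shape) != list(self.sliding_window_inferer_roi_size):
            raise ValueError(
                f"ROI size {self.sliding_window_inferer_roi_size} differs from "
                f"training patch size {self.pad_crop_shape}"
            )

cfg = Config([128, 128, 32], [128, 128, 32])
cfg.check_consistent()  # passes: sizes agree
```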

We have recently trained models with nn-UNet on this GK dataset, and also on a new multi-center dataset. The corresponding paper is not out yet, but you can find the models here:
https://zenodo.org/record/6827679#.Yt_vKi1Q3-Y
