About model training #1
Can you give a specific example? Are you using XNet or another network, and is it semi-supervised or fully supervised segmentation? I suggest starting with the fully supervised UNet. If UNet cannot be trained, the problem may be the hyperparameters or the dataset.
Dear author, I am using the brain CT dataset (BraTS2018). Is it possible that the data should not be randomly cropped during data loading?
3D images are trained on patches. If two dataloaders are used to load the low-frequency and high-frequency images separately, there is no guarantee that the generated patches will match. You should use one dataloader to load them together, so that the patches generated for L, H, and the mask are identical (see dataload/dataset_3d.py, lines 163-172).
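The single-dataloader idea above can be sketched as follows: draw the random crop coordinates once and apply them to all three volumes. This is an illustrative sketch, not the repository's actual implementation in dataload/dataset_3d.py; the function name and patch size are made up for the example.

```python
import numpy as np

def random_patch_3d(low, high, mask, patch_size=(64, 64, 64), rng=None):
    """Crop the SAME random 3D patch from the low-frequency volume,
    the high-frequency volume, and the mask, so all three stay aligned."""
    if rng is None:
        rng = np.random.default_rng()
    assert low.shape == high.shape == mask.shape, "volumes must be aligned"
    d, h, w = low.shape
    pd, ph, pw = patch_size
    # Sample one set of corner coordinates and reuse it for every volume.
    z = rng.integers(0, d - pd + 1)
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    sl = (slice(z, z + pd), slice(y, y + ph), slice(x, x + pw))
    return low[sl], high[sl], mask[sl]
```

Using two independent dataloaders instead would sample two different `(z, y, x)` corners, so L and H patches (and the mask) would come from different locations of the volume.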
Thank you for your suggestion. I will implement it as you suggested.
Dear author, I made the changes you suggested, only to find that in the output, wt_dice trains normally while tc_dice and et_dice are exactly zero. What could cause this result?
Since your training produces normal losses and evaluation metrics, the overall training process should be correct. The error most likely lies in how tc_dice and et_dice are computed.
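One common cause of exactly-zero tc_dice and et_dice on BraTS data, while wt_dice stays normal, is a label-mapping mismatch: the standard BraTS labels are 1 (NCR/NET), 2 (edema), and 4 (ET), and some pipelines remap label 4 to 3 at load time. If the region definitions still look for label 4 afterwards, the TC and ET masks come out empty. A sketch of region-wise dice under these assumptions (the function names are illustrative, not this repository's code):

```python
import numpy as np

def dice(pred_bin, gt_bin, eps=1e-5):
    """Soft dice between two boolean masks."""
    inter = np.logical_and(pred_bin, gt_bin).sum()
    return (2.0 * inter + eps) / (pred_bin.sum() + gt_bin.sum() + eps)

def brats_region_dice(pred, gt):
    """Region-wise dice for BraTS-style labels 1 (NCR/NET), 2 (edema), 4 (ET).
    WT = {1, 2, 4}, TC = {1, 4}, ET = {4}. If your loader remaps 4 -> 3,
    these sets must be updated, otherwise TC/ET dice collapse toward zero."""
    regions = {"wt": (1, 2, 4), "tc": (1, 4), "et": (4,)}
    return {name: dice(np.isin(pred, list(lbls)), np.isin(gt, list(lbls)))
            for name, lbls in regions.items()}
```

Evaluating a prediction that uses remapped labels against these definitions reproduces the reported symptom: et_dice drops to near zero while wt_dice remains reasonable, so checking the label values actually present in `pred` and `gt` (e.g. with `np.unique`) is a quick first diagnostic.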
Has the author tested segmentation with a larger number of classes?
When I train with the BraTS2018 brain data, the output dice is 0, and I cannot find the reason. Can you help me?
Thank you very much