
Synthseg+ training #65

Closed
gres05 opened this issue Sep 14, 2023 · 5 comments


gres05 commented Sep 14, 2023

Hi,

Firstly, thank you for all the work you've done! Really interesting stuff

I'm trying to create my own model of Synthseg+ by following the tutorial 7 steps. I'm having a bit of difficulty with creating a dataset to train the denoiser, specifically with the sample_segmentation_pairs_d.py script. Do you have any further documentation or examples as to how you obtain the input/target segmentations for the denoiser?

Thank you

BBillot (Owner) commented Sep 14, 2023

Hi,

you will have to be a little bit more specific if you want me to help you ;)
About the documentation: pretty much all the info is either in the code/tutorials or in the paper. You can also have a look at the following issues, where I had a fair bit of re-explaining to do because someone was trying to make the tutorial 7 work but was not using the right inputs.

#45 "ValueError: axes don't match array" in sample_segmentation_pairs_d.py
#46 ValueError: axes don't match array in prediction of S1 unit at 7-synthseg+.py

gres05 (Author) commented Sep 15, 2023

Thanks for the quick reply. I'm getting an error when I attempt to create an instance of the RandomSpatialDeformation layer; please see the screenshot attached.

my inputs to the sample_segmentation_pairs_d function are:

image_dir = './path_to_images_folder'
labels_dir = ['./path_to_labels_folder']
results_dir = './path_to_results_folder'
n_examples = 5
path_model = "./dice_100.h5" (trained s1 model)
segmentation_labels = "segmentation_labels.npy"
n_neutral_labels = None

please let me know if I can provide more info! Thanks

[Screenshot: error raised in sample_segmentation_pairs_d when instantiating RandomSpatialDeformation]
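Before running the script, it can help to sanity-check arguments like the ones listed above. The helper below is hypothetical (not part of SynthSeg) and assumes the directory/model arguments are plain string paths and that the label list is a 1-D `.npy` array, as in the tutorials:

```python
# Hypothetical pre-flight check for the arguments listed above; this is
# not part of SynthSeg, just a sketch of what to verify before calling
# sample_segmentation_pairs_d.
import os
import numpy as np

def check_args(image_dir, labels_dir, path_model, segmentation_labels):
    """Return a list of problems found with the arguments (empty if none)."""
    problems = []
    for name, path in [("image_dir", image_dir),
                       ("labels_dir", labels_dir),
                       ("path_model", path_model)]:
        if not os.path.exists(path):
            problems.append(f"{name} does not exist: {path}")
    try:
        labels = np.load(segmentation_labels)
        if labels.ndim != 1:
            problems.append("segmentation_labels should be a 1-D array of label values")
    except OSError as e:
        problems.append(f"could not load segmentation_labels: {e}")
    return problems
```

An empty return value means the paths exist and the label file loads; it does not, of course, guarantee the contents match the model that was trained.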

BBillot (Owner) commented Oct 9, 2023

Sorry for the very late answer...
You have an old version of the code, try pulling the latest version :)

gres05 (Author) commented Oct 12, 2023

Thank you for the response, I did manage to get there myself!

I have another issue with the outputs of the sample_segmentation_pairs_d function if you could please provide some more assistance.

I am using the same parameters as outlined above, but I'll repost here:
image_dir = './path_to_image.nii.gz'
labels_dir = './path_to_corresponding_label.nii.gz'
results_dir = './path_to_results_folder'
n_examples = 5
path_model = "./dice_100.h5" (trained s1 model)
segmentation_labels = "./segmentation_labels_s1.npy" (used to train s1 model)
n_neutral_labels = None

When generating results, I get some paired outputs that look appropriate for training the denoiser, and others that are pure noise (see below). My guess is that some parameter used to build the augmentation model, when randomly sampled outside a particular range, is causing the noisy output. However, I'm not sure which parameter it might be; I've tried adjusting the upper/lower bounds of the augmentation parameters to get consistently appropriate outputs, but with no luck so far.

Do you have any view on what may be causing the noisy output? Any help would be greatly appreciated, thanks!

[Screenshot: appropriate output pair]

[Screenshot: noisy output pair]
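One way to find out how common the noisy pairs are is to score the overlap between each generated input/target pair: a pair whose labels barely agree is likely one of the pure-noise samples. The sketch below is an illustrative screening heuristic of mine, not anything from SynthSeg, and the 0.2 threshold is an arbitrary assumption to tune:

```python
# Hypothetical screening heuristic: flag generated input/target
# segmentation pairs whose mean per-label Dice overlap is implausibly low.
import numpy as np

def dice(a, b, label):
    """Dice coefficient of one label between two integer label maps."""
    a_mask, b_mask = (a == label), (b == label)
    denom = a_mask.sum() + b_mask.sum()
    if denom == 0:
        return 1.0  # label absent from both maps: count as perfect agreement
    return 2.0 * np.logical_and(a_mask, b_mask).sum() / denom

def mean_dice(a, b):
    """Mean Dice over all non-background labels present in either map."""
    labels = np.union1d(np.unique(a), np.unique(b))
    labels = labels[labels != 0]  # ignore background
    return float(np.mean([dice(a, b, l) for l in labels])) if len(labels) else 1.0

def looks_noisy(input_seg, target_seg, threshold=0.2):
    """Heuristic: near-zero overlap suggests a noise sample."""
    return mean_dice(input_seg, target_seg) < threshold
```

Running this over the generated pairs gives a quick count of suspect samples without eyeballing every image.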

BBillot (Owner) commented Dec 21, 2023

again, sorry for the late answer.
I also got some pretty messed-up segmentations with the default parameters, but never like that. The worst ones are often almost empty, but I never observed those patterns. Then again, it depends on the trained and fixed network that you used to obtain those segmentations: different trainings lead to different networks that respond differently to extremely augmented images.
But is it really a bad thing? The denoiser might need to see some of those very bad examples during training, because even if they're hopeless to "denoise" (i.e. reconstruct), they could still help the denoiser learn what good and bad segmentations look like. So I suggest keeping those guys in for now, and seeing how removing them affects the training of the denoiser (accuracy, stability, etc.).
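The suggested comparison (train the denoiser once with all pairs, once with the suspicious ones removed) only needs a small partitioning helper. The names below are mine, not SynthSeg's, and `is_noisy` stands for whatever screening heuristic you trust:

```python
# Illustrative helper for the suggested A/B experiment: split the
# generated (input, target) pairs into a kept set and a flagged set,
# so the denoiser can be trained on each and the results compared.
def split_pairs(pairs, is_noisy):
    """Partition (input, target) pairs for an ablation of the training set."""
    kept, flagged = [], []
    for inp, tgt in pairs:
        (flagged if is_noisy(inp, tgt) else kept).append((inp, tgt))
    return kept, flagged
```

Training on `kept + flagged` versus `kept` alone then isolates the effect of the bad samples on accuracy and stability.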

Let me know how it goes
Best
Benjamin

BBillot closed this as completed Feb 12, 2024