New multiclass model for segmenting the spinal cord and gray matter on 7T data #841
We have a tutorial for "how to train a model, single class, on a toy dataset", here. I'm wondering how we could build on top of this and make it friendly to the Marseille folks.
Hi Julien and Charley, Thank you for opening a discussion on this topic. Your CNN model included in SCT works very well for GM segmentation (sct_deepseg_gm command), but the SC delineation (sct_deepseg_sc command) suffers from the 7T contrast and seems to have room for improvement. We would like to share this model with you and check whether it overfits our data. If you are interested, as suggested by Julien, we can redo a set of trainings with the ivadomed toolbox, following your instructions.

On our side, we experimented with the data augmentation part, in particular with the torchio module (https://github.com/fepegar/torchio), a very interesting tool for simulating realistic spatial transformations and MRI artefacts (like motion, ghosting, etc.). We could also share this experience with you. The next step is to decide whether we should do a multi-class training, or first follow your ivadomed tutorial adapted for one class (on the WM class in priority).

Below is an archive containing a sample of 3 slices merged into a volume, our MCS model (.ft format), and the results of the SCT and MCS 7T segmentations. I hope to continue the discussion with you and Nilser, and thanks for all your tools, which help us a lot. Best
That would be awesome indeed! The multiclass-segmentation repository is fairly old and is no longer maintained (I just archived it and added a disclaimer to use ivadomed instead). Some of the benefits of using ivadomed include:
This is a great library indeed! We are also using it in some projects (eg: MS lesion segmentation).
Great question. The multi-class approach seems relevant. The closest project I can think of is the WM/GM multiclass segmentation from ex vivo high-resolution data. The repository includes a config file. One cool feature of this project is that it uses SoftSeg, which accounts for the partial volume effect in the estimated segmentation. However, it seems that your multiclass (GM/SC) approach includes voxels that belong to two output classes (GM and SC). This is the reason we went for a multiclass WM/GM model (ie: the sum of all classes equals one for each voxel in the image).
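The distinction above (exclusive WM/GM classes vs. overlapping GM/SC classes) can be illustrated with toy soft masks; a minimal stdlib sketch with illustrative values:

```python
# Toy soft segmentations for four voxels (illustrative values only).
wm = [0.9, 0.6, 0.1, 0.0]
gm = [0.1, 0.4, 0.9, 1.0]

# Exclusive WM/GM multiclass: the class probabilities partition each voxel,
# so they must sum to one everywhere in the cord.
exclusive = all(abs(w + g - 1.0) < 1e-9 for w, g in zip(wm, gm))

# GM/SC parameterization instead: every GM voxel is also a cord voxel,
# so SC contains GM and the two classes overlap (their sum exceeds 1
# inside the gray matter).
sc = [min(w + g, 1.0) for w, g in zip(wm, gm)]
overlapping = any(g + s > 1.0 for g, s in zip(gm, sc))
```

This is only a numerical illustration of the two labeling conventions, not ivadomed code.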
Hello Julien and Charley, Today I did a single-class training for SC and GM (result attached; GT: green, DL_Seg: red).
(I'm starting to use ivadomed, so sorry for any basic errors.) Best, Nilser
Hi @Nilser3, It was great to meet you yesterday, and thank you for following up on this.
Yes, you are correct: this is a random split, defined by a
Yes. For your application you can ignore
Absolutely, you could add a transformation called
Looking at the config file you shared with us @Nilser3, the issue may be coming from the transformations. In particular:
--> This will resample your input data to 0.75x0.75x1 mm3, which is a coarse resolution compared to your initial data. I would recommend changing this to a finer resolution, eg the finest resolution encountered in your dataset.
Based on the resolution you choose (see above), this may change: is the cord fully included (ie not partially cropped) when you do a center-wise cropping of 128x128 vox²? This will need to be checked. As general advice for optimising the preprocessing and data-augmentation parameters, I encourage you to use this script, which gives you an easy way to visualize the transformations. Hope it helps :-)
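The crop-size check suggested above boils down to comparing the structure's physical extent with the crop's physical footprint at the resampled resolution; a minimal stdlib sketch with illustrative numbers (not measurements from this dataset):

```python
# Illustrative check: does a structure of a given physical extent fit
# inside a center crop of crop_vox voxels at a given in-plane resolution?
def fits_in_crop(extent_mm, resolution_mm, crop_vox):
    """Return True if extent_mm fits inside crop_vox voxels at resolution_mm."""
    return extent_mm / resolution_mm <= crop_vox

# A ~15 mm wide cord at 0.75 mm in-plane easily fits in a 128-voxel crop...
coarse_ok = fits_in_crop(15.0, 0.75, 128)
# ...and still fits at a finer 0.2 mm resolution (75 voxels < 128).
fine_ok = fits_in_crop(15.0, 0.2, 128)
# But a 30 mm field of interest at 0.2 mm would need 150 voxels,
# exceeding a 128x128 crop.
too_big = fits_in_crop(30.0, 0.2, 128)
```

In practice the extent should come from the actual segmentations, and the visual check with the script mentioned above remains the authoritative test.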
Hi @charleygros Thanks for your prompt responses. I got much better results after adjusting the resolution, and I was also able to do a multiclass segmentation (attached image). Now I am working on the .joblib files to customize the train/valid/test split. (If there is any tutorial about it, it would help me.) I still have some questions.
Because @arnaudletroter and I think that if we deactivate this package, we can load into the training set the true training datasets plus externally augmented training datasets.
Finally, I am quite happy to work with such a comfortable and intuitive framework! Ivadomed and its scripts are great. Best, Nilser
Thank you for your feedback @Nilser3 and your kind words! Appreciated. Also, I'm glad you got better results. I can't help you with the joblib until Tuesday, but maybe @mariehbourget could?

No data augmentation: you just need to remove the augmentation transformations from the config file (eg RandomAffine), while keeping the Resampling, Crop, NormalizeInstance and NumpyToTensor steps.

DA on validation: that's an interesting question. It is common not to apply DA to the validation set, in order to have a realistic, independent dataset as reference, whereas DA on the training set is used to avoid overfitting. I would suggest not applying DA to the validation set.
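The split of responsibilities described above (deterministic preprocessing everywhere, random augmentation only on the training set) can be mimicked outside any framework; a minimal stdlib sketch with illustrative function and parameter names (not ivadomed's API):

```python
import random

def make_transform(train: bool):
    """Compose preprocessing; random augmentation only for the training set."""
    def transform(sample):
        # Deterministic steps (stand-ins for resampling/cropping/normalizing)
        # are applied to every set, training and validation alike.
        out = [float(x) for x in sample]
        if train:
            # Stand-in for a RandomAffine-style augmentation: training only,
            # so the validation set stays a realistic, independent reference.
            shift = random.uniform(-0.1, 0.1)
            out = [x + shift for x in out]
        return out
    return transform

random.seed(0)
val_out = make_transform(train=False)([1, 2])    # unchanged by DA
train_out = make_transform(train=True)([1, 2])   # randomly perturbed
```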
Hi @Nilser3, Regarding the
So you will have to specify two things:
Once the
Please let us know if you have any questions or concerns about this procedure. Best,
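As I understand the procedure above, the split file is essentially a serialized record of which subjects belong to each set. A hedged stdlib sketch of building such a split (the train/valid/test keys, fractions, and the joblib.dump call in the comment are assumptions to verify against the ivadomed version in use):

```python
import random

def split_subjects(subjects, frac_train=0.6, frac_valid=0.2, seed=42):
    """Shuffle subjects and split them into the train/valid/test lists
    that a custom split file (e.g. a .joblib) would store."""
    rng = random.Random(seed)  # fixed seed -> reproducible split
    shuffled = subjects[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * frac_train)
    n_valid = int(len(shuffled) * frac_valid)
    return {
        "train": shuffled[:n_train],
        "valid": shuffled[n_train:n_train + n_valid],
        "test": shuffled[n_train + n_valid:],
    }

subjects = [f"sub-{i:02d}" for i in range(10)]
split = split_subjects(subjects)
# Persisting it for ivadomed would then be something like (assumed API):
#   joblib.dump(split, "split_datasets.joblib")
```

Generating several such splits with different seeds is also the natural way to run the cross-validation-style comparison discussed later in this thread.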
Thanks for your reply @charleygros. I did a multiclass and a single-class (x2) training with the same parameters (attached image), and although the results are not identical, they are already much better. Red: single-class (x2). Thanks for your help, I will continue with the optimization.
@arnaudletroter @Nilser3 @charleygros I have additional thoughts about the multiclass WM/GM vs. single-class SC and single-class GM. If we provide users with a WM/GM multiclass model, users would need to sum both masks to obtain the SC segmentation. However, this scenario makes it more difficult to manually correct the GM mask in case the segmentation is not perfect (and as we know, 100% robust methods don't exist). For example, if users manually adjust the GM prediction and then want to obtain the SC segmentation, they cannot simply sum the WM and GM masks, because there will be overlap (or missing voxels) at the interface between the WM and GM masks. So this also requires manually adjusting the WM prediction. Whereas if users segment the SC and the GM (as is currently the recommended workflow in SCT), the WM mask is obtained by subtracting the GM from the SC mask. So if they manually adjust the GM mask (or the SC mask), they can still obtain the WM mask using a simple subtraction, without worrying about missing/added voxels at the interface between the two tissues.
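The asymmetry described above can be shown with toy binary masks; a minimal stdlib sketch (values are illustrative):

```python
# Toy binary masks over six voxels: SC is the whole cord, GM a subset of it.
sc = [1, 1, 1, 1, 1, 0]
gm = [0, 0, 1, 1, 0, 0]

# SCT-style workflow: WM is derived by subtracting GM from SC.
wm = [int(s and not g) for s, g in zip(sc, gm)]

# A rater manually grows the GM mask by one voxel at the WM/GM interface.
gm_fixed = [0, 1, 1, 1, 0, 0]

# Subtraction stays consistent: the rederived WM plus the edited GM
# still exactly tiles the cord, with no overlap and no hole.
wm_fixed = [int(s and not g) for s, g in zip(sc, gm_fixed)]
tiles_cord = [w + g for w, g in zip(wm_fixed, gm_fixed)] == sc

# With a WM/GM model, summing the edited GM with the *unchanged* WM
# double-counts the voxel at the old interface (a value of 2 below),
# so the WM prediction would also need manual correction.
naive_sum = [w + g for w, g in zip(wm, gm_fixed)]
overlap = any(v > 1 for v in naive_sum)
```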
Exactly @jcohenadad. When we initially started working with @arnaudletroter on the multiclass-segmentation network, we had to train WM and GM and finally add them together to obtain the SC. But now with ivadomed I have been able to train SC and GM in the same model, and I agree with you: it would allow us to have a more precise WM. Now I am analyzing statistics to see whether a multiclass SC/GM model is equal to, better than, or worse than a single-class model (x2), trained under the same parameters and datasets of course.
Hello @charleygros @jcohenadad! I would like to share with you the results of the multiclass and single-class comparison, trained with the same parameters. Of course! Also, I have a question: when we apply data augmentation, is a composition of RandomAffine and ElasticTransform applied to each image to obtain an augmented image? In other words, for every X images in the training set will we have X augmented images? Thanks a lot! Nilser
Thank you for sharing your results in #841 (comment) @Nilser3. I will let @charleygros answer your question, but I have a few comments about this comparison:
Thanks to @mariehbourget I have been able to define the train/valid/test split, so for both trainings I have used the same .joblib file. However, the idea of cross-validation is very interesting; I will apply it right away.
I second this idea of multiple trainings with different train/valid/test subsets. It would be a valuable analysis.
For each epoch, the X training samples are augmented with different random parameters. In pseudo code it would be something like:
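A minimal stdlib sketch of that loop (names are illustrative, not ivadomed's API; the point is that random parameters are redrawn for every sample at every epoch):

```python
import random

def random_affine(sample, rng):
    """Stand-in for a RandomAffine-like transform: new parameters per call."""
    shift = rng.uniform(-0.5, 0.5)
    return [x + shift for x in sample]

def train(dataset, n_epochs, seed=0):
    """Augment every sample at every epoch with freshly drawn parameters."""
    rng = random.Random(seed)
    seen = []
    for _epoch in range(n_epochs):
        for sample in dataset:
            # Same sample, different epoch -> different augmented version,
            # so the network never sees the exact same input twice.
            seen.append(tuple(random_affine(sample, rng)))
    return seen

# Two samples, three epochs: six augmented versions, all distinct.
augmented = train([[1.0, 2.0], [3.0, 4.0]], n_epochs=3)
```

So the answer to the question above is that augmentation does not create a fixed second copy of the dataset; it is applied on the fly, each epoch.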
Hope it helps!
Hi everyone, Thanks to the very valuable contributions of @jcohenadad, @charleygros and @mariehbourget, we were able to obtain a 7T multiclass segmentation model (SC/GM) with very satisfactory results. Now we would like to share this model with the community, and if possible make it compatible with SCT's sct_deepseg command. Thank you!!
amazing! i suggest to start by creating a repos under ivadomed org where the model will be located (as done for other models), and add a link to this model in SCT's deepseg function. @kousu @joshuacwnewton @taowa could you please help? thanks!! |
I don't think @Nilser3 has the permissions to create a new repository under the @ivadomed org. But, I would be happy to create the repo on their behalf, and then grant them individual access to that repo. 🙂
This is quite easy to do! Here is an example of a recent pull request that added a model to
So, I would recommend:
Then I would be happy to test out and review your changes. :)
I've created the https://github.com/ivadomed/model_seg_gm-wm_t2star_7T_unet3d-multiclass repository, and sent out an invite to @Nilser3 that would give Maintain access to that repository. 🙂 |
Hi everyone!! Thanks @joshuacwnewton, I have been able to upload our model to the https://github.com/ivadomed/model_seg_gm-wm_t2star_7T_unet3d-multiclass repository and create a PR, and I modified spinalcordtoolbox/deepseg/models.py in a fork (I'm really sorry, I am new to Git). However, I still can't get the .zip file from my repository to be a direct download for the model installation. Thanks very much for your help
Thank you so much for the contribution! This is an excellent start. 😄
Git can be tricky to work with, even for us, so no worries whatsoever! We're grateful to have your contribution, and happy to help wherever we can, so feel free to ask questions. 😄 To get a zip file as a direct download, you just need to:
Here is an example release -- you can see the "Source code" zip downloads attached by GitHub under "Assets". |
Thank you so much @joshuacwnewton, our release is OK at https://github.com/ivadomed/model_seg_gm-wm_t2star_7T_unet3d-multiclass/archive/refs/tags/v1.0.zip. Now, on the recommendation of @jcohenadad, I would like to rename the repo https://github.com/ivadomed/model_seg_gm-wm_t2star_7T_unet3d-multiclass but I can't; could you please change it to /model_seg_gm-sc_t2star_7t_unet2d-multiclass? Thanks a lot!
My apologies! I thought "Maintain" privileges would let you do that. Got it -- I've renamed the repo! FYI -- I also renamed the release to match the date-based convention used for other models. ( The new link to the |
Motivation for the feature
7T researchers cannot use SCT's segmentation models as is, because features of the data are too different from the 3T data used to train the models. Moreover, it would be desirable to have a multi-class model (SC + GM) instead of two separate models.
Description of the feature
Researchers from the CRMBM in Marseille (Nilser Laines, Arnaud Le Troter and Virginie Callot) trained a model which would be beneficial for SCT users. The purpose of this issue is to document for the Marseille team (and everybody else wanting to chip in) the steps involved in training such a model within the ivadomed framework. The benefit is that the model will then be compatible with SCT's sct_deepseg command.