
Training and inference discussion for active learning round 1 #35

Closed
2 of 4 tasks
rohanbanerjee opened this issue Mar 18, 2024 · 8 comments


rohanbanerjee (Collaborator) commented Mar 18, 2024

Continuation from the previous round of training: #34

What is the round 1 model

The round 1 model is the one trained on the images that passed (✅) the QC described in #34. A total of 30 images were added to the training of this model, since we fine-tuned the previously trained baseline model.

The models were trained in 2 different settings (explained in #36):

- Re-training: a fold_all model (discussion can be found here: MIC-DKFZ/nnUNet#1364 (comment)) trained with 126 images (baseline data + manually corrected data), referred to as re-training from now on.
- Fine-tuning: a model fine-tuned from the previously trained baseline, referred to as fine-tuning from now on.

A list of subjects (for later reference) used for the re-training is below: retraining.json

A list of subjects used for the fine-tuning is below: finetuning.json

The configs (containing preprocessing and hyperparameter settings) for nnUNetv2 training are:

Config file for re-training: plans.json

Config file for fine-tuning: plans.json
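As a side note (not part of the thread): the two attached plans.json files could be compared with a small recursive diff to spot exactly which preprocessing or hyperparameter settings differ between the re-training and fine-tuning runs. The inline dictionaries below are stand-ins for illustration, not the actual configs:

```python
import json

def diff_configs(a: dict, b: dict, prefix=""):
    """Recursively yield (key_path, value_a, value_b) for keys that differ."""
    for key in sorted(set(a) | set(b)):
        path = f"{prefix}{key}"
        va, vb = a.get(key), b.get(key)
        if isinstance(va, dict) and isinstance(vb, dict):
            # Descend into nested sections (e.g. per-configuration settings).
            yield from diff_configs(va, vb, prefix=path + ".")
        elif va != vb:
            yield path, va, vb

# Stand-in dictionaries; in practice, load the two attached files with json.load().
retrain = {"batch_size": 2, "patch_size": [64, 192, 192], "lr": 0.01}
finetune = {"batch_size": 2, "patch_size": [64, 192, 192], "lr": 0.001}
for path, va, vb in diff_configs(retrain, finetune):
    print(f"{path}: re-training={va} fine-tuning={vb}")
```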

The steps to reproduce the above QC results (i.e. run inference) are the following:

  1. Clone this repo
  2. cd fmri-segmentation
  3. Download the model weights (the whole folder) from this link: https://drive.google.com/drive/folders/1WSn-15wGWz6i2_aZeQTwKls2sZ6dpfHf?usp=share_link
  4. Install dependencies: pip install -r run_nnunet_inference_requirements.txt
  5. Run the command: python run_nnunet_inference.py --path-dataset <PATH TO FOLDER CONTAINING IMAGES, SUFFIXED WITH _0000> --path-out <PATH TO OUTPUT FOLDER> --path-model <PATH TO DOWNLOADED WEIGHTS FOLDER>
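Note that nnUNetv2 expects each input image to carry a channel-index suffix (`_0000` for the first channel), which is why the `--path-dataset` folder above must contain files suffixed with `_0000`. Not part of the repo: a minimal sketch of a helper (the function name is an assumption) that adds the suffix to plain `.nii.gz` files before inference:

```python
from pathlib import Path

def add_channel_suffix(folder: Path, suffix: str = "_0000") -> list[str]:
    """Rename *.nii.gz files so they end with the nnUNet channel suffix,
    e.g. sub-01_bold.nii.gz -> sub-01_bold_0000.nii.gz.
    Returns the resulting file names (sorted by original name)."""
    renamed = []
    for f in sorted(folder.glob("*.nii.gz")):
        stem = f.name[: -len(".nii.gz")]
        if stem.endswith(suffix):
            renamed.append(f.name)  # already suffixed, leave as-is
        else:
            target = f.with_name(f"{stem}{suffix}.nii.gz")
            f.rename(target)
            renamed.append(target.name)
    return renamed
```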

Next steps:

rohanbanerjee (Collaborator, Author) commented Apr 3, 2024

Regarding the choice between re-training and fine-tuning, I checked the results qualitatively and found that:

  1. fine-tuning performs better in most cases, e.g. segmenting the first and last slices (re-training inference misses the first and last slices in some cases).
  2. fine-tuning learns the shape of the spinal cord better than re-training, hence resulting in more precise segmentations.
  3. Discussions in Retraining vs Fine-tuning in nnUNetv2 #36 also suggest fine-tuning is a better choice.

I have therefore come to the conclusion that fine-tuning is the better strategy for our problem for this round of training, and will go ahead with fine-tuning in the next rounds of iterations too.

rohanbanerjee (Collaborator, Author) commented Apr 3, 2024

The list of 30 subjects chosen for manual corrections is below:
qc_fail.yml.zip

The QC report for the manually corrected segmentations (corrections done by @MerveKaptan and me) for the above 30 subjects is below:

qc_round_1_corrected.zip

@jcohenadad we would like your input on the above manually corrected images. We will use these images for the next round of training.

CC: @MerveKaptan

jcohenadad (Member) commented:

I cannot 'save all' (@joshuacwnewton would you mind looking into this?)

here are the saved FAIL and ARTIFACTS: Archive.zip

joshuacwnewton (Member) commented:

> I cannot 'save all' (@joshuacwnewton would you mind looking into this?)

It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)

rohanbanerjee (Collaborator, Author) commented:

> It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of main.js in the uploaded folder is missing some key functions needed for 'Save All' to work.)

I did a git fetch and a git pull before using SCT to generate the above report. When I run sct_check_dependencies, it shows this SHA: git-master-7b8600645b9df14d18e79d3f78bc0c9fe80c3199

jcohenadad (Member) commented:

@MerveKaptan @rohanbanerjee every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.

rohanbanerjee (Collaborator, Author) commented Apr 11, 2024

> @MerveKaptan @rohanbanerjee every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.

Yes, all the artifact images are tracked here: #25 (comment)
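Not from the thread: a sketch of how one might sanity-check that every artifact-labeled file actually made it into exclude.yml. The flat `- filename` layout is an assumption about the file's structure; the parser below deliberately handles only that simple case to avoid a YAML dependency:

```python
def missing_from_exclude(exclude_yml_text: str, artifact_files: list[str]) -> list[str]:
    """Return artifact files not yet listed in a flat '- filename' YAML list."""
    listed = {
        line.strip()[2:].strip()
        for line in exclude_yml_text.splitlines()
        if line.strip().startswith("- ")
    }
    return [f for f in artifact_files if f not in listed]

# Hypothetical exclude.yml contents and artifact list for illustration:
yml = "- sub-02_bold.nii.gz\n- sub-07_bold.nii.gz\n"
print(missing_from_exclude(yml, ["sub-02_bold.nii.gz", "sub-11_bold.nii.gz"]))
# → ['sub-11_bold.nii.gz']
```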

rohanbanerjee (Collaborator, Author) commented:

Closing the issue since the round 1 training was successfully completed (including running inference and manual correction).
