
Training and inference discussion for active learning round 2 #38

Closed · 3 tasks done
rohanbanerjee opened this issue Apr 11, 2024 · 4 comments


rohanbanerjee (Collaborator) commented Apr 11, 2024

Continuation from the previous round of training: #35

What is the round 2 model?

The round 2 model is the model obtained by fine-tuning the round 1 model on the manually corrected segmentations from the QC described in #35. Because this was fine-tuning of the previously trained round 1 model rather than training from scratch, a total of 26 images were added to the training set.

The list of subjects used for the fine-tuning is below: finetuning.yml

The nnUNetv2 training config (preprocessing and hyperparameters) is: plans.json
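For context, here is a minimal sketch of how a fine-tuning run like this can be launched with the standard nnUNetv2 CLI; the dataset ID (101), configuration, fold, environment paths, and checkpoint path are hypothetical placeholders, not values from this issue:

```bash
# Sketch only (assumed workflow): fine-tune from the round 1 checkpoint using
# nnUNetv2's -pretrained_weights option. All IDs and paths are placeholders.
export nnUNet_raw=/path/to/nnUNet_raw
export nnUNet_preprocessed=/path/to/nnUNet_preprocessed
export nnUNet_results=/path/to/nnUNet_results

# Re-run planning/preprocessing on the dataset that now includes the 26
# manually corrected images
nnUNetv2_plan_and_preprocess -d 101 --verify_dataset_integrity

# Train fold 0 of the 3d_fullres configuration, initializing the network
# weights from the round 1 checkpoint instead of training from scratch
nnUNetv2_train 101 3d_fullres 0 \
    -pretrained_weights /path/to/round1_model/checkpoint_final.pth
```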

After training completed, I ran inference on the remaining 226 images whose segmentations are to be included in subsequent rounds of training; the QC report is below. 40 of these subjects will be chosen and included in round 3 of training:

qc_round2_inference.zip
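As a side note, if the QC report above was generated with the Spinal Cord Toolbox (an assumption on my part; the tooling is not stated here), a per-subject command along these lines produces such an HTML QC report:

```bash
# Sketch only: add one QC entry overlaying a predicted segmentation on its
# input image using Spinal Cord Toolbox's sct_qc. Filenames and the process
# name (-p) are hypothetical placeholders.
sct_qc -i sub-001_bold_mean.nii.gz \
       -s sub-001_bold_mean_seg.nii.gz \
       -p sct_deepseg_sc \
       -qc qc_round2_inference
```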

The steps to reproduce the above QC results (i.e., to run inference) are the following:

  1. Clone this repository
  2. cd fmri-segmentation
  3. Download the model weights (the whole folder) from: https://drive.google.com/drive/folders/1WSn-15wGWz6i2_aZeQTwKls2sZ6dpfHf?usp=share_link
  4. Install the dependencies:
     pip install -r run_nnunet_inference_requirements.txt
  5. Run the inference command (a sketch for preparing the _0000-suffixed input folder follows this list):
     python run_nnunet_inference.py --path-dataset <PATH TO FOLDER CONTAINING IMAGES, SUFFIXED WITH _0000> --path-out <PATH TO OUTPUT FOLDER> --path-model <PATH TO DOWNLOADED WEIGHTS FOLDER>
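Since run_nnunet_inference.py expects nnU-Net-style filenames ending in the _0000 channel suffix, here is a minimal sketch of preparing such an input folder; the source path and single-channel NIfTI layout are assumptions, not details from this issue:

```bash
# Sketch only: copy single-channel NIfTI images into an input folder, appending
# the _0000 channel suffix expected by nnU-Net / run_nnunet_inference.py.
# Source and destination paths are hypothetical placeholders.
mkdir -p input_images
for f in /path/to/raw_images/*.nii.gz; do
    base=$(basename "$f" .nii.gz)
    cp "$f" "input_images/${base}_0000.nii.gz"
done
```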

Next steps:

rohanbanerjee (Collaborator, Author) commented:

The list of 40 subjects chosen for manual corrections is below:
qc_fail.yml.zip

The QC report for the segmentations I manually corrected for the above 40 subjects is below:

qc_round_2_corrected.zip

@jcohenadad, I would like your input on the above manually corrected images. I will use these images for the next round of training.

CC: @MerveKaptan

rohanbanerjee (Collaborator, Author) commented:

Julien: Redo KCL subjects

jcohenadad (Member) commented:

Here you go: qc_flags.json

Only two issues:

(screenshot attached)

@rohanbanerjee
Copy link
Collaborator Author

Closing the issue since round 2 training was successfully completed (including running inference and manual correction).
