Training and inference discussion for active learning round 1 #35
Comments
Regarding choosing in between, the [...]
I have therefore come to the conclusion that [...]
The list of 30 subjects chosen for manual corrections is below. The QC report for the manually corrected segmentations (done by @MerveKaptan and me) for the above 30 subjects is below. @jcohenadad, we would like your input on the manually corrected images above. We will use these images for the next round of training. CC: @MerveKaptan
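As an illustrative sketch of picking a batch of subjects for manual correction (the actual 30 subjects and the selection criteria are given in the attachment above; the pool size, seed, and BIDS-style IDs below are all assumptions):

```python
import random

# Hypothetical subject pool -- the real 30 subjects are listed in the
# issue attachment; these IDs are placeholders.
pool = [f"sub-{i:03d}" for i in range(1, 97)]

random.seed(0)                       # fixed seed so the pick is reproducible
chosen = random.sample(pool, k=30)   # 30 unique subjects for manual correction

print(len(chosen), len(set(chosen)))  # 30 unique picks
```

Fixing the seed makes the batch reproducible, which matters when the same subject list must be referenced across training rounds.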
I cannot "save all" (@joshuacwnewton, would you mind looking into this?). Here are the saved FAIL and ARTIFACTS: Archive.zip
It looks as though this QC report may have been generated with mixed versions of SCT? (The copy of [...]
I did a `git fetch` and a `git pull` before using SCT to generate the above report. When I run [...]
@MerveKaptan @rohanbanerjee, every time I label images as "artifact", do you add these files to the exclude.yml file? Please document these changes with a cross-reference to my comments where I link the commented QC reports.
Yes, all the artifact images are tracked here: #25 (comment)
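For reference, a minimal sketch of what such an `exclude.yml` could look like (the actual file lives in the repository and its schema may differ; the subject IDs, field names, and reasons below are all assumptions):

```yaml
# Assumed layout: images flagged as "artifact" during QC, excluded from training.
# Subject IDs and field names are placeholders, not the repo's real schema.
exclude:
  - subject: sub-004
    reason: motion artifact          # flagged in the linked QC report
    qc_comment: "#25 (comment)"      # cross-ref to the QC review comment
  - subject: sub-017
    reason: susceptibility artifact
    qc_comment: "#25 (comment)"
```

Keeping a cross-reference field per entry makes it possible to trace each exclusion back to the QC comment that motivated it, as requested above.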
Closing the issue since the round 1 training was successfully completed (including running inference and manual correction) |
Continuation from the previous round of training: #34

**What is the round 1 model?**

The model trained on the ✅ images as per the QCs mentioned in #34 is the `round 1` model. A total of 30 images were added to the training of this model, since we fine-tuned the previously trained `baseline` model. The models were trained in 2 different settings (explained in #36):
- 1 `fold_all` model (discussion can be found here: MIC-DKFZ/nnUNet#1364 (comment)), trained with 126 images (`baseline` data + manually corrected data), which will be called `re-training` from now on
- 1 fine-tuning model, which will be called `fine-tuning` from now on

A list of subjects (for later reference) used for the `re-training` is below: retraining.json
A list of subjects used for the `fine-tuning` is below: finetuning.json

The config (containing preprocessing and hyperparameters) for nnUNetv2 training is:
- Config file for re-training: plans.json
- Config file for fine-tuning: plans.json
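The two subject lists can be sanity-checked programmatically. A small sketch, assuming the attached JSON files contain a flat list of subject IDs (the file contents below are placeholders; the real lists and their schema are defined by retraining.json and finetuning.json):

```python
import json
import tempfile
from pathlib import Path

# Placeholder files standing in for the attached retraining.json /
# finetuning.json; the real contents and schema may differ.
tmp = Path(tempfile.mkdtemp())
(tmp / "retraining.json").write_text(json.dumps(["sub-001", "sub-002", "sub-003"]))
(tmp / "finetuning.json").write_text(json.dumps(["sub-002", "sub-003"]))

def load_subjects(path: Path) -> set:
    """Read a JSON subject list (assumed: a flat list of IDs)."""
    return set(json.loads(path.read_text()))

re_subjects = load_subjects(tmp / "retraining.json")
ft_subjects = load_subjects(tmp / "finetuning.json")

# Re-training uses baseline + manually corrected data, so the
# fine-tuning subjects should also appear in the re-training pool.
overlap = ft_subjects & re_subjects
print(sorted(overlap))
```

A check like this catches accidental leakage or mismatches between the two lists before launching a multi-hour training run.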
The steps to reproduce the above QC results (and to run inference) are the following:

```
cd fmri-segmentation
```
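On the training side, the two settings described above map onto standard nnUNetv2 commands. A hedged sketch (the dataset ID `501` and the checkpoint path are placeholders, not the values actually used in this project):

```
# Re-training: train on all data with fold "all"
# (a single model on the full training set, no cross-validation split).
nnUNetv2_train 501 3d_fullres all

# Fine-tuning: initialize from the baseline model's weights via
# the -pretrained_weights flag, then train as usual.
nnUNetv2_train 501 3d_fullres all \
    -pretrained_weights /path/to/baseline/checkpoint_final.pth
```

The `fold_all` setting trades cross-validated performance estimates for a single model trained on every available image, which is why a separate held-out test set (see the next steps) is needed for validation.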
Next steps:
- `held-out` test set (Creation of a `held-out` test dataset for active learning training phase validation, #33)