Questions about the BTCV performance in Table 3 of the paper #26
Hi, Table 3 is conducted using 5-fold cross-validation: you need to run training and validation five times and average the results. The dataset split is shown in BTCV_folds.json.
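The protocol described above can be sketched as follows. This is a minimal illustration, not the repository's actual training code: `run_fold` is a hypothetical placeholder for one training-plus-validation run, and the returned scores are dummy values.

```python
# Minimal sketch of the 5-fold protocol: run training and validation once
# per fold, collect the per-fold scores, and report the mean.

def run_fold(fold_idx):
    # Placeholder for one full run: train on the other four folds,
    # validate on fold `fold_idx`, return the held-out mean Dice.
    return 0.80 + 0.01 * fold_idx  # dummy values for illustration

def cross_validate(num_folds=5):
    scores = [run_fold(i) for i in range(num_folds)]
    return sum(scores) / len(scores)

print(round(cross_validate(), 4))  # average over the five folds
```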
How do we evaluate the results on the test set (img0061~0080)? I couldn't find the annotations. Is it verified online on the official website?
Thanks for the reply. In my understanding, the universal model is first trained on the assembled datasets, where the data split follows https://github.com/ljwztc/CLIP-Driven-Universal-Model/blob/main/dataset/dataset_list/PAOT_123457891213_train.txt, right? So to conduct 5-fold cross-validation with the given BTCV_folds.json, there should be five PAOT_123457891213_train.txt files? I find that these two files share some data in every validation fold.
Yes. It should be verified online on the official website. |
Yes. When conducting the BTCV experiment, the pre-training process should exclude the data from BTCV. |
But in PAOT.txt, the list includes 01_Multi-Atlas_Labeling/label/label0061.nii.gz ~ label0080.nii.gz.
The released codebase is for the main MSD-leaderboard experiment. When training the model for BTCV, you should exclude the BTCV data.
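One way to implement the exclusion described above is to filter the released training list per fold. This is a hypothetical sketch: the JSON layout of BTCV_folds.json (`{"fold_0": [...], ...}`) and the sample paths are assumptions for illustration, not the repository's confirmed format.

```python
# Hypothetical sketch: build a per-fold training list by dropping any
# BTCV case that appears in the current validation fold, so the
# pre-training data never overlaps with the held-out BTCV cases.

def build_fold_train_list(train_lines, folds, fold_key):
    """Keep only lines that reference none of the fold's held-out cases."""
    val_cases = set(folds[fold_key])
    return [ln for ln in train_lines
            if not any(case in ln for case in val_cases)]

# Illustrative data only (stand-ins for PAOT_123457891213_train.txt
# entries and a BTCV_folds.json mapping):
train_lines = [
    "01_Multi-Atlas_Labeling/img/img0001.nii.gz",
    "01_Multi-Atlas_Labeling/img/img0002.nii.gz",
    "04_LiTS/img/lits_0001.nii.gz",
]
folds = {"fold_0": ["img0002"]}

print(build_fold_train_list(train_lines, folds, "fold_0"))
```

Running this once per fold would yield the five fold-specific training lists the question above asks about.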
Thanks for the great work.
Is there a specific script to reproduce the results of the universal model in Table 3?