
Questions regarding the test data set creation #4

Open
saskra opened this issue Oct 23, 2023 · 6 comments

saskra commented Oct 23, 2023

I have a few questions to understand how these very good metrics came about. Do I see it correctly that the augmentations were done first and only then the split into the three subsets for training, validation and testing? Doesn't that lead to overfitting, or to mere recognition of images, when augmented variants of the same image end up in both the training and the test dataset? Or was this prevented and I missed it? I think the usual approach would be to use a dataloader with a Weighted Random Sampler and augmentations for training only, but leave the test dataset unchanged. How does the quality of your model (accuracy, average F1, MCC) actually look on the official test dataset? https://dataverse.harvard.edu/file.xhtml?persistentId=doi:10.7910/DVN/DBW86T/OSKJF2&version=4.0
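
A minimal sketch of the setup described here (split first, augment the training data only, oversample with a WeightedRandomSampler); the paths and transforms are illustrative and not taken from this repository:

```python
import torch
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, transforms

# Augmentations apply to the training split only.
train_tf = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(20),
    transforms.ToTensor(),
])
eval_tf = transforms.Compose([transforms.ToTensor()])  # no augmentation for val/test

train_ds = datasets.ImageFolder("data/train", transform=train_tf)
test_ds  = datasets.ImageFolder("data/test",  transform=eval_tf)

# Balance classes by oversampling at load time instead of writing augmented
# copies to disk before the split.
targets = torch.tensor(train_ds.targets)
class_counts = torch.bincount(targets)
weights = (1.0 / class_counts.float())[targets]
sampler = WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

train_loader = DataLoader(train_ds, batch_size=64, sampler=sampler)
test_loader  = DataLoader(test_ds, batch_size=64, shuffle=False)
```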

Woodman718 (Owner) commented

Thank you for your attention to my work and for pointing out some dataset-related issues in the comments.
The official dataset is divided into three sets, training, validation, and test, in roughly an 8:1:1 ratio. Specifically, 828 samples were first randomly drawn from the original data as Validation_data. The remaining samples were divided into Test_data (1006) and Train_data (8181) in a 1.1:9 ratio; the detailed distribution is shown in 'Images/Dis_HAM10000_GP.png'.
Without data augmentation, the results obtained are as shown in Section 2.1 'Evaluation metrics and LCK' of the README.md. In addition, the specific code for evaluating metrics based on the confusion matrix can be found in Module/utils.py.
Furthermore, we suggest checking out the preprint of our paper if you're interested.
Preprint: https://www.techrxiv.org/articles/preprint/A_Novel_Skin_Cancer_Assisted_Diagnosis_Method_based_on_Capsule_Networks_with_CBAM/23291003/1
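
A rough sketch of that split (828 validation images drawn first, then the remainder divided ~1.1:9 into test and train); the metadata file name, the "dx" column and the random seeds are assumptions here, and the repository's own scripts in tools/ are authoritative:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

meta = pd.read_csv("HAM10000_metadata.csv")  # 10015 images in total

# 828 randomly sampled images form the validation set.
val = meta.sample(n=828, random_state=0)
rest = meta.drop(val.index)

# 1.1 / (1.1 + 9) ≈ 0.109, which yields roughly 1006 test and 8181 train images.
train, test = train_test_split(rest, test_size=1.1 / 10.1,
                               stratify=rest["dx"], random_state=0)
print(len(train), len(val), len(test))
```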

Woodman718 (Owner) commented Oct 25, 2023

The evaluation metrics on the test set can also be improved by fine-tuning some hyperparameters; for instance, the accuracy can be raised from 96.52% to 96.92%. However, I don't think this contributes much to research on the topic itself, it is just a tip.
Although the division of the new dataset based on data augmentation is tailored to the existing models and data, we are also considering further improvement strategies. Thank you once again for your attention.

saskra (Author) commented Nov 8, 2023

Can you publish a script for preprocessing the HAM10000 data and splitting it into the three subsets? I only get 50% accuracy with my own variant, even using your code and model, so that might be due to the data.

mfuchs93 commented

> Can you publish a script for preprocessing the HAM10000 data and splitting it into the three subsets? I only get 50% accuracy with my own variant, even using your code and model, so that might be due to the data.

I think you can use the scripts from https://github.com/Woodman718/CapsNets/tree/main/tools to split the data:
01-Skin_Distinction.md for getting the validation set (828 images),
02-split_skin_cancer.ipynb for getting the test set (1006 images).

I think there could be a problem with the test set, though. Isn't it possible that there are images of the same lesion_id in both the train set and the test set? For the validation split, lesion_ids that have multiple images are explicitly excluded.
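
A quick way to check that leakage concern is to count lesion_ids that occur in both splits; the split file names and their layout (one image_id per row) are assumptions here:

```python
import pandas as pd

meta = pd.read_csv("HAM10000_metadata.csv")   # maps image_id -> lesion_id
train = pd.read_csv("train_split.csv")        # hypothetical split files
test = pd.read_csv("test_split.csv")

train_lesions = set(meta.loc[meta.image_id.isin(train.image_id), "lesion_id"])
test_lesions = set(meta.loc[meta.image_id.isin(test.image_id), "lesion_id"])

# Any overlap means different images of the same lesion are split across sets.
overlap = train_lesions & test_lesions
print(f"{len(overlap)} lesion_ids appear in both train and test")
```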

mfuchs93 commented

> I have a few questions to understand how these very good metrics came about. Do I see it correctly that the augmentations were done first and only then the split into the three subsets for training, validation and testing? Doesn't that lead to overfitting, or to mere recognition of images, when augmented variants of the same image end up in both the training and the test dataset? Or was this prevented and I missed it? I think the usual approach would be to use a dataloader with a Weighted Random Sampler and augmentations for training only, but leave the test dataset unchanged. How does the quality of your model (accuracy, average F1, MCC) actually look on the official test dataset? https://dataverse.harvard.edu/file.xhtml?persistentId=doi:10.7910/DVN/DBW86T/OSKJF2&version=4.0

I was able to execute the code in https://github.com/Woodman718/CapsNets/blob/main/Experiment/HAM10000_9652.zip. I had to use a batch size of 64 due to limited RAM and achieved 96.26% accuracy on the validation set.
I then tested the model on the official HAM10000 test set and achieved an accuracy of 83.23%:

              precision    recall  f1-score   support

       AKIEC       0.78      0.62      0.69        45
         BCC       0.68      0.88      0.77        93
         BKL       0.71      0.76      0.74       217
          DF       0.92      0.27      0.42        44
         MEL       0.60      0.84      0.70       171
          NV       0.96      0.88      0.92       909
        VASC       0.78      0.83      0.81        35

    accuracy                           0.83      1514
   macro avg       0.78      0.73      0.72      1514
weighted avg       0.86      0.83      0.83      1514
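
A minimal sketch of how a report like the one above can be produced with scikit-learn, assuming a trained `model` and a `test_loader` over the official test set already exist (both names are placeholders) and that the model returns one score per class:

```python
import torch
from sklearn.metrics import classification_report

labels = ["AKIEC", "BCC", "BKL", "DF", "MEL", "NV", "VASC"]
y_true, y_pred = [], []

model.eval()
with torch.no_grad():
    for images, targets in test_loader:
        scores = model(images)                    # per-class scores
        y_pred.extend(scores.argmax(dim=1).tolist())
        y_true.extend(targets.tolist())

# Prints per-class precision/recall/F1 plus macro and weighted averages.
print(classification_report(y_true, y_pred, target_names=labels, digits=2))
```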

Woodman718 (Owner) commented Feb 18, 2024

I optimized the loss function by introducing matrix norms and condition numbers, making the model more sensitive to errors. Changes to various hyperparameters will affect the final results; batch size in particular is a crucial hyperparameter that affects both learning and inference. In a previous experiment, adjusting the β parameter of the squash function improved the accuracy from 96.52% to 96.92%: setting β below its initial value of 1.45 once the model has converged improves the score (β=1.33, Acc=96.92%).

Similarly, adjusting the size of the LKC (Local Kernel Canonicalization) can also affect the score. With LKC=24 the accuracy reaches 96.98%, but with LKC=[11,15,24] the model fails to correctly identify the DF class. Furthermore, replacing adaptive max pooling with fractional max pooling can increase accuracy to 97.34%; however, the model then becomes unstable, with accuracy fluctuating by over 3%, and many repeated experiments may be required to obtain consistent results.
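
A minimal, standalone illustration of the pooling swap mentioned above (not the authors' architecture): both layers produce the same output size, but fractional max pooling picks its pooling regions pseudo-randomly, which is consistent with the run-to-run fluctuation described in the comment.

```python
import torch
import torch.nn as nn

feat = torch.randn(8, 256, 24, 24)  # hypothetical batch of feature maps

adaptive = nn.AdaptiveMaxPool2d(output_size=(8, 8))
fractional = nn.FractionalMaxPool2d(kernel_size=3, output_size=(8, 8))

print(adaptive(feat).shape)    # torch.Size([8, 256, 8, 8])
print(fractional(feat).shape)  # torch.Size([8, 256, 8, 8]), regions chosen randomly
```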
