Hi Yiqiu,

I am considering fine-tuning the models in this repo on a set of mammograms with image-level labels (only normal and malignant). I have the following questions and would appreciate your advice.
To fine-tune the pretrained model, what loss function should I use? Should I use the binary cross-entropy between the predicted output (line 183 of run_model.py) and the ground-truth image label, or should I use the loss function given in Eq. (13) of your paper to re-train the model?
Is it reasonable to use the predicted benign probability, produced at line 183 of run_model.py, as the probability of a normal case when fine-tuning the model?
For image pre-processing, which method should be applied (a short sketch of the three options follows below)?
a) For each single training image, subtract its own mean and divide by its own std.
b) For each mini-batch, subtract the mean and divide by the std, where mean and std are computed over all images in that mini-batch.
c) For the whole training set, subtract the mean and divide by the std, where mean and std are computed over all images in the training set.
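To make the three options concrete, here is a small sketch of what I mean; the helper names are mine, not from the repo.

```python
import numpy as np

def standardize_per_image(img):
    # a) each image is normalized with its own statistics
    return (img - img.mean()) / (img.std() + 1e-8)

def standardize_per_batch(batch):
    # b) statistics computed over all images in one mini-batch, shape (N, H, W)
    mean, std = batch.mean(), batch.std()
    return (batch - mean) / (std + 1e-8)

def standardize_with_dataset_stats(img, dataset_mean, dataset_std):
    # c) statistics precomputed once over the whole training set
    return (img - dataset_mean) / (dataset_std + 1e-8)
```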
Thanks for your time; I look forward to your feedback.
You can start with BCE, but eventually Equation 13 will give you better performance. The regularization term |A| is basically model.saliency_maps.mean().
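A minimal sketch of what that combined objective could look like: BCE on the image-level prediction plus an L1-style penalty on the saliency maps. The regularization weight `lambda_reg` and the forward-call signature are assumptions; adapt them to how the model is invoked in run_model.py.

```python
import torch
import torch.nn.functional as F

lambda_reg = 1e-4  # placeholder weight for the saliency regularization term

def fine_tuning_loss(model, images, labels):
    preds = model(images)                        # assumed: predicted probabilities in [0, 1]
    bce = F.binary_cross_entropy(preds, labels)  # image-level classification term
    reg = model.saliency_maps.mean()             # the |A| term mentioned above
    return bce + lambda_reg * reg
```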
This depends on your dataset. If all breasts in your dataset are either benign or malignant, then you might want to make the model output only a single score indicating whether there is any malignant lesion. If your dataset also contains normal breasts that don't have any benign or malignant lesions, then you should use BCE.
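For the single-score setup, a hypothetical sketch is below: one logit per image trained with BCE against a 0/1 "malignant" label. The feature dimension (512) and the way backbone features are obtained are placeholders, not the repo's actual code.

```python
import torch
import torch.nn as nn

class SingleScoreHead(nn.Module):
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.fc = nn.Linear(feature_dim, 1)  # one score: "is there any malignant lesion?"

    def forward(self, features):
        return self.fc(features).squeeze(-1)  # shape: (batch,)

# usage sketch with random tensors standing in for backbone features and labels
head = SingleScoreHead()
criterion = nn.BCEWithLogitsLoss()
features = torch.randn(4, 512)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = malignant, 0 = normal
loss = criterion(head(features), labels)
```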