
Does this model need pixel-level segmentation masks of malignant and benign lesions? #10

Closed
Steve-Pan opened this issue Sep 23, 2020 · 4 comments


@Steve-Pan

Hi, according to your paper, it seems that training and inference of this model only require image-level labels, with no need for annotations of malignant and benign lesions. However, from the code in this repo, segmentation paths for malignant and benign lesions are required to run the code. If I don't have segmentation masks of malignant and benign lesions, how can I train and test your model on my own images? Looking forward to your response.

@seyiqi
Collaborator

seyiqi commented Sep 23, 2020

Hi Steve,

You are correct that the model does not require segmentation for training.

The reason this repo includes segmentation is to generate visualizations comparing the saliency maps with the ground-truth segmentation. You can either disable visualization in the run.sh file (unset --visualization-flag) or supply random images as the ground-truth segmentation (see the sketch below).
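For the second option, here is a minimal sketch that writes random binary masks to disk. The output directory, filenames, and image size are hypothetical; match them to whatever your data loader expects:

```python
import os
import numpy as np
from PIL import Image

out_dir = "sample_data/segmentation"  # hypothetical path
os.makedirs(out_dir, exist_ok=True)

# Hypothetical filenames and image size; adapt to your dataset layout.
for name in ["example_malignant.png", "example_benign.png"]:
    # Random binary mask, saved as an 8-bit grayscale PNG.
    mask = (np.random.rand(2944, 1920) > 0.5).astype(np.uint8) * 255
    Image.fromarray(mask).save(os.path.join(out_dir, name))
```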

Hope it helps.

@seyiqi seyiqi closed this as completed Sep 23, 2020
@Steve-Pan
Author

Steve-Pan commented Sep 24, 2020 via email

@seyiqi
Collaborator

seyiqi commented Sep 24, 2020

Hi Steve,

Here is what I would do:

  • Load the pretrained model. We provide five; you can pick any one of them.
  • Change the activation function of the global / local / fusion modules to softmax. It is currently sigmoid in this repo.
  • Use cross-entropy loss (CELoss) to train your model (a sketch follows this list).
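A minimal, self-contained sketch of the second and third steps. This is not the repo's training code; the toy linear head and feature dimensions are stand-ins for the global / local / fusion classifier heads:

```python
import torch
import torch.nn as nn

# Toy stand-in for one of the global / local / fusion heads (2 classes).
head = nn.Linear(256, 2)

# CrossEntropyLoss applies log-softmax internally, so the head must
# output raw logits -- no sigmoid (or softmax) before the loss.
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

features = torch.randn(8, 256)        # dummy batch of pooled features
labels = torch.randint(0, 2, (8,))    # class indices (not one-hot)

logits = head(features)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()

# Softmax (replacing sigmoid) is applied explicitly only at inference.
probs = torch.softmax(logits, dim=-1)
```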

Hope this helps :)

@Hong-Swinburne

Hi Yiqiu,

Thanks for your valuable advice.

Actually, I cannot find which loss function is used in the current code. It would be greatly appreciated if there were a chance to obtain the training code.

To fine-tune the pretrained model, should I tune all of the global, local, and fusion modules simultaneously, or just the fusion module? Since each module may affect the performance of the final model, what is the proper way/order to tune the model, and which parameters are particularly important in the fine-tuning process?

Any suggestion is more than welcome. Thank you.
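For the "fusion module only" option raised above, one common way to experiment is to freeze the other modules' parameters before building the optimizer. A minimal sketch with a toy stand-in model; the submodule names are hypothetical and should be replaced with the ones actually defined in this repo:

```python
import torch
import torch.nn as nn

# Toy stand-in mirroring the three-module structure (names hypothetical).
model = nn.ModuleDict({
    "global_net": nn.Linear(256, 2),
    "local_net": nn.Linear(256, 2),
    "fusion": nn.Linear(512, 2),
})

# Freeze everything except the fusion module.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("fusion")

# Pass only the trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)
```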
