Does this model need pixel-level segmentation masks of malignant and benign lesions? #10
Comments
Hi Steve, you are correct that the model does not require segmentation for training. The reason this repo includes segmentation is to generate visualizations that compare the saliency maps with the ground-truth segmentation. You can either disable visualization in the run.sh file (unset --visualization-flag) or provide random images as the ground-truth segmentation. Hope it helps.
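For the second workaround (supplying placeholder ground-truth segmentation so the pipeline can still run), here is a minimal sketch of what that could look like. It assumes the code reads one binary mask per image as a saved NumPy array; the file naming convention (`<image_id>_mask.npy`) and the use of `.npy` files are assumptions for illustration, not taken from this repo.

```python
import os
import numpy as np

def make_placeholder_masks(image_shape, out_dir, image_ids):
    """Write all-zero 'segmentation' masks so the visualization code has
    something to load; they carry no real lesion information."""
    os.makedirs(out_dir, exist_ok=True)
    for image_id in image_ids:
        # An all-zero mask is safer than random noise: the overlay then
        # simply shows no ground-truth lesion region.
        mask = np.zeros(image_shape, dtype=np.uint8)
        np.save(os.path.join(out_dir, f"{image_id}_mask.npy"), mask)
    return len(image_ids)
```

If your pipeline expects a different format (e.g. PNG masks), the same idea applies: emit empty masks of the right shape for every image.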
Hi Yiqiu,
Thanks very much for your clarification. Now I understand how this code works.
Currently, we have a mammography image set containing only image-level labels (cancer vs. no cancer) and coarse malignant-lesion annotations on the cancer images (no benign annotations). If we want to fine-tune your pretrained models on our own dataset for image-level cancer vs. no-cancer classification, do you have any suggestions?
Hi Steve, here is what I would do:
Hope this helps :)
Hi Yiqiu, thanks for your valuable advice. Actually, I cannot find which loss function is used in the current code. It would be greatly appreciated if there were a chance to obtain the training code. In order to fine-tune the pretrained model, should I tune the global, local, and fusion modules simultaneously, or just the fusion module? Since each module may affect the performance of the final model, what is the proper way/order to tune the model, and which parameters are particularly important during fine-tuning? Any suggestion is more than welcome. Thank you.
Hi, according to your paper, it seems that training and inference of this model only require image-level labels, with no need for annotations of malignant and benign lesions. However, in the code in this repo, segmentation paths for malignant and benign lesions are required to run the code. If I don't have segmentation masks of malignant and benign lesions, how can I train and test your model on my own images? Looking forward to your response.