Question about the classification loss #15
Hi @NeuZhangQiang ,
Dear @YudeWang, what do you mean by "train a segmentation model on these pseudo labels in fully supervised manner"? Do you mean: use the output of CAM or PCM as the input image, and the manually labeled mask as the target, to train a model (such as U-Net)? But how can we obtain the manually labeled masks, since SEAM is designed for weakly supervised segmentation? In addition, the paper said:
Does it mean: mask = CAM > threshold? In addition, in the figure in the paper, the Cls Loss is calculated using the feature from CAM. However, the code in train_SEAM.py is:
In this code, the classification loss is calculated using the feature from PCM (cam_rv1). That also makes me a little confused.
@NeuZhangQiang As for the cls loss, the code you've given shows that the cls loss is calculated by
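For readers following along, here is a minimal numpy sketch of the general recipe behind a CAM-based classification loss: spatially pool the class activation maps into per-class scores, then apply a multi-label loss against the image-level labels. The names `cam_rv1`, `multilabel_bce`, and the shapes are illustrative assumptions, not the repository's actual code (the real train_SEAM.py uses PyTorch ops).

```python
import numpy as np

def multilabel_bce(logits, labels):
    # Multi-label binary cross-entropy with logits, similar in spirit
    # to PyTorch's multilabel_soft_margin_loss (independent sigmoid per class).
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-7
    return -np.mean(labels * np.log(probs + eps)
                    + (1 - labels) * np.log(1 - probs + eps))

# Hypothetical refined CAM from the PCM branch: shape (C, H, W).
rng = np.random.default_rng(0)
cam_rv1 = rng.normal(size=(20, 8, 8))

# Image-level multi-hot labels: this image contains classes 3 and 7.
image_labels = np.zeros(20)
image_labels[[3, 7]] = 1.0

# Global average pooling collapses the (C, H, W) maps into C class scores,
# so an image-level loss can supervise a pixel-level map.
class_scores = cam_rv1.mean(axis=(1, 2))
loss = multilabel_bce(class_scores, image_labels)
```

The key point is that no pixel-level ground truth is needed: pooling reduces the map to one score per class, which the image-level tags can supervise directly.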
SEAM is really excellent work. After reading the paper, I have a question:
how do we get the final segmentation mask? In my understanding, SEAM finally outputs a CAM map, and then a random walk is used to segment the final mask. Am I right?
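As a rough illustration of the intermediate step being asked about, here is a hedged numpy sketch of the common CAM-to-pseudo-mask conversion (per-class normalization, a constant background score, then a pixel-wise argmax). This is a generic simplification, not the exact SEAM pipeline, which additionally refines the result with an AffinityNet-style random walk; the threshold value is an assumption.

```python
import numpy as np

def cam_to_pseudo_mask(cam, bg_threshold=0.2):
    # cam: (C, H, W) activation maps for the C foreground classes,
    # already restricted to the classes present in the image-level labels.
    # Normalize each class map to [0, 1].
    cam = cam - cam.min(axis=(1, 2), keepdims=True)
    cam = cam / (cam.max(axis=(1, 2), keepdims=True) + 1e-7)
    # Prepend a constant background score; pixels whose strongest
    # foreground activation falls below it become background (label 0).
    bg = np.full((1,) + cam.shape[1:], bg_threshold)
    full = np.concatenate([bg, cam], axis=0)
    return full.argmax(axis=0)  # (H, W) pseudo label map, 0 = background

# Hypothetical usage on a random 3-class CAM.
mask = cam_to_pseudo_mask(np.random.default_rng(0).normal(size=(3, 4, 4)))
```

So yes, roughly: CAM gives a coarse per-class heat map, thresholding turns it into a pseudo mask, and the random walk then propagates labels along learned pixel affinities to sharpen boundaries.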
How is the classification loss calculated? For example, the final output is
and we can also calculate the background as:
but how can we use these two results to calculate the loss? How can we generate the ground truth? Is img(m, n) = c (the true label) the ground truth?
Any suggestion is appreciated!
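On the ground-truth question above: for the classification loss, the ground truth is not a pixel map like img(m, n) = c, but the image-level multi-hot label vector. A common convention in CAM-based code is to derive the background map from the foreground activations rather than supervise it directly. A minimal sketch under those assumptions (20 foreground classes as in PASCAL VOC; the exponent `alpha` is illustrative):

```python
import numpy as np

num_classes = 20

# Ground truth for the classification loss: a multi-hot vector of the
# classes present in the image (here, classes 3 and 7). No pixel-level
# annotation is involved.
labels = np.zeros(num_classes)
labels[[3, 7]] = 1.0

# One common way to form a background channel: high where no foreground
# class is strongly activated, sharpened by an exponent.
rng = np.random.default_rng(1)
cam = rng.uniform(size=(num_classes, 8, 8))  # hypothetical foreground CAMs
alpha = 4.0
bg = (1.0 - cam.max(axis=0)) ** alpha        # (H, W) background score map
```

Because the background channel is derived from the foreground maps, only the foreground class scores need to be compared against the multi-hot vector in the loss.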