Some questions about the performance of CAM (ResNet38d) in this paper #7
Comments
As shown in Tab. 1, the multi-crop test results are 53.93, 54.90, and 57.81 for res38d, r2n101, and scale101, respectively, so scalenet101 works better for pseudo-mask generation. I think it's fine to use the same backbone, and scale101 is recommended at this phase. The reason for the hybrid manner is that I first trained the res38d baseline, and since I only needed a rough mask to avoid noisy labels during multi-crop training, I did not train another model. You could use scale101 to generate the masks for cropping. I don't expect an obvious difference, because the gain from multi-crop training is much smaller for scale101 than for the other backbones.
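The multi-scale CAM testing discussed here can be sketched roughly as follows. This is a generic multi-scale inference scheme, not the actual PMM code: `multi_scale_cam`, the scale set, and the normalization are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def multi_scale_cam(model, image, scales=(0.5, 1.0, 1.5, 2.0)):
    """Average class activation maps computed at several input scales.

    A common multi-scale test scheme (hypothetical helper, not PMM's API):
    run the CAM head at each scale, resize back, ReLU, average, and
    normalize each class map to [0, 1].
    """
    _, _, h, w = image.shape
    cams = []
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        with torch.no_grad():
            cam = model(scaled)  # (1, C, h', w') class activation maps
        cam = F.interpolate(cam, size=(h, w), mode="bilinear",
                            align_corners=False)
        cams.append(F.relu(cam))
    fused = torch.stack(cams).mean(0)
    # per-class max-normalization; the epsilon guards all-zero maps
    fused = fused / (fused.flatten(2).max(-1)[0][..., None, None] + 1e-5)
    return fused
```

Multi-crop testing works analogously, averaging CAMs over overlapping crops instead of (or in addition to) rescaled inputs.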
Hi @Eli-YiLi, thanks for your kind reply!
Yes, the expected mIoU is 56.21 in Tab. 1.
If I then run refine-cam.py, I get 57.32%. Is that right?
What was your result before refine-cam.py? As shown in Table 2, the expected result is 58.21, and after refinement it's 61.49. 57.32 corresponds to SEAM + refinement.
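The refinement step turns the raw CAMs into pseudo segmentation masks. A minimal sketch of one common heuristic (a fixed background threshold plus per-pixel argmax) is below; `cam_to_pseudo_mask` and `bg_threshold` are illustrative names, and the repo's refine-cam.py may use a different scheme (e.g. CRF or affinity-based refinement).

```python
import numpy as np

def cam_to_pseudo_mask(cam, bg_threshold=0.3):
    """Convert normalized CAMs of shape (C, H, W) into an (H, W) pseudo mask.

    Class 0 is background: a pixel is labeled background unless some
    foreground class activation exceeds the fixed threshold (a common
    weakly-supervised-segmentation heuristic, not PMM's exact method).
    """
    c, h, w = cam.shape
    bg = np.full((1, h, w), bg_threshold, dtype=cam.dtype)
    scores = np.concatenate([bg, cam], axis=0)  # (C + 1, H, W)
    return scores.argmax(axis=0)                # per-pixel label map
```

The choice of `bg_threshold` trades precision for recall of foreground regions, which is why a "rough mask" (as mentioned above) is often good enough for the multi-crop training stage.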
Hi @Eli-YiLi, thanks for sharing your nice work!
I notice that you report the CAM result with ResNet38d. However, in your released code, you only use resnet38d to generate CAMs at the multi-scale training stage; you then use scalenet101 as the backbone at the multi-crop stage. So is the CAM result reported for ResNet38d (57.32%) achieved in a hybrid manner (first train with resnet38d, then with scalenet101)? I think training only with resnet38d would be more appropriate.