Some questions about released code #19

Open
YeRen123455 opened this issue Dec 29, 2021 · 6 comments
Comments

@YeRen123455

@jbeomlee93! Sorry for disturbing you again. I still have two questions about the released code.

(1) In obtain_CAM_masking_super_pixel.py, since you use grad-cam to generate the class activation map (i.e., CAM), why don't you use resnet50.py with grad-cam to generate the outputs? In the released code you actually use resnet50_cam.py with grad-cam to generate the outputs.

(2) Could you share the code of "SEAM+AdvCAM" with me? I tried to reproduce it myself, but the performance is not as good as yours. My email address is liboyang20@nudt.edu.cn.

@YeRen123455
Author

Hi @jbeomlee93!
I tried to reproduce "SEAM+AdvCAM" myself but only achieved 50.96 mIoU. I think the gap (50.96 vs. 58.6) is caused by an incorrect experimental setting on my side. I used the following settings for "SEAM+AdvCAM":
(1) Since the output of SEAM has 21 classes, I removed the "background" class when attacking the images.
(2) I used the layer "f9" in SEAM as the target layer to generate the CAM.
Maybe you used some special settings for it. I want to reproduce it.
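The two settings above could be wired together roughly as follows. This is only an illustrative sketch: the names seam_model, its "f9" module, and a single (1, 21, H, W) output with the background in channel 0 are assumptions for this example, not the actual interfaces of SEAM or this repository.

    import torch.nn.functional as F

    def seam_grad_cam(seam_model, image, target_fg_class):
        feats, grads = {}, {}
        layer = seam_model.f9                                   # assumed target layer

        def fwd_hook(module, inputs, output):
            feats["f9"] = output

        def bwd_hook(module, grad_input, grad_output):
            grads["f9"] = grad_output[0]

        h1 = layer.register_forward_hook(fwd_hook)
        h2 = layer.register_full_backward_hook(bwd_hook)        # PyTorch >= 1.8

        logits = seam_model(image)                              # assumed shape (1, 21, H, W)
        fg_logits = logits[:, 1:]                               # drop the background channel
        score = fg_logits.mean(dim=(2, 3))[0, target_fg_class]  # GAP score of one foreground class
        seam_model.zero_grad()
        score.backward()
        h1.remove(); h2.remove()

        weights = grads["f9"].mean(dim=(2, 3), keepdim=True)    # channel-wise Grad-CAM weights
        cam = F.relu((weights * feats["f9"]).sum(dim=1))        # weighted sum of "f9" features
        return cam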

@jbeomlee93
Owner

Hi @YeRen123455, sorry for the late reply.

(1) "resnet50.py" only contains the layer definitions of ResNet-50; the actual architectures for classification and CAM generation are in "resnet50_cam.py". I followed the default configuration of IRN (https://github.com/jiwoon-ahn/irn).

(2) Regarding "SEAM+AdvCAM", you can refer to these issues: #7 (comment), #8 (comment). I think you should change the values of the hyper-parameters.

Thanks.

@YeRen123455
Author

@jbeomlee93 Thanks for your reply. I will try it again.

@YeRen123455
Author

@jbeomlee93 Since the CAM output of SEAM has 21 classes (20 foreground classes + background), while the default number of classes in your adv_cam is 20, do you also attack the background in "SEAM+AdvCAM"?

@jbeomlee93
Owner

No, I apply adversarial climbing only to the (ground-truth) foreground labels.
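A minimal sketch of what "foreground only" could look like in code, assuming a model whose forward(image, separate=True) returns a (1, 21, H, W) CAM tensor with the background in channel 0 and a 20-dimensional multi-hot image label. The update rule, step size, and iteration count are placeholders, not the repository's actual values:

    import torch

    def climb_foreground_only(model, image, gt_label, step_size=0.01, n_iter=10):
        # gt_label: multi-hot vector of length 20 over the foreground classes only
        fg_classes = torch.nonzero(gt_label, as_tuple=False).flatten()
        adv = image.clone()
        for _ in range(n_iter):
            adv = adv.detach().requires_grad_(True)
            cam = model(adv, separate=True)            # assumed (1, 21, H, W), background at index 0
            fg_scores = cam[:, 1:].mean(dim=(2, 3))    # GAP over the 20 foreground channels
            # climb only the ground-truth foreground scores; the background is never attacked
            model.zero_grad()
            fg_scores[0, fg_classes].sum().backward()
            adv = adv + step_size * adv.grad / (adv.grad.norm() + 1e-8)
        return adv.detach()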

@YeRen123455
Author

Hi @jbeomlee93,
Thanks for your reply. I followed your suggestions in #7 (comment) and #8 (comment) and retrained "SEAM+Adv_cam", but the performance is even worse. I changed four places in the code to reproduce "SEAM+Adv_cam". The changes are as follows:

[1] I followed your reply in #7 (comment) and set the masking threshold $T$ and the regularization coefficient $\lambda$ to 8 and 2, respectively (a rough sketch of how these two values enter the loss is given after item [4]). That is, in "obtain_CAM_masking.py":

    parser.add_argument("--AD_coeff", default=2, type=int)
    parser.add_argument("--score_th", default=8, type=float)

[2] I tried to follow your suggestion in #8 (comment) that the adversarial climbing is done on the logit, GAP(cam), before up-sampling and before the PCM module. This means I should return the cam value from the forward function defined in "resnet38_SEAM.py". That is:

    # in "resnet38_SEAM.py"
    def forward(self, x, separate=False):
        N, C, H, W = x.size()
        d = super().forward_as_dict(x)
        cam = self.fc8(self.dropout7(d['conv6']))
        if separate:
            return cam

[3] At the same time, I should also accumulate the localization maps (Equation 4) using the CAM after the PCM module. This means I should change the code of "resnet38_SEAM.py" to:

    # in "resnet38_SEAM.py"
    def forward(self, x, separate=False):
        N, C, H, W = x.size()
        d = super().forward_as_dict(x)
        cam = self.fc8(self.dropout7(d['conv6']))
        ............
        ............
        cam_rv_1 = self.PCM(cam_d_norm, f)
        if separate:
            return cam_rv_1

However, steps [2] and [3] are contradictory: I can only return either cam or cam_rv_1. Otherwise I cannot get the value of "regions" in your "obtain_CAM_masking.py" via grad-cam (because I cannot get the model's gradient if I return both cam and cam_rv_1). A possible workaround is sketched at the end of this comment.

[4] Since the CAM output of SEAM has 21 classes (20 foreground classes + background), while the default number of classes in your adv_cam is 20, I changed "gradCAM.py" as follows:

    # in "gradCAM.py"
    self.logits = self.model(image, separate=True)[:, 1:, :, :]
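As referenced in item [1], here is a rough sketch of how a masking threshold $T$ (score_th) and a regularization coefficient $\lambda$ (AD_coeff) are typically combined in AdvCAM-style adversarial climbing, based on my reading of the AdvCAM paper. The function and variable names are hypothetical and this is not the repository's implementation:

    import torch

    def regularized_climbing_loss(cam_adv, cam_orig, gt_label, lam=2.0, score_th=8.0):
        # cam_adv / cam_orig: (1, 21, H, W) CAM logits of the manipulated and original images
        # gt_label: multi-hot vector of length 20 over the foreground classes
        fg = torch.nonzero(gt_label, as_tuple=False).flatten()
        scores = cam_adv[:, 1:].mean(dim=(2, 3))[0]               # GAP class scores (foreground only)
        other = torch.ones_like(scores, dtype=torch.bool)
        other[fg] = False
        # raise the ground-truth class scores while suppressing the other classes
        attack = scores[fg].sum() - scores[other].sum()
        # restricting mask: regions whose attribution already exceeds the threshold T (score_th)
        mask = (cam_adv.detach() > score_th).float()
        reg = (mask * (cam_adv - cam_orig).abs()).sum()
        # the coefficient lambda (AD_coeff) trades off climbing against the regularization
        return attack - lam * reg                                 # maximize during adversarial climbing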

The above changes do not allow me to successfully reproduce SA. Could you please help me check these changes? I really want to follow up on your work! Thanks a lot!
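One possible way to reconcile steps [2] and [3], sketched only: return both tensors from forward() and select the one needed at each call site. The names follow the snippets above; this is not SEAM's or the repository's actual code, and the elided lines stand for the intermediate SEAM computations.

    # in "resnet38_SEAM.py" (sketch)
    def forward(self, x, separate=False):
        N, C, H, W = x.size()
        d = super().forward_as_dict(x)
        cam = self.fc8(self.dropout7(d['conv6']))
        # ... intermediate SEAM computations as in the original forward ...
        cam_rv_1 = self.PCM(cam_d_norm, f)
        if separate:
            # both tensors live in the same autograd graph, so Grad-CAM gradients
            # can still flow through `cam` even though `cam_rv_1` is also returned
            return cam, cam_rv_1

    # caller side, e.g. inside a Grad-CAM wrapper (hypothetical attribute names):
    cam, cam_rv_1 = self.model(image, separate=True)
    self.logits = cam[:, 1:, :, :]     # adversarial climbing on the pre-PCM logits (step [2])
    localization_map = cam_rv_1        # accumulate localization maps after the PCM module (step [3])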
