Dear Mr. Nishimura,
I have been reading your paper with interest and decided to look into the implementation (thank you very much for providing legible code) of the region backprop as defined in Eq. 5 of your paper "Weakly supervised cell instance segmentation under various conditions". However, I could not find it. The closest I got to a backpropagation-based region proposal method was in guided_model.py around lines 160-180, namely the call img_grad.sum(1).clone().clamp(min=0).cpu().numpy(). My reading of this line is that the gradient used is the gradient w.r.t. the input, but with a ReLU applied to it, which as far as I can see is not the same as in the paper.
Where am I wrong?
Your paper is really interesting and I would really appreciate your clarification. :)
Yours,
Erik
Hi!
I appreciate your interest in my paper.
As you said, img_grad.sum(1).clone().clamp(min=0).cpu().numpy() does not directly implement Eq. 5.
I think the code you are looking for is the guide_relu function.
To implement Eq. 5, I change the backward behaviour of the ReLU operation by replacing its forward with guide_relu, using the following code:
from types import MethodType

import torch.nn as nn

def _patch(self):
    # Swap the forward of every ReLU for guide_relu so that the
    # backward pass follows Eq. 5 instead of the standard ReLU gradient.
    for module in self.modules():
        if isinstance(module, nn.ReLU):
            module._original_forward = module.forward
            module.forward = MethodType(guide_relu, module)
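For illustration, a minimal sketch of the guided backward rule that guide_relu implements might look like this (written with a custom torch.autograd.Function here for clarity; the repository's actual guide_relu may differ in detail):

import torch
import torch.nn.functional as F

class GuidedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Standard ReLU in the forward direction
        ctx.save_for_backward(x)
        return F.relu(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Eq. 5: propagate a gradient only where the forward input was
        # positive AND the incoming gradient is positive
        return grad_output.clamp(min=0) * (x > 0).float()

def guide_relu(self, x):
    return GuidedReLU.apply(x)

With every ReLU patched this way, the gradient that reaches the input image is already guided, so the clamp(min=0) you found in guided_model.py is only a final non-negativity step on the accumulated image gradient, not the implementation of Eq. 5 itself.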