Use of hooks #18

Closed
wbaik opened this issue Jul 21, 2018 · 3 comments

wbaik commented Jul 21, 2018

Thank you for your awesome code.
I have a quick question about the use of hooks in some of your code.
In src/guided_backprop.py, the hook function returns a tuple.
How is this tuple applied to self.gradients? I tried looking up uses of hook layers, but couldn't find an example that resembles yours. More specifically, when update_relus returns those tuples, where do they go? I am guessing the returned tuple replaces grad_in in relu_hook_function in place and is then passed to hook_function in hook_layers, but I wanted to make sure. Any guidance would be appreciated.
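For reference, here is a minimal sketch of the pattern I am asking about (a simplified, hypothetical version, not your exact code; clamping the gradient is only part of the full guided-backprop rule):

```python
import torch
import torch.nn as nn

# Toy model standing in for the pretrained network used in the repo.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def relu_hook_function(module, grad_in, grad_out):
    # grad_in is a tuple of gradients w.r.t. the module's inputs.
    # Returning a new tuple replaces grad_in for the rest of the
    # backward pass (here: negative gradients are zeroed out).
    return (torch.clamp(grad_in[0], min=0.0),)

# Register the hook on every ReLU, as hook_layers/update_relus seem to do.
for module in model.modules():
    if isinstance(module, nn.ReLU):
        module.register_backward_hook(relu_hook_function)

x = torch.randn(1, 4, requires_grad=True)
model(x).sum().backward()
print(x.grad)  # gradients that flowed through the modified ReLU backward
```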

@utkuozbulak (Owner)

Hello, unfortunately I don't have a good answer for you other than: autograd just expects a tuple. If you change the tuple to a tensor (or anything else), you get the following error:

Traceback (most recent call last):
  File "guided_backprop.py", line 74, in <module>
    guided_grads = GBP.generate_gradients(prep_img, target_class)
  File "guided_backprop.py", line 59, in generate_gradients
    model_output.backward(gradient=one_hot_output)
  File ".../torch/tensor.py", line 93, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File ".../torch/autograd/__init__.py", line 89, in backward
    ... allow_unreachable=True)  # allow_unreachable flag
TypeError: expected tuple, but hook returned 'Tensor'

From there you can trace it to torch/tensor.py and torch/autograd, but in the end I couldn't find a better answer than: it's just how it is. Also, just to clarify, this hook function only affects the backward pass, not the forward pass, so ReLU(x) in the forward pass is unchanged; only the gradients in the backward pass are modified.
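To make the tuple requirement concrete, here is a minimal sketch (simplified, not the code in this repo; bad_hook/good_hook are just illustrative names):

```python
import torch
import torch.nn as nn

relu = nn.ReLU()

# Returning a bare Tensor from a backward hook triggers the TypeError above:
# autograd expects one gradient per element of grad_in, packed in a tuple.
def bad_hook(module, grad_in, grad_out):
    return torch.clamp(grad_in[0], min=0.0)      # Tensor -> TypeError

def good_hook(module, grad_in, grad_out):
    return (torch.clamp(grad_in[0], min=0.0),)   # one-element tuple -> OK

relu.register_backward_hook(good_hook)

x = torch.randn(3, requires_grad=True)
out = relu(x)          # forward pass is completely unchanged
out.sum().backward()   # only the gradients on the way back are modified
print(x.grad)
```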


wbaik commented Jul 22, 2018

Got it, thanks. Your answer led me to read into the Module class in PyTorch, where the register_backward_hook documentation actually explains it all.

One quick question: vanilla_backprop and guided_backprop seem to have a lot in common. Do you think refactoring could be of any use? I'd be happy to do the work if you'd allow it.

wbaik closed this as completed Jul 22, 2018
@utkuozbulak (Owner)

Thanks for the offer, but many of the implementations are quite similar and very simple by design, so that people can grasp the concepts and adapt them to their own needs without digging through hundreds of lines of code.
