Saliency for regression task #15
On a regression task it should be the same technique. In the classification task we choose a "neuron", which is actually just one dimension of the logits vector, to compute gradients with respect to. Intuitively this says: "if we look at a single class, let's say a deer, and we wiggle the values of a fixed image X, how much does each of them affect the prediction of that deer?". In a regression task, the continuous output value that your model predicts can be used in the same way. The same reasoning holds: "if we look at the value we're predicting and we wiggle the pixels of a fixed image X, how does each of those pixels affect the output value?". I think this code can be used out of the box; y in your case would be the predicted value: https://github.com/PAIR-code/saliency/blob/master/saliency/base.py#L85
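To make the idea concrete outside of TensorFlow, here is a minimal numerical sketch of "wiggle each input and see how the output moves", using central finite differences as a stand-in for the library's `tf.gradients(y, x)`. `toy_model` is a hypothetical placeholder for your regression network, not part of the saliency library:

```python
import numpy as np

def toy_model(x):
    # Hypothetical regression "network": a fixed linear map to one scalar.
    w = np.array([0.5, -2.0, 1.0, 0.0])
    return float(w @ x)

def saliency_map(f, x, eps=1e-4):
    """Approximate d f(x) / d x_i for every input component i via
    central finite differences -- a toy stand-in for autodiff gradients."""
    grad = np.zeros_like(x, dtype=float)
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps   # wiggle component i up
        xm[i] -= eps   # wiggle component i down
        grad[i] = (f(xp) - f(xm)) / (2 * eps)
    return grad

x = np.array([1.0, 2.0, 3.0, 4.0])
print(saliency_map(toy_model, x))  # ≈ [0.5, -2.0, 1.0, 0.0]
```

In practice you would of course use autodiff (the library's `GradientSaliency` does exactly this, just with `y` being your scalar regression output instead of a logit); the finite-difference loop is only to show what the gradient map means.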
@nsthorat thanks for the quick reply and the clarification. My question is more about how the gradient should be calculated.
Well, it depends on what you're looking for. You could take the gradient of the sum of the outputs, which would tell you which pixels all the outputs together are sensitive to, or you could focus on a single one of the output dimensions. You could take the gradient of each output and compute a weighted average of those gradients -- or you could take the weighted average of the output values and use that as your y node. I would try both and see what happens :)
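A small sketch of the two combinations mentioned above, again using finite differences instead of autodiff so it runs standalone. `toy_pose_model` is a hypothetical multi-output regression head (not from the library); note that by linearity of differentiation, the weighted average of the per-output gradients and the gradient of the weighted output coincide:

```python
import numpy as np

def toy_pose_model(x):
    # Hypothetical regression head: maps 3 inputs to 2 output values.
    W = np.array([[1.0, 0.0, -1.0],
                  [2.0, 1.0,  0.0]])
    return W @ x

def jacobian(f, x, eps=1e-4):
    """Finite-difference Jacobian J[k, i] = d f_k(x) / d x_i."""
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        xp, xm = x.copy(), x.copy()
        xp[i] += eps
        xm[i] -= eps
        J[:, i] = (np.atleast_1d(f(xp)) - np.atleast_1d(f(xm))) / (2 * eps)
    return J

x = np.array([1.0, 2.0, 3.0])
w = np.array([0.7, 0.3])  # weights over the output dimensions

J = jacobian(toy_pose_model, x)

# Option 1: weighted average of the per-output gradients.
g1 = w @ J

# Option 2: gradient of the single weighted output y = w . f(x).
g2 = jacobian(lambda z: w @ toy_pose_model(z), x)[0]

print(np.allclose(g1, g2))  # the two options agree for a differentiable model
```

So "weighted average of the gradients" and "gradient of the weighted y node" give the same map; the real choice is between summing over all outputs, picking one output dimension, or choosing the weights w.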
Beautiful work. Thank you. I have a question regarding adapting your work to my use case.
Any help is appreciated :)
Dear authors,
first of all, thanks for publishing this code and for your amazing paper.
I would like to use your technique to inspect the behavior of a neural network used for 6DoF pose regression from a single RGB image. In particular, I would like visual insight into which pixels the network considers most important for the localization task.
In your paper, as well as in the others you cite, the focus is on the classification problem. Hence, I was wondering how to adapt this visualization technique to the regression case.
Thanks in advance for your help.