
Saliency for regression task #15

Closed · ClaudioCimarelli opened this issue Mar 18, 2019 · 4 comments

@ClaudioCimarelli

Dear authors,
first of all, thanks for publishing this code and for your amazing paper.
I would like to use your technique to inspect the behavior of a neural network used for 6DoF pose regression from a single RGB image. In particular, I would like to gain visual insight into which pixels the network considers most important for the localization task.
Your paper, as well as the others it cites, focuses on the classification problem. Hence, I was wondering how to adapt this visualization technique to the case of regression.

Thanks in advance for your help.

@nsthorat (Collaborator)

On a regression task it should be the same technique.

In the classification task we choose a "neuron", which is actually just one dimension of the logits vector, to compute gradients with respect to. Intuitively this says "if we look at a single class, let's say a deer, and we wiggle the values of a fixed image X, how much do they each affect the prediction of that deer?".

In a regression task, the continuous value your model predicts can be used in the same way. The same reasoning holds: "if we look at the value we're predicting and I wiggle the pixels of a fixed image X, how does each of those pixels affect the output value?"

I think this code can be used out of the box; y in your case would be the predicted value: https://github.com/PAIR-code/saliency/blob/master/saliency/base.py#L85
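
Roughly, as a hedged sketch in TF1-style graph code (build_model here is just a stand-in for your own pose network, and the shapes are placeholders):

import tensorflow as tf  # TF1-style graph mode assumed

# Hypothetical graph: `images` is the input placeholder, `prediction`
# is your pose network's [batch, n] output.
images = tf.placeholder(tf.float32, [None, 224, 224, 3])
prediction = build_model(images)

# Use the value you care about as "y" -- here one pose component --
# and take its gradient w.r.t. the input pixels, the same way base.py
# does for a class logit.
y = prediction[:, 0]
saliency_map = tf.gradients(y, images)[0]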

@ClaudioCimarelli (Author) commented Mar 18, 2019

@nsthorat thanks for the quick reply and the clarification. My doubt is more related to the calculation of the gradient.
More precisely, in the case of classification (as you noted), the target is one-dimensional, so the gradient is one value per input dimension. In the case of regression, instead, I have n values (one for each dimension of the output).
Now that you've pointed me to the code, I see that you use tf.gradients(), which sums along the output dimensions; is this the correct way? I don't know if it's possible to take the gradient of each dimension separately and take a weighted average, for example...
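
For concreteness, here is a tiny TF1-style snippet of the summing behavior I mean (toy shapes, just to illustrate):

import tensorflow as tf  # TF1-style graph mode assumed

x = tf.placeholder(tf.float32, [3])
y = tf.stack([2.0 * x[0], 3.0 * x[1]])  # two output dimensions
g = tf.gradients(y, x)[0]               # gradient of sum(y) = 2*x0 + 3*x1

with tf.Session() as sess:
    print(sess.run(g, {x: [1.0, 1.0, 1.0]}))  # -> [2., 3., 0.]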

@nsthorat (Collaborator)

Well, it depends on what you're looking for. You could take the sum, which would tell you which pixels all of the outputs together are most sensitive to, or you could focus on a single one of the output dimensions.

You could take the gradient of each and do a weighted average -- or you could take the weighted average of the output values and use that as your y node. I would try both and see what happens :)
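
In code, the two options would look something like this (hedged sketch; output is your [batch, n] pose prediction, and k and weights are placeholders you'd pick):

# (a) saliency for a single output dimension k:
grad_k = tf.gradients(output[:, k], images)[0]

# (b) a weighted average of the outputs used directly as the y node;
# `weights` is a hypothetical length-n vector of your choosing.
y_weighted = tf.reduce_sum(output * weights, axis=1)
grad_weighted = tf.gradients(y_weighted, images)[0]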

amilkh added a commit to amilkh/cs230-fer that referenced this issue Mar 1, 2020
@bwedin bwedin closed this as completed Jul 28, 2021
@Aishuvenkat09

Beautiful work. Thank you. I have a question about adapting your work to my use case.
I have a custom model that takes gender and an image as input and predicts age (image regression). I'm interested in improving performance using Guided Integrated Gradients, and also in generating saliency maps to see which features my current model focuses on.
How can I change the call_model_function to give me the regression score (age, in this case)?
I have changed the output layer to a ReLU. I see the output tensors, but I'm interested in generating mean absolute error values (nn.L1Loss).


import torch
import saliency.core as saliency  # import style from the library's PyTorch example notebook

class_idx_str = 'class_idx_str'   # key convention from the example notebooks
target_label = data[0]['label']   # 190

def call_model_function(images, call_model_args=None, expected_keys=None):
    # PreprocessImages must return a float tensor with requires_grad=True,
    # otherwise torch.autograd.grad below will fail.
    images = PreprocessImages(images)
    target_class_idx = call_model_args[class_idx_str]
    output = model(images)
    output = torch.nn.functional.relu(output)  # keep predictions non-negative

    if saliency.base.INPUT_OUTPUT_GRADIENTS in expected_keys:
        # Gradient of the selected output column w.r.t. the input pixels.
        outputs = output[:, target_class_idx]
        grads = torch.autograd.grad(outputs, images,
                                    grad_outputs=torch.ones_like(outputs))
        grads = torch.movedim(grads[0], 1, 3)  # NCHW -> NHWC, as the library expects
        gradients = grads.detach().numpy()
        return {saliency.base.INPUT_OUTPUT_GRADIENTS: gradients}
    else:
        # Grad-CAM-style path: backprop a one-hot vector and return the
        # activations/gradients captured by hooks on the conv layer
        # (conv_layer_outputs is filled in by those hooks).
        one_hot = torch.zeros_like(output)
        one_hot[:, target_class_idx] = 1
        model.zero_grad()
        output.backward(gradient=one_hot, retain_graph=True)
        return conv_layer_outputs
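
For the regression score itself, would something like this be the right direction? (A sketch assuming a single-output age head, so the score is just column 0 and no class index is needed:)

def call_model_function_regression(images, call_model_args=None, expected_keys=None):
    images = PreprocessImages(images)      # must return a tensor with requires_grad=True
    output = model(images)                 # shape [batch, 1]: predicted age
    outputs = output[:, 0]                 # the regression score per example
    grads = torch.autograd.grad(outputs, images,
                                grad_outputs=torch.ones_like(outputs))
    grads = torch.movedim(grads[0], 1, 3)  # NCHW -> NHWC for the library
    return {saliency.base.INPUT_OUTPUT_GRADIENTS: grads.detach().numpy()}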

Any help is appreciated :)
