
Why not scale values when attribution values are smaller than 1e-5? #393

Closed
andreimargeloiu opened this issue May 29, 2020 · 8 comments

@andreimargeloiu

When displaying the attributions, you normalise and scale the values.

However, why do you skip normalising if the scaling factor (which is the max value after removing the outliers) is below 1e-5?

def _normalize_scale(attr: ndarray, scale_factor: float):
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by value approximately 0, skipping normalization."
            "This likely means that attribution values are all close to 0."
        )
    ...
@andreimargeloiu
Author

Is there any update on this?

@vivekmig
Contributor

vivekmig commented Jun 3, 2020

Hi @margiki, we have this check to catch instances where attribution values are all approximately 0, and to avoid cases where the user could be misled by visual artifacts in the attribution maps that magnify small magnitude differences (e.g. noise or floating point error) when normalizing. By not normalizing in these cases, the visualization indicates that the values are all approximately 0, and if outliers exist, they are particularly salient.

Do you have a use-case where normalization below this magnitude is meaningful? You can also alternatively normalize attributions prior to calling visualize_image_attr and set the outlier_perc argument to 0.
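For reference, a minimal sketch of that workaround, assuming attr is an (H, W, C) NumPy array of attributions; the variable names and the choice to rescale by the maximum absolute value are illustrative, not taken from Captum itself:

import numpy as np
from captum.attr import visualization as viz

# attr: attributions as an (H, W, C) NumPy array, e.g. obtained via
# attributions.squeeze().permute(1, 2, 0).detach().cpu().numpy()

# Pre-normalize so the maximum absolute value is 1 ...
max_abs = np.abs(attr).max()
attr_scaled = attr / max_abs if max_abs > 0 else attr

# ... then disable outlier clipping so Captum's internal rescaling
# becomes a no-op and the 1e-5 check is not triggered.
fig, ax = viz.visualize_image_attr(
    attr_scaled,
    method="heat_map",
    sign="all",
    outlier_perc=0,
)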

@andreimargeloiu
Author

Use-case
My use-case is interpreting robust models, i.e. models trained with adversarial training [1] on adversarially perturbed inputs.

On robust models, the gradients with respect to the input are very small (see the picture below, where the x axis represents the attributions before rescaling). Notice that the range is around 1e-3. With SmoothGrad, the gradients are around 1e-5 to 1e-6, which creates issues with Captum.

[Image: histogram of the attribution values before rescaling]
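To illustrate how such small attributions can arise, a sketch along these lines; the model, input tensor, target label, and parameter values are placeholders, and the NoiseTunnel argument names may differ slightly between Captum versions (e.g. nt_samples vs n_samples):

import torch
from captum.attr import Saliency, NoiseTunnel

# model: a robust (adversarially trained) classifier
# input_img: a single image tensor of shape (1, C, H, W)
saliency = Saliency(model)
smoothgrad = NoiseTunnel(saliency)

attributions = smoothgrad.attribute(
    input_img,            # placeholder input tensor
    nt_type="smoothgrad",
    nt_samples=25,        # number of noisy samples to average over
    stdevs=0.1,
    target=target_class,  # placeholder target label
)

# On robust models this maximum can be on the order of 1e-5 to 1e-6,
# which is what trips the normalization check.
print(attributions.abs().max().item())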

Issue with current warning
For people investigating interpretability on robust models, it's essential to be able to plot the attributions, despite potential errors associated with floating-point arithmetic.

In Jupyter this warning wasn't printed, so it took me hours of digging into Captum to understand why the saliency map was essentially white (the attributions weren't scaled).

Potential solution:
It would be good to allow power users to bypass this behaviour (e.g. through a parameter), or simply to disable the check.

[1] https://arxiv.org/pdf/1706.06083.pdf

@bilalsal
Contributor

bilalsal commented Jun 9, 2020

Thank you very much @margiki for the useful insights.
Indeed, we need to give users the choice instead of a warning that can easily go unnoticed.
We will plan this for the next release.

@andreimargeloiu
Author

Awesome! Maybe the best approach for users is to do the scaling anyway, and emit a warning if the values are small.

What do you think? I'm happy to make a pull request.

@vivekmig
Contributor

vivekmig commented Jun 9, 2020

@margiki Thanks for the details on your use case, makes sense! I agree, the cleanest solution is probably just to do the scaling regardless and update the warning message accordingly. If you want to make the pull request with the change, that would be great, thanks!
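A minimal sketch of what that change could look like, i.e. always scale and only warn for small scale factors; this is an illustration of the proposal, not necessarily the exact code that was eventually merged:

import warnings

import numpy as np
from numpy import ndarray


def _normalize_scale(attr: ndarray, scale_factor: float):
    # Only a true zero makes scaling impossible.
    assert scale_factor != 0, "Cannot normalize by scale factor = 0"
    if abs(scale_factor) < 1e-5:
        warnings.warn(
            "Attempting to normalize by a value approximately 0; visualized "
            "results may be misleading. This likely means that attribution "
            "values are all close to 0."
        )
    # Scale regardless of magnitude and clip to the [-1, 1] display range.
    attr_norm = attr / scale_factor
    return np.clip(attr_norm, -1, 1)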

@NarineK
Contributor

NarineK commented Aug 22, 2020

@margiki, @vivekmig, do you still want to work on the PR? Can we close this issue?

@andreimargeloiu
Author

Thank you for the heads up! @vivekmig, please go ahead as you initially proposed and plan this change for a future release :)

facebook-github-bot pushed a commit that referenced this issue Aug 26, 2020
Summary:
Addresses issue #393 , continue scaling attributions with small magnitude with a warning, only asserting when scale factor is 0.

Pull Request resolved: #458

Reviewed By: bilalsal

Differential Revision: D23347489

Pulled By: vivekmig

fbshipit-source-id: 816a0ca98119a4fe7726325fcbd63dd0ce21f3c6
NarineK pushed a commit to NarineK/captum-1 that referenced this issue Nov 19, 2020