
Conversation

vivekmig
Contributor

Creating Captum v0.3.1 as a patch release with recent minor updates. Includes all changes since v0.3.0 except LRP.

vivekmig and others added 20 commits January 21, 2021 12:23
Summary:
Pull Request resolved: meta-pytorch#528

Allows output to be on a different device than input by moving output difference to input device.

Reviewed By: miguelmartin75

Differential Revision: D25001273

fbshipit-source-id: a9b6d8e8bb585d5360c53272a5502f4e8f257459
Summary:
I pasted the README into a Google Doc to see if the spelling and grammar check would find anything. It was able to find a few mistakes that I've now corrected.

Pull Request resolved: meta-pytorch#542

Reviewed By: vivekmig

Differential Revision: D25221538

Pulled By: NarineK

fbshipit-source-id: c53358b507c5edd6e2f2cd8e7a147aab4d7dbdaf
Summary:
Adding multiple layer support in LayerIntegratedGradients

Added two test cases. Both compare output to regular IG. One patches multiple embedding layers, the other patches layers of a new model. Added a new model due to existing models not having layers/sub-modules accepting multiple arguments.

Updated documentation.
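
A minimal sketch of the multi-layer usage described above. The toy model and tensor values here are hypothetical, added only for illustration:

```python
import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients

# Hypothetical toy model with two independent embedding sub-modules.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb1 = nn.Embedding(10, 4)
        self.emb2 = nn.Embedding(10, 4)
        self.linear = nn.Linear(8, 2)

    def forward(self, idx1, idx2):
        return self.linear(torch.cat([self.emb1(idx1), self.emb2(idx2)], dim=1))

model = ToyModel()
# Passing a list of layers attributes to all of them in a single call.
lig = LayerIntegratedGradients(model, [model.emb1, model.emb2])
attrs = lig.attribute(
    (torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])), target=0
)
# attrs holds one attribution tensor per layer in the list.
```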

Reviewed By: vivekmig

Differential Revision: D25042339

fbshipit-source-id: cdfb6c03040fa9e049697d6b54765d27c4d0287b

Summary:
This PR adds the ability to compare multiple models in Captum Insights.

![Screenshot of model comparison](https://user-images.githubusercontent.com/13208038/101406612-869ed600-388e-11eb-9520-62797a9ae3db.png)

In order to test this, I went through two scenarios. First, I made sure there are no regressions to single model workflows like this:
1. Start the Insights example with `python3 -m captum.insights.example`
2. Ensure that the original functionality is still working and there are no changes, other than the visual changes for column headers

Then, I tested comparing multiple models by duplicating the existing example model:

1. Go to `example.py`
2. Duplicate the example model by changing `models=[model]` to `models=[model, model, model]`
3. Check to make sure that it renders properly, and that selecting different target classes properly updates the data for each visualization (a minimal setup sketch follows this list)
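
For reference, a minimal sketch of passing several models to the Insights visualizer. The tiny models, classes, and random data are placeholders, and the import paths follow recent Captum versions, so treat this as an assumption-laden outline rather than the canonical example:

```python
import torch
import torch.nn as nn
from captum.insights import AttributionVisualizer, Batch
from captum.insights.attr_vis.features import ImageFeature

# Two placeholder models to compare side by side.
model_a = nn.Sequential(nn.Conv2d(3, 4, 3), nn.Flatten(), nn.Linear(4 * 30 * 30, 2))
model_b = nn.Sequential(nn.Conv2d(3, 4, 3), nn.Flatten(), nn.Linear(4 * 30 * 30, 2))

def data_iter():
    # Random stand-in data; a real setup would yield batches from a DataLoader.
    for _ in range(2):
        yield Batch(inputs=torch.rand(4, 3, 32, 32), labels=torch.randint(0, 2, (4,)))

visualizer = AttributionVisualizer(
    models=[model_a, model_b],  # multiple models are compared side by side
    score_func=lambda out: torch.softmax(out, dim=1),
    classes=["cat", "dog"],
    features=[ImageFeature("Photo", baseline_transforms=[lambda x: x * 0], input_transforms=[])],
    dataset=data_iter(),
)
visualizer.render()  # or visualizer.serve() outside a notebook
```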

Pull Request resolved: meta-pytorch#551

Reviewed By: edward-io

Differential Revision: D25379744

Pulled By: Reubend

fbshipit-source-id: 4999c1ef0f18b8f735cd47a890cef413a7c6548e
Summary: Pull Request resolved: meta-pytorch#554

Reviewed By: NarineK

Differential Revision: D25418107

Pulled By: edward-io

fbshipit-source-id: 960ed22c5f6845ac9fedff8793196660d6fd5529

Summary:
Related issue: meta-pytorch#544

Adds a flag `return_html` to `visualize_text()` that allows the IPython HTML object to be returned if the flag is set to `True`.

This is useful for cases where users may want to save the output outside of a notebook, etc.

Usage looks like:

```python
from captum.attr import visualization as viz
.... # get attributions and data record

html_obj = viz.visualize_text([score_viz], return_html=True)
```
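
Continuing the snippet above, and assuming the `return_html` flag described here, the returned IPython `HTML` object can then be written to disk, for example:

```python
with open("attributions.html", "w") as f:
    f.write(html_obj.data)  # .data holds the raw HTML markup string
```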

Pull Request resolved: meta-pytorch#548

Reviewed By: vivekmig

Differential Revision: D25424019

Pulled By: bilalsal

fbshipit-source-id: 27a90f2775d90cbc848858fe1b439ddc2855cba4
Summary:
As also mentioned in meta-pytorch#549, we had conflicting argument names in the API. In this PR we deprecate and rename those arguments. More specifically (a before/after sketch follows this list):
1. `n_samples` in NoiseTunnel is being renamed to `nt_samples`
2. `n_perturbed_samples` in Lime and KernelSHAP is being renamed to `n_samples` in order to remain consistent with Shapley Values (Sampling)
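
A small sketch under the renamed arguments, using a toy linear model defined here for illustration; it assumes scikit-learn is available for Lime's default interpretable model:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, Lime, NoiseTunnel

net = nn.Linear(3, 2)
x = torch.randn(1, 3)

# NoiseTunnel: formerly n_samples, now nt_samples.
nt = NoiseTunnel(IntegratedGradients(net))
nt_attr = nt.attribute(x, nt_type="smoothgrad", nt_samples=5, target=0)

# Lime / KernelSHAP: formerly n_perturbed_samples, now n_samples.
lime = Lime(net)
lime_attr = lime.attribute(x, target=0, n_samples=40)
```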

Pull Request resolved: meta-pytorch#558

Reviewed By: edward-io

Differential Revision: D25514136

Pulled By: NarineK

fbshipit-source-id: 142c974da2a8430be234fe4ffc79e36faf2bf8d9
Summary:
Pull Request resolved: meta-pytorch#534

Introduces a utility class called `ModelInputWrapper` to wrap a model in order to treat its inputs as separate layers.

It does so by mapping each input fed to `forward` through an `Identity` operation, so it works whether `attribute_to_inputs` is True or False.

Adds two tests:
- Test whether `_forward_layer_eval` retrieves the appropriate input values
- Compare regular IG with layer IG on the layer-wrapped inputs

Updated the tutorial and documentation.
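
A minimal sketch of the wrapper with a hypothetical two-input model; the import path below is the internal module where the class lives, and it may also be re-exported from `captum.attr`:

```python
import torch
import torch.nn as nn
from captum.attr import LayerIntegratedGradients
from captum.attr._utils.input_layer_wrapper import ModelInputWrapper

# Hypothetical model taking two forward arguments.
class TwoInputModel(nn.Module):
    def forward(self, x, y):
        return (2 * x + y).sum(dim=1, keepdim=True)

wrapped = ModelInputWrapper(TwoInputModel())
# Each forward argument is routed through an Identity module,
# accessible by argument name via wrapped.input_maps.
lig = LayerIntegratedGradients(wrapped, [wrapped.input_maps["x"], wrapped.input_maps["y"]])
attrs = lig.attribute((torch.randn(2, 3), torch.randn(2, 3)), target=0)
# attrs holds one attribution tensor per wrapped input.
```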

Reviewed By: NarineK

Differential Revision: D25110896

fbshipit-source-id: bb8dd4947ae88e183af94c09cf906f9687fbe8ff
Summary:
Fixes GPU test failures on master by upgrading pip version.

Pull Request resolved: meta-pytorch#568

Reviewed By: bilalsal

Differential Revision: D25686863

Pulled By: vivekmig

fbshipit-source-id: c1c860fb0666fb529e1d0b4462acaca5e5cea6b6
Summary:
Pull Request resolved: meta-pytorch#570

This makes Lime work appropriately with int / long features; previously, inputs only worked appropriately with float features.

Reviewed By: bilalsal

Differential Revision: D25693888

fbshipit-source-id: b96477f8c6805f554b324ffadbb00e971c12051f
Summary:
Pull Request resolved: meta-pytorch#569

Updates the assertion to a warning when using sklearn versions below 0.23.0.

Reviewed By: bilalsal

Differential Revision: D25686004

fbshipit-source-id: c5ae1aec5361716ed866cb8d0b25090b64e83926
Summary:
Adding support for batch_size in NoiseTunnel as proposed in: meta-pytorch#497
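
A short sketch of batched noise sampling; `nt_samples_batch_size` is the parameter name in current Captum releases and is assumed here to be the one introduced by this change:

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, NoiseTunnel

net = nn.Linear(3, 2)
nt = NoiseTunnel(IntegratedGradients(net))
# Draw 50 noisy samples, but run them through the model 10 at a time.
attr = nt.attribute(
    torch.randn(1, 3),
    nt_type="smoothgrad",
    nt_samples=50,
    nt_samples_batch_size=10,
    target=0,
)
```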

Pull Request resolved: meta-pytorch#555

Reviewed By: vivekmig

Differential Revision: D25700056

Pulled By: NarineK

fbshipit-source-id: ea34899035486798b1cf3c49ce850291d1f1e76c

Summary:
As I understand, Captum uses Python 3, which means that classes don't need to inherit from `object`, as this is done implicitly already.

Pull Request resolved: meta-pytorch#575

Reviewed By: vivekmig, bilalsal

Differential Revision: D25716435

Pulled By: NarineK

fbshipit-source-id: 4983e375dfc81a6c03388b009778f90b9ca34a6d
Summary:
As raised in issue meta-pytorch#564, this adds a solution for it to the FAQ.

Pull Request resolved: meta-pytorch#576

Reviewed By: edward-io

Differential Revision: D25725280

Pulled By: NarineK

fbshipit-source-id: 376fc3c98f4cc742242842bb2a1e8df1828d7b4d
Summary:
Adding the DLRM tutorial and the KDD presentation slides related to it.
The actual model is 2.1GB, which is pretty big. Git allows 100MB maximum. Either we need to override the max size or put the model elsewhere.

Update: storing the model on AWS S3.

Pull Request resolved: meta-pytorch#531

Reviewed By: vivekmig

Differential Revision: D25730733

Pulled By: NarineK

fbshipit-source-id: 318c7b606f3fb98d245d9a381fc92c4e21819209
Summary:
- Clean up some JS warnings
- Remove unpkg
- Switch to plotlyjs-basic-dist-min

Pull Request resolved: meta-pytorch#556

Reviewed By: NarineK

Differential Revision: D25506078

Pulled By: edward-io

fbshipit-source-id: f133975579c1ab3376f243499aca6520aafbb568
Summary: Pull Request resolved: meta-pytorch#588

Reviewed By: miguelmartin75

Differential Revision: D25920600

Pulled By: bilalsal

fbshipit-source-id: 6e99e4da74e3ffb4a3a4cc21d39e1115ef9d7938
Summary:
This commit reduces the size of Captum Insights by

- Replacing the old graphing library with a more lightweight one
- In the standalone app, using compression in the Flask server
- In the notebook extension, excluding unused dependencies-of-dependencies

![Graph screenshot](https://user-images.githubusercontent.com/13208038/102558400-31c74080-4082-11eb-93b2-9b5c474fa0ae.png)

For the standalone app, it reduces the size significantly:

![size comparison](https://user-images.githubusercontent.com/13208038/102558239-de54f280-4081-11eb-9718-24b9174d408b.png)

For the notebook extension, there's a similar size reduction of `index.js` from 1090 KB to 449 KB.

Testing: I used `titanic.py` to test this change, making sure that the graphs work as before and that the other functionality is unaffected.

Pull Request resolved: meta-pytorch#562

Reviewed By: edward-io

Differential Revision: D25628623

Pulled By: Reubend

fbshipit-source-id: ef8a0d9ec8c7e0df6955b69dd7a96656defc37e8
Summary:
CircleCI conda tests are currently failing due to the missing dependency flask-compress; this adds flask-compress to the conda test setup.

Pull Request resolved: meta-pytorch#589

Reviewed By: Reubend

Differential Revision: D25962860

Pulled By: vivekmig

fbshipit-source-id: e66a79f4598bf9392566f90b874b5cfc8162f90c
@bilalsal bilalsal self-requested a review January 21, 2021 21:58
@vivekmig vivekmig merged commit ea1461b into meta-pytorch:v0.3.1 Jan 21, 2021