# Pathologies of Neural Models Make Interpretations Difficult

This is the code for the 2018 EMNLP paper *Pathologies of Neural Models Make Interpretations Difficult*.

This repository contains the code for input reduction. If you want to apply input reduction to your own task or model, we recommend starting with `rawr.py`, which shows how to compute gradients of the prediction with respect to the input.
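To make that concrete, here is a minimal, self-contained sketch of the gradient computation that input reduction relies on. This is not the authors' exact code: `ToyClassifier` and every name in it are hypothetical stand-ins, but the pattern is the same one used in `rawr.py`: embed the tokens, detach the embeddings into a leaf variable, and backpropagate the predicted class score to get a gradient-based importance value per token.

```python
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L


class ToyClassifier(chainer.Chain):
    """Hypothetical bag-of-embeddings classifier, for illustration only."""

    def __init__(self, n_vocab=100, n_units=16, n_class=2):
        super().__init__()
        with self.init_scope():
            self.embed = L.EmbedID(n_vocab, n_units)
            self.out = L.Linear(n_units, n_class)

    def __call__(self, embeddings):
        # embeddings: (batch, seq_len, n_units) -> logits: (batch, n_class)
        return self.out(F.mean(embeddings, axis=1))


model = ToyClassifier()
token_ids = np.array([[3, 17, 42, 8]], dtype=np.int32)

# Detach the embeddings into a fresh Variable so that backward()
# deposits gradients directly on them.
embeddings = chainer.Variable(model.embed(token_ids).array)

logits = model(embeddings)
score = F.max(logits, axis=1)  # score of the predicted class

model.cleargrads()
score.grad = np.ones_like(score.array)  # seed the backward pass
score.backward()

# Gradient-based importance of each token: dot product of the
# gradient with the embedding itself (one value per token).
importance = (embeddings.grad * embeddings.array).sum(axis=2)
print(importance)  # shape (1, seq_len)
```

Input reduction then greedily removes the token with the smallest importance and repeats while the model's prediction stays unchanged, which yields the short, often nonsensical reduced inputs analyzed in the paper.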

## Dependencies

This code is written in Python using the highly underrated Chainer framework. If you know PyTorch, you will love it =).

Dependencies include:

- Chainer

A portion of the code is built off Chainer's text classification example. See their documentation and code to understand the basic layout of our project.

## References

Please consider citing [1] if you find this code or our work beneficial to your research.

[1] Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer, Pedro Rodriguez, and Jordan Boyd-Graber. Pathologies of Neural Models Make Interpretations Difficult. EMNLP 2018.

```bibtex
@article{feng2018pathologies,
  title={Pathologies of Neural Models Make Interpretations Difficult},
  author={Shi Feng and Eric Wallace and Alvin Grissom II and Mohit Iyyer and Pedro Rodriguez and Jordan Boyd-Graber},
  journal={Empirical Methods in Natural Language Processing},
  year={2018},
}
```

## Contact

For issues with the code or suggested improvements, feel free to open a pull request.

To contact the authors, reach out to Shi Feng (shifeng@cs.umd.edu).