Notes on interpretability
Miller. Explanation in Artificial Intelligence: Insights from the Social Sciences. In AIJ 2018.
- Section 2.6 in Molnar discusses Miller's work.
- Very related; Mittelstadt et al. Explaining Explanations in AI. In *FAT 2019.
Murdoch et al. Interpretable machine learning: definitions, methods, and applications. arxiv 2019.
Barredo Arrieta et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. arxiv 2019.
Guidotti et al. A Survey Of Methods For Explaining Black Box Models. arxiv 2018.
Ras et al. Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges. arxiv 2018.
Gilpin et al. Explaining Explanations: An Overview of Interpretability of Machine Learning. In DSAA 2018.
Kleinberg and Mullainathan. Simplicity Creates Inequity: Implications for Fairness, Stereotypes, and Interpretability. In ACM EC 2019.
Ribera and Lapedriza. Can we do better explanations? A proposal of User-Centered Explainable AI. In *FAT 2019.
Lage et al. An Evaluation of the Human-Interpretability of Explanation. arxiv 2019.
Yang et al. Evaluating Explanation Without Ground Truth in Interpretable Machine Learning. arxiv 2019.
- This paper defines the problem of evaluating explanations and systematically reviews the existing efforts.
- The authors summarize three general aspects of explanation: predictability, fidelity, and persuasibility.
Tomsett et al. Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems. In WHI 2018.
Poursabzi-Sangdeh et al. Manipulating and Measuring Model Interpretability. arxiv 2018.
- This paper found no significant difference in multiple measures of trust when manipulating interpretability.
- Increased transparency hampered people's ability to detect when a model had made a sizeable mistake.
Building interpretable machine learning models is not a purely computational problem [...] what is or is not "interpretable" is defined by people, not algorithms.
Preece et al. Stakeholders in Explainable AI. In AAAI 2018 Fall Symposium Series.
Doshi-Velez and Kim. Towards A Rigorous Science of Interpretable Machine Learning. arxiv 2017.
de Graaf and Malle. How People Explain Action (and Autonomous Intelligent Systems Should Too). In AAAI Fall Symposium Series 2017.
Dhurandhar et al. A Formal Framework to Characterize Interpretability of Procedures. In WHI 2017.
Herman. The Promise and Peril of Human Evaluation for Model Interpretability. In NeurIPS 2017 Symposium on Interpretable Machine Learning.
- They propose a distinction between descriptive and persuasive explanations.
Weller. Transparency: Motivations and Challenges. In WHI 2017.
Lipton. The Mythos of Model Interpretability. In WHI 2016.
- The umbrella term "Explainable AI" encompasses at least three distinct notions: transparency, explainability, and interpretability.
Benefits of learning with explanations
Strout et al. Do Human Rationales Improve Machine Explanations?. In ACL 2019.
- This paper shows that learning with rationales can also improve the quality of the machine's explanations as evaluated by human judges.
Ray et al. Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval. In AAAI 2019.
Selvaraju et al. Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded. In ICCV 2019.
Evaluation criteria and pitfalls of explanatory methods
Camburu et al. Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations. In NeurIPS 2019 Workshop on Safety and Robustness in Decision Making.
Heo et al. Fooling Neural Network Interpretations via Adversarial Model Manipulation. In NeurIPS 2019.
Saliency interpretation methods can be fooled via adversarial model manipulation---a model finetuning step that aims to radically alter the explanation without hurting the accuracy of the original model.
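A toy numpy sketch (not Heo et al.'s fine-tuning procedure) of the underlying fragility: two linear models that agree on every prediction for a dataset can carry radically different gradient saliencies, so an adversary has room to change the explanation without changing accuracy.

```python
import numpy as np

# Toy illustration: feature 2 duplicates feature 1, so two linear models with
# different weights make identical predictions on this data, yet their
# gradient-based saliencies (for a linear model, just the weights) disagree.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0]])
w_original = np.array([1.0, 0.0])     # credits feature 1
w_manipulated = np.array([0.0, 1.0])  # same outputs, credit shifted to feature 2

preds_original = X @ w_original
preds_manipulated = X @ w_manipulated
assert np.allclose(preds_original, preds_manipulated)  # accuracy untouched

saliency_original = w_original        # gradient of a linear model w.r.t. input
saliency_manipulated = w_manipulated  # explanation altered entirely
```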
More adversarial examples:
Zhang et al. Interpretable Deep Learning under Fire. In USENIX Security Symposium 2020.
Zheng et al. Analyzing the Interpretability Robustness of Self-Explaining Models. In ICML 2019 Security and Privacy of Machine Learning Workshop.
Ghorbani et al. Interpretation of Neural Networks is Fragile. In AAAI 2019.
Wiegreffe and Pinter. Attention is not not Explanation. In EMNLP 2019.
- Detaching the attention scores obtained by parts of the model degrades the model itself; a reliable adversary must also be trained.
- Attention scores are used as providing an explanation, not the explanation.
Serrano and Smith. Is Attention Interpretable?. In ACL 2019.
Jain and Wallace. Attention is not Explanation. In NAACL 2019.
Attention provides an important way to explain the workings of neural models. Implicit in this is the assumption that the inputs (e.g., words) accorded high attention weights are responsible for model output.
- Attention is not strongly correlated with other, well-grounded feature-importance metrics.
- Alternative distributions exist for which the model outputs near-identical prediction scores.
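A minimal numpy sketch of the second point (illustrative only, not Jain and Wallace's experimental setup): if the prediction depends on the inputs only through an attention-weighted sum, many distinct attention distributions yield the exact same output.

```python
import numpy as np

# Toy attention layer: prediction score = attention-weighted sum of values.
values = np.array([2.0, 4.0, 6.0])       # per-token value scores
attention = np.array([0.5, 0.3, 0.2])    # "learned" attention weights
alternative = np.array([0.3, 0.7, 0.0])  # very different distribution

output = attention @ values
output_alt = alternative @ values
assert np.isclose(output, output_alt)      # identical prediction score
assert np.isclose(alternative.sum(), 1.0)  # still a valid distribution
```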
Laugel et al. Issues with post-hoc counterfactual explanations: a discussion. In HILL 2019.
Laugel et al. The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations. In IJCAI 2019.
Aïvodji et al. Fairwashing: the risk of rationalization. In ICML 2019.
- Fairwashing is promoting the false perception that a machine learning model respects some ethical values.
- This paper shows that it is possible to forge a fairer explanation from a truly unfair black box through a process that the authors coin rationalization.
Ustun et al. Actionable Recourse in Linear Classification. In *FAT 2019.
- In this paper, the authors introduce recourse: the ability of a person to change the decision of the model through actionable input variables (e.g., income), as opposed to immutable ones such as gender, age, or marital status.
- Transparency and explainability do not guarantee recourse.
- Interesting broader discussion:
- Recourse vs. strategic manipulation.
- Policy implications.
- Related work:
    - Karimi et al. Model-Agnostic Counterfactual Explanations for Consequential Decisions. arxiv 2019.
- Tolomei et al. Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking. In KDD 2017.
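A hypothetical sketch of recourse for a linear classifier (feature names and numbers are made up, and this is not Ustun et al.'s integer-programming formulation): find the smallest change to a single actionable feature that flips the decision, leaving immutable features untouched.

```python
import numpy as np

# Linear classifier: decide "approve" iff w @ x + b > 0.
w = np.array([0.8, -0.5, 0.3])   # weights for [income, age, marital_status]
b = -2.0
x = np.array([1.5, 4.0, 1.0])    # applicant currently denied

score = w @ x + b
assert score < 0                  # current decision: deny

income_idx = 0                    # the only actionable feature here
delta = -score / w[income_idx]    # smallest income change reaching the boundary
x_new = x.copy()
x_new[income_idx] += delta + 1e-6 # nudge just past the decision boundary
assert w @ x_new + b > 0          # recourse achieved via an actionable variable
```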
Adebayo et al. Sanity Checks for Saliency Maps. In NeurIPS 2018.
Chandrasekaran et al. Do explanations make VQA models more predictable to a human?. In EMNLP 2018.
- This paper measures how well a human "understands" a VQA model. The paper shows that people get better at predicting a VQA model's behaviour using a few "training" examples, but that existing explanation modalities do not help make its failures or responses more predictable.
Jiang et al. To Trust Or Not To Trust A Classifier. In NeurIPS 2018.
Feng et al. Pathologies of Neural Models Make Interpretations Difficult. In EMNLP 2018.
- Input reduction iteratively removes the least important word from the input.
- The remaining words appear nonsensical to humans and are not the ones determined as important by interpretation methods.
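The reduction loop above can be sketched in a few lines (toy scorer, not the paper's models): repeatedly delete the word whose removal least decreases the model's confidence.

```python
# Input reduction: greedily remove the least important word at each step,
# where importance = confidence drop when that word is deleted.
def input_reduction(words, score_fn, min_len=1):
    words = list(words)
    while len(words) > min_len:
        candidates = [words[:i] + words[i + 1:] for i in range(len(words))]
        scores = [score_fn(c) for c in candidates]
        best = max(range(len(candidates)), key=lambda i: scores[i])
        words = candidates[best]  # keep the highest-confidence reduction
    return words

# Toy "model": confidence = fraction of kept words that are in a keyword set.
keywords = {"movie", "great"}
score = lambda ws: sum(w in keywords for w in ws) / max(len(ws), 1)
reduced = input_reduction(["the", "movie", "was", "great"], score, min_len=2)
# reduced == ["movie", "great"]
```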
Poerner et al. Evaluating neural network explanation methods using hybrid documents and morphosyntactic agreement. In ACL 2018.
- Important characterization of explanation:
A good explanation method should not reflect what humans attend to, but what task methods attend to.
- Interpretability differs between small-context NLP tasks and large-context tasks.
Kindermans et al. The (Un)reliability of saliency methods. arxiv 2017.
Evaluating the reliability of saliency methods is complicated by a lack of ground truth, as ground truth would depend upon full transparency into how a model arrives at a decision---the very problem we are trying to solve for in the first place.
- A new evaluation criterion, input invariance, requires that the saliency method mirror the sensitivity of the model with respect to transformations of the input: input transformations that do not change the network's prediction should not change the attribution either.
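The input-invariance failure can be demonstrated in a few lines of numpy (a toy linear model, not the paper's networks): a constant input shift, absorbed into the bias, leaves every prediction unchanged, yet gradient*input attributions move.

```python
import numpy as np

# Two functionally identical models: the second expects shifted inputs and
# compensates with its bias, so predictions never differ.
w = np.array([1.0, -2.0])
b1 = 0.5
shift = np.array([10.0, 10.0])
b2 = b1 - w @ shift               # absorbs the input shift

x = np.array([0.3, 0.7])
pred1 = w @ x + b1
pred2 = w @ (x + shift) + b2
assert np.isclose(pred1, pred2)   # predictions are identical

attr1 = w * x                     # gradient*input for the first model
attr2 = w * (x + shift)           # gradient*input on the shifted input
# attr1 != attr2: gradient*input violates input invariance
```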
Sundararajan et al. Axiomatic Attribution for Deep Networks. In ICML 2017.
Implementation invariance: the attributions should be identical for two functionally equivalent networks (their outputs are equal for all inputs, despite having very different implementations).
Sensitivity: if the network assigns different predictions to two examples that differ in only one feature, then the differing feature should be given a non-zero attribution.
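The paper's integrated-gradients attribution can be sketched for a function with a hand-coded gradient (toy example, not a neural network); the completeness check below, that attributions sum to f(x) - f(baseline), follows from the method's axioms.

```python
import numpy as np

# Integrated gradients: average the gradient along the straight path from a
# baseline to the input, then scale by the input difference.
def integrated_gradients(grad_fn, x, baseline, steps=1000):
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint Riemann sum
    grads = np.stack([grad_fn(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

f = lambda x: x[0] ** 2 + 3.0 * x[1]
grad_f = lambda x: np.array([2.0 * x[0], 3.0])

x = np.array([2.0, 1.0])
baseline = np.zeros(2)
attr = integrated_gradients(grad_f, x, baseline)
# Completeness: attr.sum() ~= f(x) - f(baseline) = 7.0
```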
Das et al. Human Attention in Visual Question Answering: Do Humans and Deep Networks look at the same regions?. In EMNLP 2016.
- Current attention models in VQA do not seem to be looking at the same regions as humans.
Self-explanatory models / Model-based interpretability
Bastings et al. Interpretable Neural Predictions with Differentiable Binary Variables. In ACL 2019.
Vedantam et al. Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering. In ICML 2019.
Alvarez-Melis and Jaakkola. Towards Robust Interpretability with Self-Explaining Neural Networks. In NeurIPS 2018.
Yang et al. Commonsense Justification for Action Explanation. In EMNLP 2018.
Kim et al. Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV). In ICML 2018.
Textual explanation generation
Ehsan et al. Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions. in ACM IUI 2019.
Kim et al. Textual Explanations for Self-Driving Vehicles. In ECCV 2018.
Hendricks et al. Grounding Visual Explanations. In ECCV 2018.
Hendricks et al. Generating Counterfactual Explanations with Natural Language. In WHI 2018.
Hendricks et al. Generating Visual Explanations. In ECCV 2016.
Multimodal explanation generation
Wu and Mooney. Faithful Multimodal Explanation for Visual Question Answering. In ACL 2019.
Park et al. Multimodal Explanations: Justifying Decisions and Pointing to the Evidence. In CVPR 2018.
Wachter et al. Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. In Harvard Journal of Law & Technology 2018.
Wachter et al. Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation. In International Data Privacy Law 2017.
Edwards and Veale. Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For. In 16 Duke Law & Technology Review 18 (2017).
Goodman and Flaxman. European Union regulations on algorithmic decision-making and a "right to explanation". In WHI 2016.
Bellini et al. Knowledge-aware Autoencoders for Explainable Recommender Systems. In ACM Workshop on Deep Learning for Recommender Systems 2018.