- Are Data-Driven Explanations Robust Against Out-of-Distribution Data?
- Uncertainty-Aware Unsupervised Image Deblurring with Deep Residual Prior
- Teaching Matters: Investigating the Role of Supervision in Vision Transformers
- Adversarial Counterfactual Visual Explanations
- SketchXAI: A First Look at Explainability for Human Sketches
- Doubly Right Object Recognition: A Why Prompt for Visual Rationales
- Overlooked Factors in Concept-based Explanations: Dataset Choice, Concept Learnability, and Human Capability
- Initialization Noise in Image Gradients and Saliency Maps
- Learning Bottleneck Concepts in Image Classification
- Zero-Shot Model Diagnosis
- OCTET: Object-Aware Counterfactual Explanations
- X-Pruner: eXplainable Pruning for Vision Transformers
- Don't Lie to Me! Robust and Efficient Explainability with Verified Perturbation Analysis
- CRAFT: Concept Recursive Activation FacTorization for Explainability
- Grounding Counterfactual Explanation of Image Classifiers to Textual Concept Space
- Explaining Image Classifiers with Multiscale Directional Image Representation
- IDGI: A Framework to Eliminate Explanation Noise from Integrated Gradients
- Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep Learning
- PIP-Net: Patch-based Intuitive Prototypes for Interpretable Image Classification
- Shortcomings of Top-Down Randomization-based Sanity Checks for Evaluations of Deep Neural Network Explanations
- Spatial-Temporal Concept based Explanation of 3D ConvNets
- A Practical Upper Bound for the Worst-Case Attribution Deviations
- Adversarial Normalization: I Can Visualize Everything (ICE)