Official code for the CVPR 2022 (oral) paper "OrphicX: A Causality-Inspired Latent Variable Model for Interpreting Graph Neural Networks."
Explainable Speaker Recognition
Build a neural net from scratch, without Keras or PyTorch, using only NumPy for the math and pandas for data loading.
Visualization methods to interpret CNNs and Vision Transformers, trained in a supervised or self-supervised way. The methods are based on CAM or on the attention mechanism of Transformers. The results are evaluated qualitatively and quantitatively.
MICCAI 2022 (Oral): Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis
Explainable AI: From Simple Rules to Complex Generative Models
My PhD thesis in NUS. Making it public so that future graduate students may benefit.
[ICCV 2023] Learning Support and Trivial Prototypes for Interpretable Image Classification
Work on combining a logit model with an information granulation method for better interpretability
Official code of "Discover and Cure: Concept-aware Mitigation of Spurious Correlation" (ICML 2023)
Implementation of the gradient-based t-SNE attribution method described in our GLBIO oral presentation: "Towards Computing Attributions for Dimensionality Reduction Techniques"
[KDD'22] Source codes of "Graph Rationalization with Environment-based Augmentations"
Interpretability: Methods for Identification and Retrieval of Concepts in CNN Networks
Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?"
Semi-supervised Concept Bottleneck Models (SSCBM)
Interpretable Anomaly Severity Detection on UAV Flight Log Messages
TraceFL is a novel mechanism for Federated Learning that achieves interpretability by tracking neuron provenance. It identifies clients responsible for global model predictions, achieving 99% accuracy across diverse datasets (e.g., medical imaging) and neural networks (e.g., GPT).
This repository collects all relevant resources about interpretability in LLMs