Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
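This first entry covers gradient-based class activation mapping. A minimal sketch of the core Grad-CAM idea in plain PyTorch — not this library's API; the tiny CNN, the chosen target layer, and the input shapes are illustrative stand-ins:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Tiny stand-in CNN; any convolutional backbone works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),   # conv layer we explain
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
target_layer = model[2]

# Hooks capture the target layer's activations and output gradients.
acts, grads = {}, {}
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 32, 32)
logits = model(x)
logits[0, logits.argmax()].backward()            # gradient of the top class

# Grad-CAM: weight each channel map by its spatially averaged gradient,
# sum over channels, ReLU, then normalize to [0, 1].
weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
cam = F.relu((weights * acts["a"]).sum(dim=1))        # (1, H, W)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
```

The resulting heatmap is at the target layer's spatial resolution and is typically upsampled onto the input image for visualization.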
Model interpretability and understanding for PyTorch
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
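To illustrate the kind of rule Zennit implements, here is a minimal sketch of epsilon-LRP for linear layers in plain PyTorch — this is not Zennit's own API, and the two-layer network, weights, and shapes are illustrative assumptions:

```python
import torch

def lrp_epsilon_linear(a, weight, bias, relevance, eps=1e-6):
    """Epsilon-LRP rule for one linear layer: redistribute the output
    relevance to the inputs in proportion to their contributions."""
    z = a @ weight.t() + bias                        # pre-activations
    z = z + eps * torch.where(z >= 0, torch.ones_like(z),
                              -torch.ones_like(z))   # stabilize the division
    s = relevance / z                                # relevance per output unit
    c = s @ weight                                   # backpropagate to inputs
    return a * c                                     # element-wise contributions

# Two-layer example: start from the chosen logit's relevance and
# propagate it back layer by layer.
torch.manual_seed(0)
w1, b1 = torch.randn(4, 3), torch.zeros(4)
w2, b2 = torch.randn(2, 4), torch.zeros(2)
x = torch.randn(1, 3)
h = torch.relu(x @ w1.t() + b1)
out = h @ w2.t() + b2

r_out = torch.zeros_like(out)
r_out[0, out.argmax()] = out[0, out.argmax()]        # relevance = top logit
r_h = lrp_epsilon_linear(h, w2, b2, r_out)
r_x = lrp_epsilon_linear(x, w1, b1, r_h)
```

With zero biases and a small epsilon, the total relevance is approximately conserved from the output back to the input, which is the defining property of LRP.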
Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.
Code for the NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and the TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification"
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
ProtoTorch is a PyTorch-based Python toolbox for bleeding-edge research in prototype-based machine learning algorithms.
Concept activation vectors for Keras
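The CAV/TCAV idea behind this entry: fit a linear classifier in a hidden layer's activation space to separate concept examples from random examples; the classifier's normal vector is the concept activation vector, and the fraction of class gradients pointing along it is the TCAV score. A minimal NumPy sketch with synthetic stand-in activations and gradients — not this library's interface:

```python
import numpy as np

rng = np.random.default_rng(0)
# Activations (assumed already extracted from a hidden layer) for
# concept examples vs. random examples; the data here is synthetic.
concept_acts = rng.normal(loc=1.0, size=(50, 8))
random_acts = rng.normal(loc=0.0, size=(50, 8))

X = np.vstack([concept_acts, random_acts])
y = np.array([1.0] * 50 + [0.0] * 50)

# Logistic regression by gradient ascent; the CAV is the weight vector
# (normal to the decision boundary), pointing toward the concept class.
w = np.zeros(8)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w += 0.1 * X.T @ (y - p) / len(y)
cav = w / np.linalg.norm(w)

# TCAV score: fraction of inputs whose class-logit gradient has a
# positive directional derivative along the CAV (gradients are stand-ins).
grads = rng.normal(size=(20, 8))
tcav_score = float((grads @ cav > 0).mean())
```

In practice the activations and gradients come from a trained network, and the score is compared against CAVs fit on multiple random splits to test significance.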
Explainability of Deep Learning Models
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization"
Python library to explain Tree Ensemble (TE) models such as XGBoost using a rule list.
Investigating the reproducibility of federated GNN models
Implementation of "Beyond Neural Scaling: Beating Power Laws" for deep models and prototype-based models
Fact-check rationalization paper @ ACL 2021.
NAISR: A 3D Neural Additive Model for Interpretable Shape Representation
A python project for prototype-based feature selection
IN PROGRESS - following the paper "Shapley-Lorenz decompositions in eXplainable Artificial Intelligence" by Giudici and Raffinetti (2020)
A Multimodal Transformer: Fusing Clinical Notes With Structured EHR Data for Interpretable In-Hospital Mortality Prediction