Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Model interpretability and understanding for PyTorch
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Code for the NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and the TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification"
Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering"
Python library to explain Tree Ensemble models (TE) like XGBoost, using a rule list.
SIDU: SImilarity Difference and Uniqueness method for explainable AI
Explainability of Deep Learning Models
Implementation of "Beyond Neural Scaling: Beating Power Laws" for deep models and prototype-based models
A PyTorch implementation of constrained optimization and modeling techniques
A Multimodal Transformer: Fusing Clinical Notes With Structured EHR Data for Interpretable In-Hospital Mortality Prediction
Find the samples in the test data on which your (generative) model makes mistakes.
ProtoTorch is a PyTorch-based Python toolbox for bleeding-edge research in prototype-based machine learning algorithms.
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization"
[ICCV 2023] Learning Support and Trivial Prototypes for Interpretable Image Classification
NAISR: A 3D Neural Additive Model for Interpretable Shape Representation
Concept activation vectors for Keras