Interpretable machine learning based on Shapley values
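Several repositories below build on Shapley values. As background, here is a minimal, framework-free sketch of the exact Shapley value computation for a small cooperative game (the player names and payoff function are purely illustrative; real libraries such as SHAP approximate this for ML models rather than enumerating all coalitions):

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.

    `value` maps a frozenset of players to a real-valued payoff.
    Enumerates every coalition, so this is only feasible for small games.
    """
    n = len(players)
    phi = {}
    for p in players:
        others = [q for q in players if q != p]
        total = 0.0
        for r in range(len(others) + 1):
            for coalition in combinations(others, r):
                s = frozenset(coalition)
                # Classic Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                total += weight * (value(s | {p}) - value(s))
        phi[p] = total
    return phi

# Toy game (hypothetical): payoff is the squared coalition size.
players = ["a", "b", "c"]
v = lambda s: len(s) ** 2
phi = shapley_values(players, v)
```

By the efficiency axiom the values sum to `v(all players) - v(empty set)`, and by symmetry each player here receives an equal share.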
A repository for studying the interpretability of time-series networks (LSTMs)
Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a POC, the initial alpha release of Deep Classiflie generates/analyzes a model that continuously classifies a single individual's statements (Donald Trump) using a single ground truth labeling source (The Washington Post). For statements the model d…
Accompanying code for the paper "Discrete representations in neural models of spoken language" (https://aclanthology.org/2021.blackboxnlp-1.11)
a module to obtain diverse real-world-grounded features for sentences for large-scale benchmarking
Official implementation of "ARACHNET: Interpretable Sub-Arachnoid Space Segmentation Using an Additive Convolutional Neural Network"
A CT-scan of your CNN
The purpose of this repository is to demonstrate how to use NLP explanation/interpretability tools.
Is the temporal attention bottleneck for VAEs informative? (ICML 2023)
Explain model and feature dependencies by decomposition of SHAP values
Techniques for interpreting ConvNets
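One of the simplest ConvNet interpretation techniques referenced by repos like these is occlusion sensitivity: slide a masking patch over the input and record how much the model's score drops. A minimal, dependency-free sketch (the `model` and `image` below are toy stand-ins, not part of any listed repository):

```python
def occlusion_sensitivity(model, image, patch=2, baseline=0.0):
    """Occlusion-based attribution sketch.

    Slides a `patch` x `patch` square of `baseline` values over `image`
    (a list of lists of floats) and accumulates the resulting score drop
    into a per-pixel heatmap. Larger drops mean the region mattered more.
    """
    h, w = len(image), len(image[0])
    base_score = model(image)
    heatmap = [[0.0] * w for _ in range(h)]
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for di in range(patch):
                for dj in range(patch):
                    occluded[i + di][j + dj] = baseline
            drop = base_score - model(occluded)
            for di in range(patch):
                for dj in range(patch):
                    heatmap[i + di][j + dj] += drop
    return heatmap

# Toy "model" (hypothetical): mean pixel intensity, so bright pixels
# should receive the highest attribution.
model = lambda img: sum(sum(row) for row in img) / (len(img) * len(img[0]))
image = [[0.0, 0.0, 1.0],
         [0.0, 0.0, 1.0],
         [0.0, 0.0, 0.0]]
heat = occlusion_sensitivity(model, image, patch=1)
```

With a real ConvNet, `model` would return the score of the class under study, and overlapping patches trade spatial resolution for smoother maps.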
A machine learning Python package for learning ensembles of subgroups for predictive tasks.
Repository for the best-student-paper-award-winning paper at the IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS 2022): "Exploring LRP and Grad-CAM visualization to interpret multi-label-multi-class pathology prediction using chest radiography", Mahbub Ul Alam, Jón Rúnar Baldvinsson and Yuxia Wang. https://doi.org/10.11…
StellarGraph - Machine Learning on Graphs
Code accompanying the paper "A Deep Dive Into Neural Synchrony Evaluation for Audio-visual Translation", published at ACM ICMI 2022.
Maximal Linkability metric to evaluate the linkability of (protected) biometric templates. Paper: "Measuring Linkability of Protected Biometric Templates using Maximal Leakage", IEEE-TIFS, 2023.
Official implementation of the paper "Guided Attention for Interpretable Motion Captioning"
Interpretability of Keras image models
Interpreting blackbox text classifiers with LDA-based topic models
TIC is a library that acts as a Toolbox for Interpretability Comparison.