Summarization of static graphs using the Minimum Description Length principle
Code for "Interpretable Adversarial Perturbation in Input Embedding Space for Text" (IJCAI 2018).
Code for the paper "A Case-based Interpretable Deep Learning Model for Classification of Mass Lesions in Digital Mammography" (Nature Machine Intelligence, 2021). https://www.nature.com/articles/s42256-021-00423-x
PyTorch implementations of various neural network interpretability methods
Graph neural networks, information theory, and AI for science
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop, and Visual Analytics.
Code for "Invariant Grounding for Video Question Answering"
Trustworthy LoS Prediction Based on Multi-modal Data (AIME 2023)
📍 Interactive Studio for Explanatory Model Analysis
Visual explanations of supervised classification models
SurvSHAP(t): Time-dependent explanations of machine learning survival models
Maximal Linkability metric to evaluate the linkability of (protected) biometric templates. Paper: "Measuring Linkability of Protected Biometric Templates using Maximal Leakage", IEEE-TIFS, 2023.
Mechanistically interpretable neurosymbolic AI (Nature Computational Science, 2024): losslessly compressing neural networks into computer code and discovering new algorithms that generalize out-of-distribution and outperform human-designed algorithms
💡 Adversarial attacks on explanations and how to defend them
Implements the Tsetlin Machine, Convolutional Tsetlin Machine, Regression Tsetlin Machine, Weighted Tsetlin Machine, and Embedding Tsetlin Machine, with support for continuous features, multigranularity, clause indexing, and a literal budget
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Framework for material structure exploration
Boosting AI research efficiency
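
As a concrete taste of the model-explanation techniques these repositories cover, here is a minimal sketch of permutation feature importance, a simple model-agnostic method: shuffle one feature at a time and measure how much the model's test accuracy drops. The scikit-learn dataset and random forest below are illustrative placeholders, not code from any repository above.

```python
# Minimal sketch of permutation feature importance, one of the simplest
# model-agnostic explanation techniques. Dataset and model are
# illustrative placeholders, not tied to any repository listed above.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)  # accuracy with intact features

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])  # shuffling one column breaks its link to y
    importances.append(baseline - model.score(X_perm, y_test))

# Features whose shuffling hurts accuracy the most matter most to the model.
for j in np.argsort(importances)[::-1][:5]:
    print(f"{data.feature_names[j]}: {importances[j]:+.4f}")
```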