(WWW'21) ATON - an Outlier Interpretation / Outlier explanation method
Codebase for "Demystifying Black-box Models with Symbolic Metamodels", NeurIPS 2019.
CAVES dataset, accepted at SIGIR'22
[TMLR] "Can You Win Everything with Lottery Ticket?" by Tianlong Chen, Zhenyu Zhang, Jun Wu, Randy Huang, Sijia Liu, Shiyu Chang, Zhangyang Wang
Transform the way you work with boolean logic by composing expressions from discrete propositions. This enables you to dynamically generate custom output, such as explanations of the causes behind a result (see the sketch below).
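A minimal sketch of the idea, assuming hypothetical names rather than the repository's actual API: each predicate is wrapped in a named proposition, so a combined rule can report which propositions caused its result.

```python
class Proposition:
    """A named boolean predicate over some input data."""

    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate

    def evaluate(self, data):
        return bool(self.predicate(data))


def explain(propositions, data):
    """Evaluate the AND of all propositions and report which ones failed."""
    results = {p.name: p.evaluate(data) for p in propositions}
    verdict = all(results.values())
    failed = [name for name, ok in results.items() if not ok]
    return verdict, failed


if __name__ == "__main__":
    rules = [
        Proposition("age >= 18", lambda d: d["age"] >= 18),
        Proposition("income >= 30000", lambda d: d["income"] >= 30000),
    ]
    verdict, failed = explain(rules, {"age": 25, "income": 12000})
    print(f"approved={verdict}; failed propositions: {failed}")
```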
Comprehensible Convolutional Neural Networks via Guided Concept Learning
A year-wise list of papers in the area of Explainable Artificial Intelligence
We introduce XBrainLab, an open-source, user-friendly software package for accelerated interpretation of neural patterns from EEG data based on cutting-edge computational approaches.
Tornado plots for model sensitivity analysis
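A minimal matplotlib sketch of a tornado plot, with illustrative data and not tied to this repository's API: each horizontal bar shows how the model output moves when one input is swung from its low to its high value, widest swing on top.

```python
import matplotlib.pyplot as plt

baseline = 100.0
# (parameter, output at low value, output at high value) -- made-up numbers
swings = [
    ("discount rate", 80.0, 125.0),
    ("unit cost", 90.0, 115.0),
    ("volume", 95.0, 108.0),
]
# Sort ascending by swing so the widest bar ends up at the top of the chart
swings.sort(key=lambda s: abs(s[2] - s[1]))

labels = [name for name, _, _ in swings]
lows = [lo - baseline for _, lo, _ in swings]    # negative widths extend left
highs = [hi - baseline for _, _, hi in swings]   # positive widths extend right

fig, ax = plt.subplots()
ax.barh(labels, lows, left=baseline, color="tab:blue", label="low value")
ax.barh(labels, highs, left=baseline, color="tab:orange", label="high value")
ax.axvline(baseline, color="black", linewidth=1)
ax.set_xlabel("model output")
ax.legend()
plt.tight_layout()
plt.show()
```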
TS4NLE converts the explanation of an eXplainable AI (XAI) system into natural language utterances comprehensible by humans.
Code for ER-Test, accepted to the Findings of EMNLP 2022
Experiments to explain entity resolution systems
The mechanisms behind image classification using a pretrained CNN model in high-dimensional spaces 🏞️
A framework for evaluating auto-interp pipelines, i.e., natural language explanations of neurons.
ML pipeline. Detailed documentation of the project is in the README. Click on Actions to see the script.
Domestic robot example configured for the multi-level explainability framework
A project in an AI seminar