Pytorch implementation of 'Explaining text classifiers with counterfactual representations' (Lemberger & Saillenfest, 2024)
An XGBoost model in Python that classifies whether a customer will cancel their hotel booking. Counterfactuals guided by prototypes, from the Alibi package, are used to explore the minimum changes needed to flip a prediction from canceled to not canceled, and vice versa.
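The idea behind prototype-guided counterfactuals can be sketched as follows. This is a toy illustration only, not Alibi's actual `CounterfactualProto` optimizer: it uses synthetic data, a scikit-learn gradient-boosting model as a stand-in for XGBoost, and a crude interpolation toward the nearest correctly-classified opposite-class training point as the "prototype". All names and parameters here are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for hotel-booking features (not the real dataset).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

def counterfactual_toward_prototype(x, X_train, y_train, model, steps=50):
    """Interpolate x toward the nearest opposite-class training point
    (a crude 'prototype') and return the first point whose predicted
    label differs from the original prediction for x."""
    orig = model.predict(x.reshape(1, -1))[0]
    preds = model.predict(X_train)
    # Candidate prototypes: opposite-class points the model classifies as such.
    opposite = X_train[(y_train != orig) & (preds != orig)]
    proto = opposite[np.argmin(np.linalg.norm(opposite - x, axis=1))]
    for t in np.linspace(0.0, 1.0, steps):
        cand = (1 - t) * x + t * proto
        if model.predict(cand.reshape(1, -1))[0] != orig:
            return cand  # smallest interpolation step that flips the label
    return proto  # reached only if no intermediate point flipped earlier

x0 = X[0]
cf = counterfactual_toward_prototype(x0, X, y, clf)
print("original:", clf.predict(x0.reshape(1, -1))[0],
      "counterfactual:", clf.predict(cf.reshape(1, -1))[0])
```

In practice, libraries such as Alibi replace this line search with a gradient-based objective that also penalizes distance from the original instance and encourages staying close to the data manifold.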
[Autumn 2022] Specialization project leading up to main thesis in MSc Applied Physics and Mathematics at NTNU.
Experiments for the bachelor's thesis "Quantitative Evaluation of the Expected Antagonism of Explainability and Privacy". Two explainers are tested against privacy attacks.
Multiplayer game and chat for collecting data on human counterfactual explanations in a collaborative learning task.
Code for Master's thesis in Applied Physics and Mathematics at NTNU.
This is the official repository of the paper "RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model".
Counterfactual Explanations, a project for CITS4404 - Artificial Intelligence and Adaptive Systems.
Repository for "Endogenous Macrodynamics in Algorithmic Recourse" (Altmeyer et al., 2023)
CELS: Counterfactual Explanation for Time Series Data via Learned Saliency Maps (2023 Big data)
A pipeline for generating semi-factual and counterfactual explanations for computer vision tasks.
Counterfactual causal analysis
Code for the paper "Consequence-aware Sequential Counterfactual Generation" (https://link.springer.com/chapter/10.1007/978-3-030-86520-7_42), ECML PKDD 2021. Repository maintained by Philip Naumann.
Parsing a resume, predicting whether it will get screened for a given job description or not and generating counterfactual suggestions.
Counterfactual explanations for the identification of the features with the highest relevance on the shape of response curves generated by neural network black boxes
Recourse Explanation Library in JAX
This repository is dedicated to PhD Research (Human-centered XAI)
CEnt: An Entropy-based Model-agnostic Explainability Framework to Contrast Classifiers' Decisions; preprint at https://arxiv.org/abs/2301.07941
Survival-Patterns-based counterfactual explanations of survival models
Attention-based Counterfactual Explanation for Multivariate Time Series (DaWak 2023)