Merge pull request #4 from capitalone/fix
remove scan config
ssharpe42 committed Aug 19, 2022
2 parents dcc8c00 + f979b50 commit 1845ae1
Showing 3 changed files with 1 addition and 26 deletions.
9 changes: 0 additions & 9 deletions .whitesource

This file was deleted.

16 changes: 0 additions & 16 deletions Cxfile

This file was deleted.

2 changes: 1 addition & 1 deletion README.md
@@ -7,7 +7,7 @@ A library to assess the effectiveness of XAI methods through ablation.

Explainable artificial intelligence (XAI) methods lack ground truth. In its place, method developers have relied on axioms to determine desirable properties for their explanations behavior. For high stakes uses of machine learning that require explainability, it is not sufficient to rely on axioms, as the implementation, or its usage, can fail to live up to the ideal. A procedure frequently used to assess their utility, and to some extent their fidelity, is an *ablation study*. By perturbing the input variables in rank order of importance, the goal is to assess the sensitivity of the model's performance.

- This implementation can be used to reproduce the experiments in "[BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence](https://arxiv.org/abs/2207.05566).
+ This implementation can be used to reproduce the experiments in [BASED-XAI: Breaking Ablation Studies Down for Explainable Artificial Intelligence](https://arxiv.org/abs/2207.05566).

### Installation

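Below is a minimal, illustrative sketch of the ablation procedure the README excerpt above describes: features are perturbed in rank order of importance and model performance is re-measured after each perturbation. The scikit-learn model, the impurity-based importance ranking, and mean replacement as the perturbation are assumptions chosen for illustration only; this is not the repository's actual API.

```python
# Illustrative ablation study: perturb features in rank order of importance
# and track how model performance degrades. Assumed setup, not this library's API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by a chosen importance measure (impurity importance here;
# an XAI method such as SHAP could supply the ranking instead).
ranking = np.argsort(model.feature_importances_)[::-1]

X_ablated = X_test.copy()
scores = [roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])]
for feat in ranking:
    # Perturb the next most important feature (here: replace with the training mean).
    X_ablated[:, feat] = X_train[:, feat].mean()
    scores.append(roc_auc_score(y_test, model.predict_proba(X_ablated)[:, 1]))

print(scores)  # performance should drop as the most important features are ablated
```

A faithful importance ranking should produce a steep initial drop in the score curve; a flat curve suggests the explanation method's ranking carries little information about the model.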

0 comments on commit 1845ae1
