
SHAP_on_Autoencoder

Explaining Anomalies Detected by Autoencoders Using SHAP

Dataset: Boston Housing Dataset

Machine Learning Methods: Autoencoder, Kernel SHAP

Paper: Explaining Anomalies Detected by Autoencoders Using SHAP https://arxiv.org/pdf/1903.02407.pdf

The implementation has three steps; a short illustrative sketch of each step follows the list.

  1. Select the top features with the largest reconstruction errors (first sketch below).
  2. For each feature in the list of top features:
    • We want to explain which features (other than the feature itself) led to its reconstruction error.
    • Mask (zero out) the autoencoder weights that multiply the feature itself, keeping all other weights, so the feature cannot explain its own error.
    • Use model-agnostic Kernel SHAP to compute the Shapley values (second sketch below).
  3. Decide whether each explaining feature is a contributing feature or an offsetting feature, depending on the sign of the reconstruction error (third sketch below). Here I made some minor adjustments to the original paper for ease of interpretability: contributing features are marked with positive Shapley values.
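
As a minimal sketch of step 1 (illustrative only: the data, architecture, and variable names here are assumptions, not the repository's actual code), per-feature reconstruction errors can be computed from a trained autoencoder and the top features selected by magnitude:

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)).astype("float32")  # stand-in for scaled features

# A small dense autoencoder; the repository's architecture may differ.
autoencoder = keras.Sequential([
    keras.Input(shape=(8,)),
    keras.layers.Dense(4, activation="relu"),
    keras.layers.Dense(8),
])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=20, verbose=0)

# Step 1: per-feature reconstruction error for one (anomalous) instance.
# The sign is kept because step 3 distinguishes over- from under-reconstruction.
x = X[0]
x_hat = autoencoder.predict(x.reshape(1, -1), verbose=0)[0]
errors = x_hat - x
top_features = np.argsort(np.abs(errors))[::-1][:3]  # top-3 by |error|
```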
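For step 2, the repository masks the weights tied to the explained feature inside the network; as a hedged, model-agnostic stand-in (continuing the sketch above), one can instead neutralize the feature's own input inside a wrapper function and let Kernel SHAP explain the feature's reconstructed value. The wrapper `reconstruct_j` and the background size are illustrative choices, not the repository's exact mechanism:

```python
import shap

j = int(top_features[0])  # explain the reconstruction of this top feature

def reconstruct_j(data):
    """Reconstructed value of feature j, with its own input neutralized.

    Fixing column j to its training mean approximates the paper's weight
    masking so feature j cannot explain itself (an assumption made for
    this sketch, not necessarily how the repository implements it).
    """
    data = np.array(data, copy=True)
    data[:, j] = X[:, j].mean()
    return autoencoder.predict(data, verbose=0)[:, j]

# Kernel SHAP with a summarized background to keep runtime manageable.
background = shap.kmeans(X, 10)
explainer = shap.KernelExplainer(reconstruct_j, background)
shap_values = explainer.shap_values(x.reshape(1, -1))[0]
```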
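Step 3 then applies the sign convention described above (still continuing the sketch; this is one way to realize the convention that contributing features receive positive Shapley values):

```python
# Step 3: split explaining features into contributing vs. offsetting.
# Convention from the text above: contributing features end up with
# positive Shapley values regardless of the error's direction.
err_sign = np.sign(errors[j])         # +1 if over-reconstructed, -1 if under
signed_shap = err_sign * shap_values  # flip so contributors are positive

contributing = np.where(signed_shap > 0)[0]
offsetting = np.where(signed_shap < 0)[0]
```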
