Layered Explanations Framework

The Layered Explanations Framework exploits the intrinsic architecture of a neural network, namely its hidden units, to generate an interpretable explanation of an algorithmic decision. It consists of three main steps:

  1. Identify the most explanatory layer,
  2. Identify the influential units within that layer using numerical influence measures, and
  3. Reconstruct the relevant input regions responsible for activating these influential units.
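The three steps above can be sketched on a toy fully connected network. This is a minimal illustration under assumed details: the influence measure (activation times outgoing weight toward the predicted class) and the weight-based input attribution are placeholder choices, not necessarily the measures the framework itself uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network: 16 inputs -> 8 hidden units -> 2 classes.
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 2))

def relu(z):
    return np.maximum(z, 0.0)

x = rng.normal(size=16)
h = relu(x @ W1)                  # hidden-layer activations (step 1: the layer under study)
y = h @ W2                        # class scores
target = int(np.argmax(y))        # the algorithmic decision to explain

# Step 2 (illustrative influence measure): each hidden unit's contribution
# to the predicted class score = activation * outgoing weight.
influence = h * W2[:, target]
top_units = np.argsort(influence)[::-1][:3]

# Step 3 (sketch): attribute the top units back to input positions via
# their incoming weights, highlighting regions that activated them.
relevance = np.abs(W1[:, top_units] * x[:, None]).sum(axis=1)
top_inputs = np.argsort(relevance)[::-1][:5]

print("influential hidden units:", top_units)
print("most relevant input positions:", top_inputs)
```

For image inputs, the same attribution step would highlight pixel regions rather than individual feature indices.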

(Figure: overview of the Layered Explanations Framework)

Documents

Interested readers can find more details in the following documents:

Source code:

The results presented in the Presentation can be found in:

Preliminary experiments with MIM on textual input can be found in:

If a file is too big for GitHub to render, you can try viewing it with nbviewer.
