NLP Transformers' Interpretability

The purpose of this repository is to demonstrate how to use NLP explanation/interpretability tools. In this project, I use the stance detection task, but you can adapt it to your own custom NLP task. This repository will be updated in the future, but for now, I only use SHAP as an explanation tool.

Model Explanation (SHAP)

The result of the SHAP explanation on Persian stance detection. A red area increases the probability of the corresponding class, and a blue area decreases it (SHAP).
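As a minimal sketch of how such an explanation can be produced, SHAP can wrap a Hugging Face text-classification pipeline directly and render token-level attributions. The model name below is a placeholder, not a checkpoint from this repository; substitute your own fine-tuned stance detection model.

```python
# Minimal sketch: explaining a stance detection classifier with SHAP.
# "your-stance-detection-model" is a placeholder; replace it with your
# own fine-tuned checkpoint.
import shap
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="your-stance-detection-model",
    return_all_scores=True,  # SHAP needs the score of every class
)

# shap.Explainer recognizes the transformers pipeline and uses a text masker.
explainer = shap.Explainer(classifier)

texts = ["The article fully supports the claim."]
shap_values = explainer(texts)

# Token-level plot: red tokens push a class's probability up, blue tokens
# push it down, as described in the caption above.
shap.plots.text(shap_values)
```

The same pattern applies to any text-classification pipeline; only the checkpoint and the example sentences change.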
