bvanaken/explain-BERT-QA

How Does BERT Answer Questions?

This repository contains the source code for the experiments from the paper "How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations", presented at CIKM 2019.

An interactive demo that visualizes the results on three Question Answering datasets is available at https://visbert.demo.datexis.com.

Edge Probing Experiments

To probe the linguistic abilities encoded in BERT's layers, we used the jiant probing suite by Wang et al. We added two tasks to their suite: Question Type Classification and Supporting Fact Extraction. The code for creating these tasks can be found in the probing directory; a sketch of the expected data format is given below.
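
As a minimal sketch of what such a task-creation step might look like: jiant's edge-probing tasks read JSON-lines records with a "text" field and labeled "targets" spans. The label set and the heuristic labeling function below are illustrative assumptions, not the exact ones used in the paper:

    import json

    # Hypothetical label set for Question Type Classification; the paper's
    # actual labels may differ.
    QUESTION_TYPES = ["who", "what", "when", "where", "why", "how", "other"]

    def question_type(tokens):
        """Heuristically assign a question type from the first wh-word (illustrative)."""
        for tok in tokens:
            if tok.lower() in QUESTION_TYPES:
                return tok.lower()
        return "other"

    def to_edge_probing_record(question):
        tokens = question.split()
        return {
            "text": " ".join(tokens),
            # One target span covering the whole question, labeled with its type.
            # Edge-probing spans are [start, end) token indices.
            "targets": [{"span1": [0, len(tokens)], "label": question_type(tokens)}],
        }

    with open("question_types.json", "w") as f:
        for q in ["Who wrote Hamlet?", "When did the war end?"]:
            f.write(json.dumps(to_edge_probing_record(q)) + "\n")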

Visualizing Token Transformations

To train and evaluate BERT QA models, we used the 🤗 Transformers framework by Hugging Face. A simple way to visualize how tokens are transformed by a QA Transformer model can be found in the visualization directory: we feed a single question (with its context) into the model and plot the token representations from each layer in a 2D vector space.
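
A minimal sketch of this idea follows: run the model once with hidden states enabled, then project each layer's token vectors into 2D. The model name and the PCA projection are assumptions; the repository's visualization code may use a different model or projection:

    import torch
    from sklearn.decomposition import PCA
    from transformers import AutoModelForQuestionAnswering, AutoTokenizer

    # Assumed QA model; any BERT model fine-tuned on SQuAD would work here.
    model_name = "bert-large-uncased-whole-word-masking-finetuned-squad"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForQuestionAnswering.from_pretrained(
        model_name, output_hidden_states=True
    )

    question = "Who wrote Hamlet?"
    context = "Hamlet is a tragedy written by William Shakespeare."
    inputs = tokenizer(question, context, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)

    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    # hidden_states is a tuple of (num_layers + 1) tensors, each of shape
    # [1, seq_len, hidden_dim]; index 0 is the embedding layer output.
    for layer_idx, layer in enumerate(outputs.hidden_states):
        points = PCA(n_components=2).fit_transform(layer[0].numpy())
        print(f"Layer {layer_idx}:")
        for token, (x, y) in zip(tokens, points):
            print(f"  {token:>12s}  ({x:+.2f}, {y:+.2f})")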

Cite

When building on our work, please cite our paper as follows:

@inproceedings{van_Aken_2019,
   title={How Does BERT Answer Questions? A Layer-Wise Analysis of Transformer Representations},
   booktitle={Proceedings of the 28th ACM International Conference on Information and Knowledge Management (CIKM '19)},
   publisher={ACM Press},
   author={van Aken, Betty and Winter, Benjamin and Löser, Alexander and Gers, Felix A.},
   year={2019}
}
