# What Are We Transferring Anyway? Using Attention Weights to Interpret Domain Choice in Transfer Learning

**Status: in progress.**

## Implemented

- Data Collection and Cleaning
- Modified Word Language Model
- LSTM Classifier
- Self-Attention Embedding (see the sketch after this list)
- Key-Value Attention
- Language Model Pretraining
- Bayesian Optimization
- Classification Tuning
- Attention Classification Tuning
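The LSTM classifier plus self-attention embedding is the core of the interpretation approach: the softmax attention weights over timesteps indicate which tokens drive a prediction. Below is a minimal PyTorch sketch of that combination, assuming a single-layer LSTM and a one-layer attention scorer; the class and parameter names (`AttentionLSTMClassifier`, `embed_dim`, `hidden_dim`) are illustrative, not this repository's actual code.

```python
# Minimal sketch of an LSTM classifier with a self-attention sentence
# embedding. All names and sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Scores each timestep's hidden state; softmax over time gives
        # the attention weights inspected for interpretation.
        self.attn = nn.Linear(hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        # tokens: (batch, seq_len) integer ids
        h, _ = self.lstm(self.embed(tokens))        # (batch, seq, hidden)
        weights = F.softmax(self.attn(h), dim=1)    # (batch, seq, 1)
        sentence = (weights * h).sum(dim=1)         # weighted sum over time
        return self.out(sentence), weights.squeeze(-1)
```

Calling the model returns both logits and per-token attention weights, so the weights can be plotted against the input tokens to see what the classifier attends to.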

## To Do

- Model Comparison

## Data

### Language Model Datasets

- WikiText-2
- Gigaword
- Penn Treebank

### Text Classification Datasets

- IMDB Sentiment Classification (loaded in the sketch below)
- MPQA Subjectivity Classification
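As a sketch of how two of these corpora can be fetched, the snippet below uses the classic (pre-1.0) torchtext dataset helpers; Gigaword and MPQA are not bundled with torchtext and must be downloaded separately. The field setup is an assumption, not this repository's actual preprocessing.

```python
# Hedged sketch: loading WikiText-2 and IMDB with classic torchtext.
from torchtext import data, datasets

TEXT = data.Field(lower=True)
LABEL = data.Field(sequential=False)

# Language modeling corpus: a single tokenized text stream.
lm_train, lm_valid, lm_test = datasets.WikiText2.splits(TEXT)

# Sentiment classification corpus: (text, label) examples.
clf_train, clf_test = datasets.IMDB.splits(TEXT, LABEL)

TEXT.build_vocab(lm_train)
LABEL.build_vocab(clf_train)
```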

### Word Vectors

- CharNGram
- Google News word2vec
- GloVe (loaded in the sketch below)
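The snippet below sketches one way to load these pretrained vectors, assuming classic torchtext for CharNGram and GloVe and gensim for the Google News word2vec binary; the file path is a placeholder for wherever the binary was downloaded.

```python
# Hedged sketch: fetching the listed pretrained word vectors.
from torchtext.vocab import CharNGram, GloVe

glove = GloVe(name='840B', dim=300)   # downloads and caches on first use
charngram = CharNGram()

print(glove['attention'].shape)       # torch.Size([300])

# Google News word2vec ships as a binary file, loaded here via gensim;
# the path below is a placeholder, not a file in this repository.
from gensim.models import KeyedVectors
w2v = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)
```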
