
Query Based Abstractive Summarization


Implementation of a diversity-based attention model for the query-based abstractive summarization task. The implementation is adapted from the seq2seq package in TensorFlow.

The implementation is based on the following work:

Diversity driven attention model for query-based abstractive summarization

Preksha Nema, Mitesh M. Khapra, Anirban Laha, Balaraman Ravindran

ACL, 2017


Data Download and Preprocessing

  • One dataset split is already provided in the data/ folder.
  • To create 10 folds from the existing dataset, run: python ../../all_points 10 ../../data

To crawl the data from scratch:

  • cd src/dataextraction_scripts
  • The scripts extract data for the categories listed in the file debatepedia_categories
  • sh
  • python ../../data <number_of_folds> <new_dir_for_10_folds>
  • By default, run: python ../../data 10 ../../data
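The fold-splitting step above can be sketched as follows. Note that `make_folds` and the toy examples are hypothetical, since the actual fold script's name is elided in this README:

```python
import random

def make_folds(examples, k, seed=0):
    # Hypothetical helper (the repository's real fold script is not
    # named here): shuffle the examples once, then deal them into k
    # disjoint folds round-robin.
    rng = random.Random(seed)
    examples = list(examples)
    rng.shuffle(examples)
    return [examples[i::k] for i in range(k)]

# Each fold can then serve once as the test split, with the remaining
# nine folds used for training, as in 10-fold cross-validation.
examples = ["(document, query, summary) pair %d" % i for i in range(25)]
folds = make_folds(examples, 10)
```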

Get the GloVe embeddings (download glove.840B.300d.txt first):

mkdir Embedding
cd Embedding
echo 2196017 300 > temp_metadata
cat temp_metadata glove.840B.300d.txt > embeddings
rm temp_metadata
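The commands above prepend a word2vec-style header (vocabulary size and dimension) to the raw GloVe file. A minimal reader for the resulting format might look like the sketch below, which uses a toy 3-dimensional vocabulary in place of the real 2,196,017 x 300 file:

```python
def load_embeddings(path):
    # Word2vec text format: a "<vocab_size> <dim>" header line, then
    # one "word v1 ... vN" line per word.
    with open(path, encoding="utf-8") as f:
        vocab_size, dim = map(int, f.readline().split())
        vectors = {}
        for line in f:
            parts = line.rstrip("\n").split(" ")
            # Join everything except the last `dim` fields, since a few
            # glove.840B.300d entries are reported to contain spaces.
            word = " ".join(parts[:-dim])
            vectors[word] = [float(x) for x in parts[-dim:]]
    return vectors, dim

# Toy file standing in for the real embeddings file built above.
with open("embeddings_demo", "w", encoding="utf-8") as f:
    f.write("2 3\nhello 0.1 0.2 0.3\nworld 0.4 0.5 0.6\n")

vectors, dim = load_embeddings("embeddings_demo")
```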

Configuration file:

 * The hyperparameters can be changed in the config.txt file.
 * The influence of each hyperparameter is explained in detail in the comments in config.txt.
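The exact layout of config.txt is not shown here; assuming a simple key = value format with # comments, a reader could look like the following. Both `parse_config` and the key names are illustrative, not the repository's actual schema:

```python
def parse_config(path):
    # Assumed format: "key = value" lines with optional "#" comments.
    # The real config.txt layout may differ.
    params = {}
    with open(path, encoding="utf-8") as f:
        for raw in f:
            line = raw.split("#", 1)[0].strip()
            if not line:
                continue
            key, _, value = line.partition("=")
            params[key.strip()] = value.strip()
    return params

# Illustrative file; the key names are made up for this example.
with open("config_demo.txt", "w", encoding="utf-8") as f:
    f.write("embedding_size = 300  # GloVe dimension\nhidden_size = 400\n")

params = parse_config("config_demo.txt")
```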

Train the model

  sh ./ config.txt

Inference: the inference script reads config.txt to initialize the graph used for inference.

 sh ./ config.txt output/test_final_results

If output/test_final_results already exists, just run the following command:

python postscripts/ output/test_final_results

Universal Sentence Encoder:

Including USE embeddings for the document and query sentences helps improve model performance. The USE embeddings are concatenated with the word embeddings and passed through the RNN. The sentence embedding is also one of the vectors, along with the query and the decoder state, used when attending over the contextual representations.
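A minimal sketch of the concatenation step, with illustrative dimensions (300-d word vectors, 512-d USE vectors; the model's actual sizes may differ):

```python
import numpy as np

# Illustrative sizes: GloVe word vectors are 300-d and USE sentence
# vectors are 512-d; the repository's real dimensions may differ.
word_dim, use_dim, seq_len = 300, 512, 6

word_emb = np.random.rand(seq_len, word_dim)  # one vector per token
sent_emb = np.random.rand(use_dim)            # one USE vector per sentence

# Broadcast the sentence vector to every timestep and concatenate it
# with the word embeddings to form the RNN input described above.
rnn_input = np.concatenate(
    [word_emb, np.tile(sent_emb, (seq_len, 1))], axis=1
)
```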
