Models and dataset necessary to run the code

@tingofurro tingofurro released this 25 Jun 04:26
· 12 commits to master since this release
fda918d

We release models and data needed to run the Summary Loop and use the models we trained.

Initial models

Here are the models needed to run train_summary_loop.py:

  • keyword_extractor.joblib: An sklearn pipeline that computes tf-idf scores of words over the BERT vocabulary; it is used by the Masking Procedure,
  • bert_coverage.bin: A bert-base-uncased finetuned model on the task of Coverage for the news domain,
  • fluency_news_bs32.bin: A GPT2 (base) model finetuned on a large corpus of news articles, used as the Fluency model,
  • gpt2_copier23.bin: A GPT2 (base) model that can be used as an initial point for the Summarizer model.

Sample dataset

We release a sample dataset of Wikinews news articles to get researchers started using the Summary Loop: wikinews.db.
We cannot release the full dataset we used for copyright reasons. We do not expect this sample to be large enough to train to best performance, and recommend finding larger datasets (such as Newsroom or CNN/DM) for full-fledged training.
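Since wikinews.db is an SQLite file, reading it takes only the standard library. The schema is not documented in these notes, so the `articles` table and `body` column below are hypothetical stand-ins built in memory purely for illustration.

```python
import sqlite3

# Build a tiny in-memory database as a stand-in for wikinews.db.
# The real schema is not documented here; an `articles` table with a
# `body` column is assumed purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute(
    "INSERT INTO articles (body) VALUES (?)",
    ("A sample Wikinews article used to seed the Summary Loop.",),
)
conn.commit()

# Typical consumption pattern: stream article bodies for training.
rows = conn.execute("SELECT body FROM articles").fetchall()
for (body,) in rows:
    print(body)
```

Pointing `sqlite3.connect` at the downloaded wikinews.db (and inspecting its actual tables via `SELECT name FROM sqlite_master`) follows the same pattern.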

Final models

We release three Summarizer models obtained through the Summary Loop procedure, one for each of three target lengths: summary_loop_length_12.bin, summary_loop_length_27.bin, and summary_loop_length_61.bin.
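Assuming the released .bin files are PyTorch state dicts (the usual convention for such checkpoints), restoring one follows the standard save/load pattern. The tiny linear model below is a stand-in for the actual GPT2 Summarizer architecture, so the shapes and file name are illustrative only.

```python
import os
import tempfile

import torch
import torch.nn as nn

# Tiny stand-in for a Summarizer checkpoint such as summary_loop_length_61.bin;
# the released .bin files are assumed to be PyTorch state dicts.
model = nn.Linear(4, 2)
ckpt_path = os.path.join(tempfile.gettempdir(), "summary_loop_demo.bin")
torch.save(model.state_dict(), ckpt_path)

# Loading on CPU mirrors how a released checkpoint would be restored
# into a freshly constructed model of the same architecture.
restored = nn.Linear(4, 2)
state = torch.load(ckpt_path, map_location="cpu")
restored.load_state_dict(state)
print(sorted(state.keys()))
```

`map_location="cpu"` makes the load work on machines without a GPU, which is convenient when only running inference with the released models.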