quettabit/pytorch_mtl

Multi-task learning of Abstractive Summarization with Entailment Generation, implemented using PyTorch

Information and Instructions about the code

  • To run the code, Python 3.6 and PyTorch 0.2 are required.
  • The training code is in mtl_learning.py; the testing code is in mtl_testing.py.
  • The code in mtl_learning.py is largely self-documented.
  • For training, the datasets should be present as pickle files in the pickles/ directory, and the embeddings file should be present in the data/ directory (see the sketch after this list).
  • To reload saved models and continue training, place the model files in the reloads/ directory.
  • Once all the necessary files and data are in place, run python mtl_learning.py for training and python mtl_testing.py for testing.
  • The training output is written to outputs.txt, and the testing output to test_output.txt.
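
The sketch below illustrates how the directory layout above maps onto the loading steps at startup. Every file name in it is a hypothetical placeholder; the exact names the scripts expect are defined inside mtl_learning.py.

    # A minimal sketch of the assumed file layout. Every file name below is
    # a hypothetical placeholder -- see mtl_learning.py for the real ones.
    import os
    import pickle

    import torch

    # Datasets are expected as pickle files under pickles/.
    with open('pickles/summarization_data.pkl', 'rb') as f:  # hypothetical name
        summarization_pairs = pickle.load(f)
    with open('pickles/entailment_data.pkl', 'rb') as f:     # hypothetical name
        entailment_pairs = pickle.load(f)

    # The pretrained embeddings file is expected under data/.
    embeddings_path = 'data/embeddings.txt'                  # hypothetical name

    # When continuing an interrupted run, previously saved models are
    # picked up from reloads/.
    encoder_checkpoint = 'reloads/encoder.pt'                # hypothetical name
    if os.path.exists(encoder_checkpoint):
        encoder_state = torch.load(encoder_checkpoint)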

Credits

The project is inspired by Pasunuru et al.'s (2017) work "Towards Improving Abstractive Summarization via Entailment Generation" and by Sean Robertson's PyTorch tutorial on seq2seq translation.
