
# Differentiable N-gram Objective on Abstractive Summarization

PyTorch implementation of the Differentiable N-gram Objective on Abstractive Summarization.

## Dependencies of our run

- Anaconda, python=3.8
- transformers 4.10.3
- datasets 1.12.1
- rouge-score 0.0.4
- rouge 1.0.1
- NLTK
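
A minimal sketch for verifying that the pinned versions above are installed (transformers, datasets, and NLTK expose a standard `__version__` attribute; the two rouge packages are easiest to check with `pip show`):

```python
# Quick sanity check of the pinned dependency versions.
import transformers, datasets, nltk

print("transformers", transformers.__version__)  # expect 4.10.3
print("datasets", datasets.__version__)          # expect 1.12.1
print("nltk", nltk.__version__)                  # no specific version pinned above
# rouge-score 0.0.4 and rouge 1.0.1: check with `pip show rouge-score rouge`
```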

## To run the code

CNN/DM:

```
python bart_test.py run_cnndm.json
```

XSUM:

```
python bart_test.py run_xsum.json
```

Update the JSON file to fit your required batch size, available devices, etc.

## To control running with/without the n-gram objective

The defaults, at lines 1434 to 1443 of bart_custom.py:

```python
USE_NgramsLoss = True    # if False, every n-gram objective is skipped
USE_1GramLoss = False
USE_2GramLoss = True
USE_3GramLoss = False
USE_4GramLoss = False
USE_5GramLoss = False
USE_BoN = False
USE_Ngrams_reward = False
USE_p2loss = False
```

- To activate BoN, set USE_BoN to True; every other objective will be skipped.
- To activate P-P2, set USE_p2loss to True and make sure USE_BoN is False; the other objectives will then be skipped.
- To activate the n-gram rewards objective, set any of the USE_XGramLoss flags to True, make sure USE_BoN and USE_p2loss are both False, and set USE_Ngrams_reward to True.
- To activate the n-gram matches objective, set any of the USE_XGramLoss flags to True, make sure USE_BoN and USE_p2loss are both False, and leave USE_Ngrams_reward as False.
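
For reference, a sketch of the four combinations described above. Flag names are taken from bart_custom.py; enabling USE_2GramLoss in the last two examples is only an illustration, since any of the USE_XGramLoss flags can be used:

```python
# Sketch of the four flag combinations described above (edit in bart_custom.py).
# USE_NgramsLoss must remain True for any of these objectives to take effect.

# 1) BoN objective: every other objective is skipped
USE_NgramsLoss = True
USE_BoN = True

# 2) P-P2 objective: the other objectives are skipped
USE_NgramsLoss = True
USE_BoN = False
USE_p2loss = True

# 3) n-gram rewards objective (2-gram used as the example)
USE_NgramsLoss = True
USE_BoN = False
USE_p2loss = False
USE_2GramLoss = True
USE_Ngrams_reward = True

# 4) n-gram matches objective (2-gram used as the example)
USE_NgramsLoss = True
USE_BoN = False
USE_p2loss = False
USE_2GramLoss = True
USE_Ngrams_reward = False
```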

The BART model code comes from https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_tf_bart.py

The BoN code comes from https://github.com/ictnlp/BoN-NAT/blob/master/model.py

The P-P2 code comes from https://github.com/ictnlp/GS4NMT/blob/master/models/losser.py
