This repository implements our proposed Transformer-based Summarization by Exploiting Relevant User Comments. Our model captures the natural relationship between relevant user posts and the content of the main documents by sharing information in the form of important words or tokens.
To do that, we equip the model with two key ingredients: social information and the power of transformers, i.e. BERT. More precisely, relevant user posts are used to enrich the representation of sentences in the main documents. The enrichment combines the hidden features of input sentences and user posts learned from BERT.
To capture finer-grained hidden representations, we stack an additional convolutional neural network (CNN) on top of BERT for classification. The final summary is created by selecting the top-m ranked sentences based on their importance scores, expressed as probabilities.
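The top-m selection step can be sketched as follows (a minimal illustration, not the repository's actual code): rank sentences by their predicted probability, keep the m best, and restore document order.

```python
# Hypothetical sketch of the top-m selection step: given per-sentence
# importance probabilities from the classifier, keep the m highest-scoring
# sentences and emit them in their original document order.
def select_summary(sentences, probs, m=3):
    # Rank sentence indices by probability and keep the top m.
    top = sorted(range(len(sentences)), key=lambda i: probs[i], reverse=True)[:m]
    # Sort the kept indices to restore document order.
    return [sentences[i] for i in sorted(top)]

sentences = ["s0", "s1", "s2", "s3"]
probs = [0.1, 0.9, 0.4, 0.8]
print(select_summary(sentences, probs, m=2))  # ['s1', 's3']
```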
This simple wrapper, built on Transformers (for the BERT model), Sentence-Transformers (for the Sentence-BERT model), and PyTorch, achieves a 0.284 ROUGE-1 score on the USA-CNN dataset and a 0.372 ROUGE-1 score on the SoLSCSum dataset.
Here we create a custom classification head on top of the BERT backbone. The sequence of a sentence and the title is fed into BERT, and the relevant user post is fed into Sentence-BERT. The [CLS-C] token represents the final vector of the relevant user post in the last layer of Sentence-BERT. We concatenate the 5 hidden representations and feed them to a convolutional neural network (CNN) for classification.
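A hedged sketch of such a CNN head follows. Random tensors stand in for the stacked hidden representations coming from BERT and Sentence-BERT, and the layer sizes, kernel width, and pooling choice are illustrative assumptions, not the exact values used in this repository.

```python
import torch
import torch.nn as nn

class CNNHead(nn.Module):
    """Toy CNN classification head over stacked hidden representations."""

    def __init__(self, hidden=768, n_filters=64):
        super().__init__()
        # Convolve across the stacked views, treating the hidden
        # dimension as input channels.
        self.conv = nn.Conv1d(hidden, n_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(n_filters, 1)

    def forward(self, stacked):
        # stacked: (batch, n_views, hidden)
        x = self.conv(stacked.transpose(1, 2))    # (batch, n_filters, n_views)
        x = torch.relu(x).max(dim=2).values       # max-pool over the views
        return torch.sigmoid(self.fc(x)).squeeze(-1)  # importance in (0, 1)

# Stand-ins for the 5 concatenated hidden representations mentioned above.
views = torch.randn(8, 5, 768)
probs = CNNHead()(views)
print(probs.shape)  # torch.Size([8])
```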
- Python: 3.6
- Torch: 1.6.0
- Transformers: 2.11.0
The `requirements.txt` file lists the library dependencies.
To evaluate the model with ROUGE-1.5.5, you need to create directories matching the expected paths:
To perform training, run the following command (you can also change the hyperparameters):

```
python main.py
```
You can also use this notebook to train on Google Colaboratory.
| Dataset | ROUGE-1 | ROUGE-2 |
|---|---|---|
| SoLSCSum | 0.372 | 0.300 |
| USA-CNN | 0.284 | 0.093 |