
Toward Unifying Text Segmentation and Long Document Summarization (Lodoss)

Our paper, "Toward Unifying Text Segmentation and Long Document Summarization," was accepted at EMNLP 2022. If you find the code useful, please cite the following paper.

Citation 🔖

@inproceedings{cho-etal-2022-toward,
  title={Toward Unifying Text Segmentation and Long Document Summarization},
  author={Cho, Sangwoo and Song, Kaiqiang and Wang, Xiaoyang and Liu, Fei and Yu, Dong},
  booktitle={Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing},
  year={2022}
}

Overview 🔭

We propose a method that learns robust sentence representations by performing summarization and segmentation simultaneously, further enhanced by an optimization-based regularizer that promotes the selection of diverse summary sentences.

  1. Lodoss-base: a standard extractive summarization model that predicts, for each sentence, the probability of its being included in the summary.
  2. Lodoss-joint: jointly predicts summarization and segmentation at a higher level of context (paragraph, section, etc.).
  3. Lodoss-full: additionally regularized with a determinantal point process (DPP) loss to encourage selection of diverse summary sentences (a minimal sketch follows this list).
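
For intuition, here is a minimal PyTorch sketch of a DPP-style subset regularizer (our own simplification with illustrative names, not the repo's actual code): the selected sentences are scored by the log-likelihood of an L-ensemble built from per-sentence quality scores and pairwise cosine similarity.

import torch
import torch.nn.functional as F

def dpp_nll(sent_emb, quality, selected_ids, jitter=1e-4):
    # L-ensemble kernel L = diag(q) @ S @ diag(q), where S is cosine similarity;
    # high-quality but mutually similar sentences make det(L_Y) small.
    emb = F.normalize(sent_emb, dim=-1)
    S = emb @ emb.T
    L = quality.unsqueeze(1) * S * quality.unsqueeze(0)
    n, k = L.size(0), len(selected_ids)
    L_sub = L[selected_ids][:, selected_ids] + jitter * torch.eye(k)
    # log P(Y) = logdet(L_Y) - logdet(L + I); training minimizes the negative
    return torch.logdet(L + torch.eye(n)) - torch.logdet(L_sub)

emb = torch.randn(12, 64)            # 12 sentences, toy embeddings
q = torch.sigmoid(torch.randn(12))   # per-sentence quality scores
print(dpp_nll(emb, q, [0, 4, 9]))

In training, a term like this would be added to the summarization (and segmentation) cross-entropy losses with a small weight, mirroring the --dpp_weight 0.1 flag described under Train.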

Environment 🌐

Create the environment with conda and pip.

conda env create -f environment.yml
conda activate lodoss

Our experiments were run with Python 3.9 and CUDA 11.3.
(For other CUDA versions, please install the corresponding packages.)
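
After activating the environment, a quick sanity check that PyTorch sees the GPU and was built against the expected CUDA version:

import torch

print(torch.__version__)          # the installed PyTorch build
print(torch.version.cuda)         # e.g. "11.3"
print(torch.cuda.is_available())  # True if a GPU is visible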

Data 💾

We provide the processed data for PubMed, arXiv, and VT-SSum.

  • The original datasets can be downloaded from PubMed, arXiv, or VT-SSum
  • Dataset format is based on Hugging Face Datasets.
  • Keys for each data instance (a minimal loading sketch follows this list):
    • article_id: str
    • abstract_list: List[str] / reference summary
    • section_list: List[str] / document segmented by sections
    • section_names: List[str] / names for each section
    • selected_ids: List[int] / oracle summary ids
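
Assuming the processed data loads as a Hugging Face dataset saved to disk (the path and split name below are placeholders; adjust them to your download), inspecting an instance might look like:

from datasets import load_from_disk

ds = load_from_disk("data/pubmed")["train"]   # hypothetical path and split
ex = ds[0]
print(ex["article_id"])
print(ex["section_names"])                    # one name per section
reference = " ".join(ex["abstract_list"])     # reference summary text
print(len(ex["section_list"]), "sections;", len(ex["selected_ids"]), "oracle sentence ids")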

Train 🚅

Please run the scripts under /run. For example:

cd Lodoss
bash run/train_pubmed_16K_base.sh

Update --data_dir to point to your data directory.

Some parameters to consider:

  • --max_input_len 16384: set this to 4096 to limit the input length
  • --is_longer_seq: enable this for sequence lengths longer than 4096
  • --is_seg: enables segmentation prediction for the Lodoss-joint model
  • --is_dpp: enables the DPP loss for the Lodoss-full model; set --dpp_weight 0.1 as well
  • --fp16: use this for the Lodoss-base and Lodoss-joint models. Computing the DPP loss is numerically unstable in half precision, so this flag is not recommended for Lodoss-full (use fp32).
  • --num_sent_inf 7: number of sentences to select at inference. For the best results use 7 for PubMed and 5 for arXiv, the average reference-summary length of each dataset (a top-k sketch follows this list).
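
For intuition on --num_sent_inf, extractive inference amounts to keeping the k highest-scoring sentences and restoring document order. A minimal sketch (illustrative, not the repo's code):

import torch

def select_top_k(sent_scores: torch.Tensor, k: int) -> list[int]:
    # keep the k highest-scoring sentences, reported in document order
    return sorted(torch.topk(sent_scores, k).indices.tolist())

scores = torch.rand(40)          # toy per-sentence summary probabilities
print(select_top_k(scores, 7))   # 7 for PubMed, 5 for arXiv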

Checkpoint ☁️

If you'd like to download our trained models on PubMed and arXiv, they are available here:

Model               PubMed  arXiv
16K_Lodoss-base     ⬇️      ⬇️
16K_Lodoss-joint    ⬇️      ⬇️
16K_Lodoss-full     ⬇️      ⬇️
16K_Lodoss-full-LG  ⬇️      ⬇️

Inference 🤔

With your own trained model or one of the pretrained models provided above, you can run inference on PubMed or arXiv.

cd Lodoss
bash run/test_pubmed_16K_base.sh

Please refer to the scripts under /run for running inference with the other models.
You can also save the generated summaries by enabling --save_summary.
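
If you enable --save_summary, one way to score the saved summaries externally is Google's rouge-score package (our suggestion; it is not bundled with this repo):

from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeLsum"], use_stemmer=True)
# placeholder strings; load these from your saved summaries and references
scores = scorer.score(target="the reference summary ...",
                      prediction="the generated summary ...")
print({name: round(s.fmeasure, 4) for name, s in scores.items()})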

License 📜

Copyright 2022 Tencent

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Disclaimer

This repo is for research purposes only. It is not an officially supported Tencent product.
