Optimizing an approximation of ROUGE for Multi-Document summarization

In this project, an approximation of ROUGE-N is derived. This approximation is linearly factorizable into the individual scores of sentences, which can then be optimized via Integer Linear Programming (ILP). This repository contains the code for our optimizer, which takes scored sentences and extracts the best summary according to the ROUGE approximation.
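The factorization into individual sentence scores plus pairwise overlap terms is what makes ILP applicable. As a rough illustration of the kind of program involved, here is a minimal sketch built with PuLP; the objective (sentence scores minus discounted pairwise overlaps), the linearization, and the choice of solver library are assumptions made for illustration only, not the formulation actually implemented in ilp_r.py.

import pulp

def toy_rouge_ilp(sentences, length_constraint, overlap, overlap_discount=0.1):
    # sentences: list of (sentence, score); overlap[(i, j)]: n-gram overlap of sentences i < j.
    # This is an illustrative sketch, not the repository's ilp_r.py.
    n = len(sentences)
    prob = pulp.LpProblem("rouge_approximation", pulp.LpMaximize)

    # x[i] = 1 iff sentence i is selected; y[(i, j)] = 1 iff both i and j are selected.
    x = [pulp.LpVariable("x_%d" % i, cat="Binary") for i in range(n)]
    y = {(i, j): pulp.LpVariable("y_%d_%d" % (i, j), cat="Binary")
         for i in range(n) for j in range(i + 1, n)}

    # Objective: sum of individual sentence scores minus discounted pairwise overlaps.
    prob += (pulp.lpSum(score * x[i] for i, (_, score) in enumerate(sentences))
             - overlap_discount * pulp.lpSum(overlap.get(pair, 0) * y[pair] for pair in y))

    # Length constraint, counted in words.
    prob += pulp.lpSum(len(sent.split()) * x[i]
                       for i, (sent, _) in enumerate(sentences)) <= length_constraint

    # Linearization: y_ij is forced to 1 whenever both x_i and x_j are 1.
    for (i, j) in y:
        prob += y[(i, j)] >= x[i] + x[j] - 1

    prob.solve()
    return [sent for i, (sent, _) in enumerate(sentences) if x[i].value() > 0.5]

The auxiliary variables y_ij linearize the quadratic overlap term: since the overlaps only enter the objective with a negative coefficient, the single constraint y_ij >= x_i + x_j - 1 is enough to force y_ij to 1 exactly when both sentences are selected, and the solver keeps it at 0 otherwise.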

If you reuse this software, please use the following citation:

@InProceedings{peyrard:2016:ACL,
  author = {Peyrard, Maxime and Eckle-Kohler, Judith},
  title = {Optimizing an Approximation of ROUGE - a Problem-Reduction Approach to Extractive Multi-Document Summarization},
  month = {aug},
  year = {2016},
  publisher = {Association for Computational Linguistics},
  booktitle = {Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)},
  pages = {1825--1836},
  volume = {Volume 1: Long Papers},
  location = {Berlin, Germany},
}

Abstract: This paper presents a problem-reduction approach to extractive multi-document summarization, where we propose a reduction to the problem of scoring individual sentences with their ROUGE scores based on supervised learning. For the summarization, which builds upon the sentence scores, we derive an approximation of the ROUGE score of a set of sentences, and define a principled discrete optimization problem for sentence selection. We perform a detailed experimental evaluation on two DUC datasets to demonstrate the validity of our approach and will make the code freely available. Our approach establishes a prerequisite for leveraging and exploiting a variety of existing techniques from machine learning for multi-document summarization.

Contact person: Maxime Peyrard, peyrard@aiphes.tu-darmstadt.de

http://www.ukp.tu-darmstadt.de/

http://www.tu-darmstadt.de/

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Requirements

Expected results

To check the installation, run the trivial toy example: python ilp_r.py

Parameter description

  • sentences: a list of the sentences with their scores, [(sentence, score)].
  • length_constraint: the length constraint of the summary, in number of words.
  • [overlap_discount]: hyper-parameter specifying the discount applied to the pairwise interactions between sentences.
  • [damping]: factor controlling the relative importance of sentence scores versus pairwise interactions.
  • [max_depth]: used for computational efficiency; only the top max_depth sentences are considered.
  • [N]: the N for N-grams, indicating whether the optimizer works with unigrams, bigrams, etc.
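Taken together, the parameters above suggest a call along the following lines. The import and the function name optimize_rouge are hypothetical placeholders introduced here for illustration; consult ilp_r.py for the actual entry point and its defaults.

# Hypothetical usage of the optimizer with the parameters described above.
# The import and the function name optimize_rouge are placeholders, not the
# repository's actual API; see ilp_r.py for the real entry point.
from ilp_r import optimize_rouge  # assumed name, for illustration only

sentences = [
    ("the cat sat on the mat", 0.8),
    ("a cat was sitting on a mat", 0.7),
    ("the weather was sunny", 0.3),
]

summary = optimize_rouge(
    sentences,              # list of (sentence, score) pairs
    length_constraint=10,   # maximum summary length, in words
    overlap_discount=0.1,   # discount on pairwise interactions between sentences
    damping=1.0,            # relative weight of scores vs. pairwise interactions
    max_depth=100,          # only the top 100 scored sentences are considered
    N=2,                    # optimize with bigrams
)
print(summary)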
