From bee48c417845dd748f4e1bbd4d7bc112c7800119 Mon Sep 17 00:00:00 2001
From: peterjliu
Date: Tue, 7 Jan 2020 14:12:16 -0800
Subject: [PATCH 1/3] Add Pegasus results.

---
 english/summarization.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/english/summarization.md b/english/summarization.md
index 1ed16427..97c83655 100644
--- a/english/summarization.md
+++ b/english/summarization.md
@@ -64,6 +64,7 @@ The first table covers Extractive Models, while the second covers abstractive ap
 
 | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | METEOR | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | :----: | -------------- | ---- |
+| PEGASUS (Zhang et al., 2019) | 44.17 | 21.47 | 41.11 | - | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | BART (Lewis et al., 2019) | 44.16 | 21.28 | 40.90 | - | [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) | [Official](https://github.com/pytorch/fairseq/tree/master/examples/bart) |
 | T5 (Raffel et al., 2019) | 43.52 | 21.55 | 40.69 | - | [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) | [Official](https://github.com/google-research/text-to-text-transfer-transformer) |
 | UniLM (Dong et al., 2019) | 43.33 | 20.21 | 40.51 | - | [Unified Language Model Pre-training for Natural Language Understanding and Generation](https://arxiv.org/pdf/1905.03197.pdf) | [Official](https://github.com/microsoft/unilm) |
@@ -94,6 +95,7 @@ Below Results are ranking by ROUGE-2 Scores.
 | Model | ROUGE-1 | ROUGE-2* | ROUGE-L | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | -------------- | ---- |
 | ControlCopying (Song et al., 2020) | 39.08 | 20.47 | 36.69 | [Controlling the Amount of Verbatim Copying in Abstractive Summarizatio](https://arxiv.org/pdf/1911.10390.pdf) | [Official](https://github.com/ucfnlp/control-over-copying) |
+| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | - | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | UniLM (Dong et al., 2019) | 38.90 | 20.05 | 36.00 | [Unified Language Model Pre-training for Natural Language Understanding and Generation](https://arxiv.org/pdf/1905.03197.pdf) | [Official](https://github.com/microsoft/unilm) |
 | BiSET (Wang et al., 2019) | 39.11 | 19.78 | 36.87 | [BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization](https://www.aclweb.org/anthology/P19-1207) | [Official](https://github.com/InitialBug/BiSET) |
 | MASS (Song et al., 2019) | 38.73 | 19.71 | 35.96 | [MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://arxiv.org/pdf/1905.02450v5.pdf) | [Official](https://github.com/microsoft/MASS) |
@@ -127,6 +129,7 @@ Evaluation metrics are ROUGE-1, ROUGE-2 and ROUGE-L.
 
 | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | -------------- | ---- |
+| PEGASUS (Zhang et al., 2019) | 47.21 | 24.56 | 39.25 | - | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25 | [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) | [Official](https://github.com/pytorch/fairseq/tree/master/examples/bart) |
 | BertSumExtAbs (Liu et al., 2019) | 38.81 | 16.50 | 31.27 | [Text Summarization with Pretrained Encoders](https://arxiv.org/pdf/1908.08345.pdf) | [Official](https://github.com/nlpyang/PreSumm) |
 | T-ConvS2S | 31.89 | 11.54 | 25.75 | [Don’t Give Me the Details, Just the Summary!](https://arxiv.org/pdf/1808.08745.pdf) | [Official](https://github.com/EdinburghNLP/XSum) |

From c09d06a8340b9c9866853e0aaee8b9e354de39a5 Mon Sep 17 00:00:00 2001
From: peterjliu
Date: Tue, 7 Jan 2020 14:30:25 -0800
Subject: [PATCH 2/3] Fix tables.

---
 english/summarization.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/english/summarization.md b/english/summarization.md
index 97c83655..5e603e5c 100644
--- a/english/summarization.md
+++ b/english/summarization.md
@@ -95,7 +95,7 @@ Below Results are ranking by ROUGE-2 Scores.
 | Model | ROUGE-1 | ROUGE-2* | ROUGE-L | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | -------------- | ---- |
 | ControlCopying (Song et al., 2020) | 39.08 | 20.47 | 36.69 | [Controlling the Amount of Verbatim Copying in Abstractive Summarizatio](https://arxiv.org/pdf/1911.10390.pdf) | [Official](https://github.com/ucfnlp/control-over-copying) |
-| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | - | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
+| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | UniLM (Dong et al., 2019) | 38.90 | 20.05 | 36.00 | [Unified Language Model Pre-training for Natural Language Understanding and Generation](https://arxiv.org/pdf/1905.03197.pdf) | [Official](https://github.com/microsoft/unilm) |
 | BiSET (Wang et al., 2019) | 39.11 | 19.78 | 36.87 | [BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization](https://www.aclweb.org/anthology/P19-1207) | [Official](https://github.com/InitialBug/BiSET) |
 | MASS (Song et al., 2019) | 38.73 | 19.71 | 35.96 | [MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://arxiv.org/pdf/1905.02450v5.pdf) | [Official](https://github.com/microsoft/MASS) |
@@ -129,7 +129,7 @@ Evaluation metrics are ROUGE-1, ROUGE-2 and ROUGE-L.
 
 | Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | -------------- | ---- |
-| PEGASUS (Zhang et al., 2019) | 47.21 | 24.56 | 39.25 | - | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
+| PEGASUS (Zhang et al., 2019) | 47.21 | 24.56 | 39.25 | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | BART (Lewis et al., 2019) | 45.14 | 22.27 | 37.25 | [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/pdf/1910.13461.pdf) | [Official](https://github.com/pytorch/fairseq/tree/master/examples/bart) |
 | BertSumExtAbs (Liu et al., 2019) | 38.81 | 16.50 | 31.27 | [Text Summarization with Pretrained Encoders](https://arxiv.org/pdf/1908.08345.pdf) | [Official](https://github.com/nlpyang/PreSumm) |
 | T-ConvS2S | 31.89 | 11.54 | 25.75 | [Don’t Give Me the Details, Just the Summary!](https://arxiv.org/pdf/1808.08745.pdf) | [Official](https://github.com/EdinburghNLP/XSum) |

From 1533c905ee8bfc2a3da4e8fb5fc8b5fbb73699e4 Mon Sep 17 00:00:00 2001
From: peterjliu
Date: Tue, 7 Jan 2020 14:34:44 -0800
Subject: [PATCH 3/3] Fix ordering gigaword.

---
 english/summarization.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/english/summarization.md b/english/summarization.md
index 5e603e5c..63fd92ef 100644
--- a/english/summarization.md
+++ b/english/summarization.md
@@ -95,8 +95,8 @@ Below Results are ranking by ROUGE-2 Scores.
 | Model | ROUGE-1 | ROUGE-2* | ROUGE-L | Paper / Source | Code |
 | --------------- | :-----: | :-----: | :-----: | -------------- | ---- |
 | ControlCopying (Song et al., 2020) | 39.08 | 20.47 | 36.69 | [Controlling the Amount of Verbatim Copying in Abstractive Summarizatio](https://arxiv.org/pdf/1911.10390.pdf) | [Official](https://github.com/ucfnlp/control-over-copying) |
-| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | UniLM (Dong et al., 2019) | 38.90 | 20.05 | 36.00 | [Unified Language Model Pre-training for Natural Language Understanding and Generation](https://arxiv.org/pdf/1905.03197.pdf) | [Official](https://github.com/microsoft/unilm) |
+| PEGASUS (Zhang et al., 2019) | 39.12 | 19.86 | 36.24 | [PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization](https://arxiv.org/pdf/1912.08777.pdf) | - |
 | BiSET (Wang et al., 2019) | 39.11 | 19.78 | 36.87 | [BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization](https://www.aclweb.org/anthology/P19-1207) | [Official](https://github.com/InitialBug/BiSET) |
 | MASS (Song et al., 2019) | 38.73 | 19.71 | 35.96 | [MASS: Masked Sequence to Sequence Pre-training for Language Generation](https://arxiv.org/pdf/1905.02450v5.pdf) | [Official](https://github.com/microsoft/MASS) |
 | Re^3 Sum (Cao et al., 2018) | 37.04 | 19.03 | 34.46 | [Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization](http://aclweb.org/anthology/P18-1015) | |
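The rows added in this series report ROUGE-1, ROUGE-2 and ROUGE-L, and the Gigaword table is ordered by ROUGE-2. As a sanity check when comparing a candidate system against these tables, the snippet below is a minimal sketch of scoring a single pair with the `rouge-score` Python package; the example strings, the stemming setting, and the use of single-pair scoring are illustrative assumptions, and the published numbers may rely on different tokenization or the original ROUGE tooling.

```python
# Minimal sketch: ROUGE-1/2/L for one (reference, prediction) pair.
# Hypothetical strings; real evaluations average scores over a test set.
from rouge_score import rouge_scorer

reference = "the cat sat on the mat"            # hypothetical gold summary
prediction = "the cat was sitting on the mat"   # hypothetical system output

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, prediction)    # score(target, prediction)

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.3f} "
          f"recall={score.recall:.3f} f1={score.fmeasure:.3f}")
```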