From 322a0410e10874ed4ecc0f2a78984ede8d58d766 Mon Sep 17 00:00:00 2001
From: prabod 
Date: Thu, 11 May 2023 19:14:34 +0700
Subject: [PATCH 01/10] Add model 2023-05-11-distilbart_cnn_12_6_en

---
 .../2023-05-11-distilbart_cnn_12_6_en.md      | 86 +++++++++++++++++++
 1 file changed, 86 insertions(+)
 create mode 100644 docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md
new file mode 100644
index 00000000000000..d6523d5da15cf1
--- /dev/null
+++ b/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_12_6
date: 2023-05-11
tags: [bart, summarization, cnn, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: [3.2, 3.0]
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN Daily Mail dataset.

## Predicted Entities



{:.btn-box}


[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.2_1683807053526.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.2_1683807053526.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +bart = BartTransformer.pretrained("distilbart_cnn_12_6") \ + .setTask("summarize:") \ + .setMaxOutputLength(200) \ + .setInputCols(["documents"]) \ + .setOutputCol("summaries") +``` +```scala +val bart = BartTransformer.pretrained("distilbart_cnn_12_6") + .setTask("summarize:") + .setMaxOutputLength(200) + .setInputCols("documents") + .setOutputCol("summaries") +``` +
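The snippet above only configures the annotator: `BartTransformer` consumes `DOCUMENT`-type annotations, so it needs a `DocumentAssembler` upstream to produce the `documents` column. The sketch below is illustrative rather than official usage — the sample text is a placeholder and the session setup via `sparknlp.start()` is assumed:

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import BartTransformer
from pyspark.ml import Pipeline

spark = sparknlp.start()

# DocumentAssembler turns raw text into DOCUMENT annotations for the transformer
document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

bart = BartTransformer.pretrained("distilbart_cnn_12_6") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")

pipeline = Pipeline(stages=[document_assembler, bart])

# Placeholder input; any DataFrame with a "text" column works
data = spark.createDataFrame(
    [["Harry Potter star Daniel Radcliffe gains access to a reported fortune as he turns 18 on Monday ..."]]
).toDF("text")

result = pipeline.fit(data).transform(data)
result.select("summaries.result").show(truncate=False)
```
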
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_12_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|870.4 MB|

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
``` \ No newline at end of file

From d393d5bfcca687ffa2fbed1ef35aef71711098db Mon Sep 17 00:00:00 2001
From: prabod 
Date: Thu, 11 May 2023 19:18:03 +0700
Subject: [PATCH 02/10] Add model 2023-05-11-distilbart_cnn_6_6_en

---
 .../2023-05-11-distilbart_cnn_6_6_en.md       | 86 +++++++++++++++++++
 1 file changed, 86 insertions(+)
 create mode 100644 docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md
new file mode 100644
index 00000000000000..3af32aaaa84d39
--- /dev/null
+++ b/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_6_6
date: 2023-05-11
tags: [bart, summarization, cnn, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: [3.2, 3.0]
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN Daily Mail dataset.

## Predicted Entities



{:.btn-box}


[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.2_1683807295608.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.2_1683807295608.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +bart = BartTransformer.pretrained("distilbart_cnn_6_6") \ + .setTask("summarize:") \ + .setMaxOutputLength(200) \ + .setInputCols(["documents"]) \ + .setOutputCol("summaries") +``` +```scala +val bart = BartTransformer.pretrained("distilbart_cnn_6_6") + .setTask("summarize:") + .setMaxOutputLength(200) + .setInputCols("documents") + .setOutputCol("summaries") +``` +
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_6_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|551.9 MB|

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
``` \ No newline at end of file

From a0527e30a96b9b7cd0a1b4a94f8fac31529f36be Mon Sep 17 00:00:00 2001
From: prabod 
Date: Thu, 11 May 2023 19:23:35 +0700
Subject: [PATCH 03/10] Add model 2023-05-11-distilbart_xsum_12_6_en

---
 .../2023-05-11-distilbart_xsum_12_6_en.md     | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md
new file mode 100644
index 00000000000000..eeb71ba9060ede
--- /dev/null
+++ b/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART XSUM
author: John Snow Labs
name: distilbart_xsum_12_6
date: 2023-05-11
tags: [bart, summarization, text_to_text, xsum, distil, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: [3.2, 3.0]
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the Extreme Summarization (XSum) Dataset.

## Predicted Entities



{:.btn-box}


[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.2_1683807498835.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.2_1683807498835.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +bart = BartTransformer.pretrained("distilbart_xsum_12_6") \ + .setTask("summarize:") \ + .setMaxOutputLength(200) \ + .setInputCols(["documents"]) \ + .setOutputCol("summaries") +``` +```scala +val bart = BartTransformer.pretrained("distilbart_xsum_12_6") + .setTask("summarize:") + .setMaxOutputLength(200) + .setInputCols("documents") + .setOutputCol("summaries") +``` +
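For quick experiments on individual strings rather than a full DataFrame, a fitted pipeline can also be wrapped in Spark NLP's `LightPipeline`. The sketch below is a minimal, illustrative example — it assumes the same `DocumentAssembler`-based wiring shown for the other models, and the input string is a placeholder:

```python
import sparknlp
from sparknlp.base import DocumentAssembler, LightPipeline
from sparknlp.annotator import BartTransformer
from pyspark.ml import Pipeline

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

bart = BartTransformer.pretrained("distilbart_xsum_12_6") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")

# Pretrained stages need no training data, so fitting on an empty DataFrame is enough
empty_df = spark.createDataFrame([[""]]).toDF("text")
model = Pipeline(stages=[document_assembler, bart]).fit(empty_df)

# LightPipeline runs locally on plain Python strings - handy for quick tests
light = LightPipeline(model)
annotations = light.annotate("A long news article to be compressed into a single-sentence summary ...")
print(annotations["summaries"])
```
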
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_xsum_12_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|733.7 MB|

## References

https://huggingface.co/sshleifer/distilbart-xsum-12-6

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
``` \ No newline at end of file

From 3221b8cc19102f9bbecfcbca0c314bb331145a7b Mon Sep 17 00:00:00 2001
From: prabod 
Date: Thu, 11 May 2023 19:27:58 +0700
Subject: [PATCH 04/10] Add model 2023-05-11-distilbart_xsum_6_6_en

---
 .../2023-05-11-distilbart_xsum_6_6_en.md      | 90 +++++++++++++++++++
 1 file changed, 90 insertions(+)
 create mode 100644 docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md
new file mode 100644
index 00000000000000..ca5152e640f3b6
--- /dev/null
+++ b/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART XSUM
author: John Snow Labs
name: distilbart_xsum_6_6
date: 2023-05-11
tags: [bart, summarization, xsum, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: [3.2, 3.0]
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the Extreme Summarization (XSum) Dataset.

## Predicted Entities



{:.btn-box}


[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.2_1683807832345.zip){:.button.button-orange}
[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.2_1683807832345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +bart = BartTransformer.pretrained("distilbart_xsum_6_6") \ + .setTask("summarize:") \ + .setMaxOutputLength(200) \ + .setInputCols(["documents"]) \ + .setOutputCol("summaries") +``` +```scala +val bart = BartTransformer.pretrained("distilbart_xsum_6_6") + .setTask("summarize:") + .setMaxOutputLength(200) + .setInputCols("documents") + .setOutputCol("summaries") +``` +
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_xsum_6_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|551.7 MB|

## References

https://huggingface.co/sshleifer/distilbart-xsum-6-6

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
``` \ No newline at end of file

From c26551fd9b9cc3aebb36ec820283df97bfdbf8ad Mon Sep 17 00:00:00 2001
From: prabod 
Date: Thu, 11 May 2023 19:33:36 +0700
Subject: [PATCH 05/10] Add model 2023-05-11-bart_large_cnn_en

---
 .../prabod/2023-05-11-bart_large_cnn_en.md    | 74 +++++++++++++++++++
 1 file changed, 74 insertions(+)
 create mode 100644 docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md

diff --git a/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md b/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md
new file mode 100644
index 00000000000000..269d8a5551b7f5
--- /dev/null
+++ b/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md
---
layout: model
title: BART (large-sized model), fine-tuned on CNN Daily Mail
author: John Snow Labs
name: bart_large_cnn
date: 2023-05-11
tags: [bart, summarization, cnn, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
spark_version: [3.2, 3.0]
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART model pre-trained on English language, and fine-tuned on the [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) dataset. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.
+ +## Predicted Entities + + + +{:.btn-box} + + +[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.2_1683808096812.zip){:.button.button-orange} +[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.2_1683808096812.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3} + +## How to use + +You can use this model for text summarization. + +
+{% include programmingLanguageSelectScalaPythonNLU.html %} +```python +bart = BartTransformer.pretrained("bart_large_cnn") \ + .setTask("summarize:") \ + .setMaxOutputLength(200) \ + .setInputCols(["documents"]) \ + .setOutputCol("summaries") +``` +```scala +val bart = BartTransformer.pretrained("bart_large_cnn") + .setTask("summarize:") + .setMaxOutputLength(200) + .setInputCols("documents") + .setOutputCol("summaries") +``` +
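As a worked example (illustrative: the article text is a placeholder and the session setup via `sparknlp.start()` is an assumption), the sketch below runs the full pipeline over a DataFrame of articles and extracts the generated summaries from the annotation structs:

```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import BartTransformer
from pyspark.ml import Pipeline
from pyspark.sql.functions import explode

spark = sparknlp.start()

document_assembler = DocumentAssembler() \
    .setInputCol("text") \
    .setOutputCol("documents")

bart = BartTransformer.pretrained("bart_large_cnn") \
    .setTask("summarize:") \
    .setMaxOutputLength(200) \
    .setInputCols(["documents"]) \
    .setOutputCol("summaries")

pipeline = Pipeline(stages=[document_assembler, bart])

articles = spark.createDataFrame([[
    "PG&E stated it scheduled the blackouts in response to forecasts for high "
    "winds amid dry conditions. The aim is to reduce the risk of wildfires."
]]).toDF("text")

result = pipeline.fit(articles).transform(articles)

# Each "summaries" entry is an Annotation struct; its .result field holds the text
result.select(explode("summaries.result").alias("summary")).show(truncate=False)
```
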
{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bart_large_cnn|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|975.3 MB| \ No newline at end of file

From 7760eb81098538aac9346fe28b6f1033eaae24fb Mon Sep 17 00:00:00 2001
From: Maziyar Panahi 
Date: Thu, 11 May 2023 15:06:24 +0200
Subject: [PATCH 06/10] Update 2023-05-11-bart_large_cnn_en.md

---
 docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md b/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md
index 269d8a5551b7f5..bbb4f79f416e91 100644
--- a/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md
+++ b/docs/_posts/prabod/2023-05-11-bart_large_cnn_en.md
---
layout: model
title: BART (large-sized model), fine-tuned on CNN Daily Mail
author: John Snow Labs
name: bart_large_cnn
date: 2023-05-11
tags: [bart, summarization, cnn, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
-spark_version: [3.2, 3.0]
+spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART model pre-trained on English language, and fine-tuned on the [CNN Daily Mail](https://huggingface.co/datasets/cnn_dailymail) dataset. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).

Disclaimer: The team releasing BART did not write a model card for this model, so this model card has been written by the Hugging Face team.

### Model description

BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.

BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering). This particular checkpoint has been fine-tuned on CNN Daily Mail, a large collection of text-summary pairs.

## Predicted Entities



{:.btn-box}


-[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.2_1683808096812.zip){:.button.button-orange}
-[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.2_1683808096812.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.0_1683808096812.zip){:.button.button-orange}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/bart_large_cnn_en_4.4.2_3.0_1683808096812.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use

You can use this model for text summarization.

<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
bart = BartTransformer.pretrained("bart_large_cnn") \
  .setTask("summarize:") \
  .setMaxOutputLength(200) \
  .setInputCols(["documents"]) \
  .setOutputCol("summaries")
```
```scala
val bart = BartTransformer.pretrained("bart_large_cnn")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|bart_large_cnn|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
-|Size:|975.3 MB| \ No newline at end of file
+|Size:|975.3 MB|

From f657255e8e740efc6479ba8c37918e23839668d1 Mon Sep 17 00:00:00 2001
From: Maziyar Panahi 
Date: Thu, 11 May 2023 15:06:43 +0200
Subject: [PATCH 07/10] Update 2023-05-11-distilbart_cnn_12_6_en.md

---
 docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md
index d6523d5da15cf1..3bc524718d97a6 100644
--- a/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md
+++ b/docs/_posts/prabod/2023-05-11-distilbart_cnn_12_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_12_6
date: 2023-05-11
tags: [bart, summarization, cnn, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
-spark_version: [3.2, 3.0]
+spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN Daily Mail dataset.

## Predicted Entities



{:.btn-box}


-[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.2_1683807053526.zip){:.button.button-orange}
-[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.2_1683807053526.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.0_1683807053526.zip){:.button.button-orange}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_12_6_en_4.4.2_3.0_1683807053526.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
bart = BartTransformer.pretrained("distilbart_cnn_12_6") \
  .setTask("summarize:") \
  .setMaxOutputLength(200) \
  .setInputCols(["documents"]) \
  .setOutputCol("summaries")
```
```scala
val bart = BartTransformer.pretrained("distilbart_cnn_12_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_12_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|870.4 MB|

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
-``` \ No newline at end of file
+```

From b1e000fd7e4b92ac72f6fa47b08f638f2cf18b60 Mon Sep 17 00:00:00 2001
From: Maziyar Panahi 
Date: Thu, 11 May 2023 15:07:23 +0200
Subject: [PATCH 08/10] Update 2023-05-11-distilbart_cnn_6_6_en.md

---
 docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md
index 3af32aaaa84d39..72730f4e25c11a 100644
--- a/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md
+++ b/docs/_posts/prabod/2023-05-11-distilbart_cnn_6_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART CNN
author: John Snow Labs
name: distilbart_cnn_6_6
date: 2023-05-11
tags: [bart, summarization, cnn, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
-spark_version: [3.2, 3.0]
+spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the CNN Daily Mail dataset.

## Predicted Entities



{:.btn-box}


-[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.2_1683807295608.zip){:.button.button-orange}
-[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.2_1683807295608.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.0_1683807295608.zip){:.button.button-orange}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_cnn_6_6_en_4.4.2_3.0_1683807295608.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
bart = BartTransformer.pretrained("distilbart_cnn_6_6") \
  .setTask("summarize:") \
  .setMaxOutputLength(200) \
  .setInputCols(["documents"]) \
  .setOutputCol("summaries")
```
```scala
val bart = BartTransformer.pretrained("distilbart_cnn_6_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_cnn_6_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|551.9 MB|

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
-``` \ No newline at end of file
+```

From 15582d7818ef38289e750a7227c110b4e6b2d1a7 Mon Sep 17 00:00:00 2001
From: Maziyar Panahi 
Date: Thu, 11 May 2023 15:07:49 +0200
Subject: [PATCH 09/10] Update 2023-05-11-distilbart_xsum_12_6_en.md

---
 docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md
index eeb71ba9060ede..45d5ac6fc19a34 100644
--- a/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md
+++ b/docs/_posts/prabod/2023-05-11-distilbart_xsum_12_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART XSUM
author: John Snow Labs
name: distilbart_xsum_12_6
date: 2023-05-11
tags: [bart, summarization, text_to_text, xsum, distil, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
-spark_version: [3.2, 3.0]
+spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the Extreme Summarization (XSum) Dataset.

## Predicted Entities



{:.btn-box}

-[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.2_1683807498835.zip){:.button.button-orange}
-[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.2_1683807498835.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.0_1683807498835.zip){:.button.button-orange}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_12_6_en_4.4.2_3.0_1683807498835.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
bart = BartTransformer.pretrained("distilbart_xsum_12_6") \
  .setTask("summarize:") \
  .setMaxOutputLength(200) \
  .setInputCols(["documents"]) \
  .setOutputCol("summaries")
```
```scala
val bart = BartTransformer.pretrained("distilbart_xsum_12_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_xsum_12_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|733.7 MB|

## References

https://huggingface.co/sshleifer/distilbart-xsum-12-6

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
-``` \ No newline at end of file
+```

From 5f74af8b9df681c1d8865770909339f4e11fef77 Mon Sep 17 00:00:00 2001
From: Maziyar Panahi 
Date: Thu, 11 May 2023 15:08:18 +0200
Subject: [PATCH 10/10] Update 2023-05-11-distilbart_xsum_6_6_en.md

---
 docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md b/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md
index ca5152e640f3b6..a18c746a5cf4ae 100644
--- a/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md
+++ b/docs/_posts/prabod/2023-05-11-distilbart_xsum_6_6_en.md
---
layout: model
title: Abstractive Summarization by BART - DistilBART XSUM
author: John Snow Labs
name: distilbart_xsum_6_6
date: 2023-05-11
tags: [bart, summarization, xsum, distil, text_to_text, en, open_source, tensorflow]
task: Summarization
language: en
edition: Spark NLP 4.4.2
-spark_version: [3.2, 3.0]
+spark_version: 3.0
supported: true
engine: tensorflow
annotator: BartTransformer
article_header:
  type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

BART (Bidirectional and Auto-Regressive Transformers) is a state-of-the-art language generation model introduced by Facebook AI in 2019 in the paper ["BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension"](https://arxiv.org/abs/1910.13461). It is based on the transformer architecture and is designed to handle a wide range of natural language processing tasks such as text generation, summarization, and machine translation.

This pre-trained model is DistilBART fine-tuned on the Extreme Summarization (XSum) Dataset.

## Predicted Entities



{:.btn-box}


-[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.2_1683807832345.zip){:.button.button-orange}
-[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.2_1683807832345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.0_1683807832345.zip){:.button.button-orange}
+[Copy S3 URI](s3://auxdata.johnsnowlabs.com/public/models/distilbart_xsum_6_6_en_4.4.2_3.0_1683807832345.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
bart = BartTransformer.pretrained("distilbart_xsum_6_6") \
  .setTask("summarize:") \
  .setMaxOutputLength(200) \
  .setInputCols(["documents"]) \
  .setOutputCol("summaries")
```
```scala
val bart = BartTransformer.pretrained("distilbart_xsum_6_6")
  .setTask("summarize:")
  .setMaxOutputLength(200)
  .setInputCols("documents")
  .setOutputCol("summaries")
```
</div>

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|distilbart_xsum_6_6|
|Compatibility:|Spark NLP 4.4.2+|
|License:|Open Source|
|Edition:|Official|
|Language:|en|
|Size:|551.7 MB|

## References

https://huggingface.co/sshleifer/distilbart-xsum-6-6

## Benchmarking

```bash
### Metrics for DistilBART models
| Model Name                 | Params (M) | Inference Time (ms) | Speedup | ROUGE-2 | ROUGE-L |
|:---------------------------|-----------:|--------------------:|--------:|--------:|--------:|
| distilbart-xsum-12-1       |        222 |                  90 |    2.54 |   18.31 |   33.37 |
| distilbart-xsum-6-6        |        230 |                 132 |    1.73 |   20.92 |   35.73 |
| distilbart-xsum-12-3       |        255 |                 106 |    2.16 |   21.37 |   36.39 |
| distilbart-xsum-9-6        |        268 |                 136 |    1.68 |   21.72 |   36.61 |
| bart-large-xsum (baseline) |        406 |                 229 |       1 |   21.85 |   36.50 |
| distilbart-xsum-12-6       |        306 |                 137 |    1.68 |   22.12 |   36.99 |
| bart-large-cnn (baseline)  |        406 |                 381 |       1 |   21.06 |   30.63 |
| distilbart-12-3-cnn        |        255 |                 214 |    1.78 |   20.57 |   30.00 |
| distilbart-12-6-cnn        |        306 |                 307 |    1.24 |   21.26 |   30.59 |
| distilbart-6-6-cnn         |        230 |                 182 |    2.09 |   20.17 |   29.70 |
-``` \ No newline at end of file
+```