From ccd4180f8afdd65e0b31b739998dfc6bf04308b9 Mon Sep 17 00:00:00 2001
From: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Date: Wed, 27 Jul 2022 10:08:59 +0200
Subject: [PATCH] [EncoderDecoder] Improve docs (#18271)
* Improve docs
* Improve docs of speech one as well
* Apply suggestions from code review
Co-authored-by: Niels Rogge
---
 docs/source/en/model_doc/encoder-decoder.mdx | 31 ++++-
 .../en/model_doc/speech-encoder-decoder.mdx | 92 ++++++++++++-
 .../en/model_doc/vision-encoder-decoder.mdx | 126 +++++++++++++++++-
 3 files changed, 237 insertions(+), 12 deletions(-)
diff --git a/docs/source/en/model_doc/encoder-decoder.mdx b/docs/source/en/model_doc/encoder-decoder.mdx
index 865abc6b26b57..8130b4945d4cc 100644
--- a/docs/source/en/model_doc/encoder-decoder.mdx
+++ b/docs/source/en/model_doc/encoder-decoder.mdx
@@ -27,9 +27,9 @@ any other models (see the examples for more information).
An application of this architecture could be to leverage two pretrained [`BertModel`] as the encoder
and decoder for a summarization model as was shown in: [Text Summarization with Pretrained Encoders](https://arxiv.org/abs/1908.08345) by Yang Liu and Mirella Lapata.
-## Randomly initializing [`EncoderDecoderModel`] from model configurations.
+## Randomly initializing `EncoderDecoderModel` from model configurations.
-[`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for both the encoder and the decoder.
+[`EncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`BertModel`] configuration for the encoder and the default [`BertForCausalLM`] configuration for the decoder.
```python
>>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel
@@ -41,7 +41,7 @@ and decoder for a summarization model as was shown in: [Text Summarization with
>>> model = EncoderDecoderModel(config=config)
```
-## Initialising [`EncoderDecoderModel`] from a pretrained encoder and a pretrained decoder.
+## Initialising `EncoderDecoderModel` from a pretrained encoder and a pretrained decoder.
[`EncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, *e.g.* BERT, can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder.
Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
@@ -55,14 +55,32 @@ To do so, the `EncoderDecoderModel` class provides a [`EncoderDecoderModel.from_
>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
```
-## Loading an existing [`EncoderDecoderModel`] checkpoint.
+## Loading an existing `EncoderDecoderModel` checkpoint and performing inference.
-To load fine-tuned checkpoints of the `EncoderDecoderModel` class, ['EncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
+To load fine-tuned checkpoints of the `EncoderDecoderModel` class, [`EncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
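+
+As a quick sketch (assuming a warm-started or fine-tuned model as above; the local directory below is purely illustrative), such a model can be saved with `save_pretrained(...)` and later reloaded with `from_pretrained(...)`:
+
+```python
+>>> from transformers import EncoderDecoderModel
+
+>>> # warm-start an encoder-decoder model from two pretrained BERT checkpoints
+>>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
+>>> # after training, save encoder, decoder and config to a (hypothetical) local directory ...
+>>> model.save_pretrained("./my_bert2bert")
+>>> # ... and reload it later as a regular checkpoint
+>>> model = EncoderDecoderModel.from_pretrained("./my_bert2bert")
+```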
+
+To perform inference, one uses the [`generate`] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
```python
->>> from transformers import EncoderDecoderModel
+>>> from transformers import AutoTokenizer, EncoderDecoderModel
+>>> # load a fine-tuned seq2seq model and corresponding tokenizer
>>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
+>>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail")
+
+>>> # let's perform inference on a long piece of text
+>>> ARTICLE_TO_SUMMARIZE = (
+... "PG&E stated it scheduled the blackouts in response to forecasts for high winds "
+... "amid dry conditions. The aim is to reduce the risk of wildfires. Nearly 800 thousand customers were "
+... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow."
+... )
+>>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids
+
+>>> # autoregressively generate summary (uses greedy decoding by default)
+>>> generated_ids = model.generate(input_ids)
+>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+>>> print(generated_text)
+nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow.
```
## Loading a PyTorch checkpoint into `TFEncoderDecoderModel`.
@@ -116,6 +134,7 @@ target sequence).
>>> # the forward function automatically creates the correct decoder_input_ids
>>> loss = model(input_ids=input_ids, labels=labels).loss
```
+
Detailed [colab](https://colab.research.google.com/drive/1WIk2bxglElfZewOHboPFNj8H44_VAyKE?usp=sharing#scrollTo=ZwQIEhKOrJpl) for training.
This model was contributed by [thomwolf](https://github.com/thomwolf). This model's TensorFlow and Flax versions
diff --git a/docs/source/en/model_doc/speech-encoder-decoder.mdx b/docs/source/en/model_doc/speech-encoder-decoder.mdx
index a0dd20bb4dee3..9aee71ed66696 100644
--- a/docs/source/en/model_doc/speech-encoder-decoder.mdx
+++ b/docs/source/en/model_doc/speech-encoder-decoder.mdx
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.
# Speech Encoder Decoder Models
-The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-sequence-to-text-sequence model
+The [`SpeechEncoderDecoderModel`] can be used to initialize a speech-to-text model
with any pretrained speech autoencoding model as the encoder (*e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert))
and any pretrained autoregressive model as the decoder.
The effectiveness of initializing speech-sequence-to-text-sequence models with pretrained checkpoints for speech
@@ -20,9 +20,95 @@ recognition and speech translation has *e.g.* been shown in [Large-Scale Self- a
Translation](https://arxiv.org/abs/2104.06678) by Changhan Wang, Anne Wu, Juan Pino, Alexei Baevski, Michael Auli, Alexis Conneau.
-An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in
-[Speech2Text2](speech_to_text_2).
+An example of how to use a [`SpeechEncoderDecoderModel`] for inference can be seen in [Speech2Text2](speech_to_text_2).
+## Randomly initializing `SpeechEncoderDecoderModel` from model configurations.
+
+[`SpeechEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`Wav2Vec2Model`] configuration for the encoder
+and the default [`BertForCausalLM`] configuration for the decoder.
+
+```python
+>>> from transformers import BertConfig, Wav2Vec2Config, SpeechEncoderDecoderConfig, SpeechEncoderDecoderModel
+
+>>> config_encoder = Wav2Vec2Config()
+>>> config_decoder = BertConfig()
+
+>>> config = SpeechEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
+>>> model = SpeechEncoderDecoderModel(config=config)
+```
+
+## Initialising `SpeechEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.
+
+[`SpeechEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based speech model, *e.g.* [Wav2Vec2](wav2vec2), [Hubert](hubert), can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder.
+Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
+Initializing [`SpeechEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
+To do so, the `SpeechEncoderDecoderModel` class provides a [`SpeechEncoderDecoderModel.from_encoder_decoder_pretrained`] method.
+
+```python
+>>> from transformers import SpeechEncoderDecoderModel
+
+>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
+... "facebook/hubert-large-ll60k", "bert-base-uncased"
+... )
+```
+
+## Loading an existing `SpeechEncoderDecoderModel` checkpoint and performing inference.
+
+To load fine-tuned checkpoints of the `SpeechEncoderDecoderModel` class, [`SpeechEncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
+
+To perform inference, one uses the [`generate`] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
+
+```python
+>>> from transformers import Wav2Vec2Processor, SpeechEncoderDecoderModel
+>>> from datasets import load_dataset
+>>> import torch
+
+>>> # load a fine-tuned speech translation model and corresponding processor
+>>> model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
+>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
+
+>>> # let's perform inference on a piece of English speech (which we'll translate to German)
+>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
+>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
+
+>>> # autoregressively generate translation (uses greedy decoding by default)
+>>> generated_ids = model.generate(input_values)
+>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
+>>> print(generated_text)
+Mr. Quilter ist der Apostel der Mittelschicht und wir freuen uns, sein Evangelium willkommen heißen zu können.
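+
+>>> # as a sketch of an alternative, other decoding strategies can be selected via
+>>> # keyword arguments of `generate`, e.g. beam search:
+>>> # generated_ids = model.generate(input_values, num_beams=5)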
+```
+
+## Training
+
+Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (speech, text) pairs.
+As you can see, only 2 inputs are required for the model in order to compute a loss: `input_values` (which are the
+speech inputs) and `labels` (which are the `input_ids` of the encoded target sequence).
+
+```python
+>>> from transformers import BertTokenizer, Wav2Vec2Processor, SpeechEncoderDecoderModel
+>>> from datasets import load_dataset
+
+>>> processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
+>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+>>> model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
+... "facebook/wav2vec2-base-960h", "bert-base-uncased"
+... )
+
+>>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
+>>> model.config.pad_token_id = processor.tokenizer.pad_token_id
+
+>>> # load a speech input
+>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
+>>> input_values = processor(ds[0]["audio"]["array"], return_tensors="pt").input_values
+
+>>> # load its corresponding transcription and tokenize it with the decoder's tokenizer to create labels
+>>> labels = tokenizer(ds[0]["text"], return_tensors="pt").input_ids
+
+>>> # the forward function automatically creates the correct decoder_input_ids
+>>> loss = model(input_values, labels=labels).loss
+>>> loss.backward()
+```
## SpeechEncoderDecoderConfig
diff --git a/docs/source/en/model_doc/vision-encoder-decoder.mdx b/docs/source/en/model_doc/vision-encoder-decoder.mdx
index 987924d4ad7c0..3b386868e91d0 100644
--- a/docs/source/en/model_doc/vision-encoder-decoder.mdx
+++ b/docs/source/en/model_doc/vision-encoder-decoder.mdx
@@ -12,16 +12,136 @@ specific language governing permissions and limitations under the License.
# Vision Encoder Decoder Models
-The [`VisionEncoderDecoderModel`] can be used to initialize an image-to-text-sequence model with any
-pretrained Transformer-based vision autoencoding model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin))
+## Overview
+
+The [`VisionEncoderDecoderModel`] can be used to initialize an image-to-text model with any
+pretrained Transformer-based vision model as the encoder (*e.g.* [ViT](vit), [BEiT](beit), [DeiT](deit), [Swin](swin))
and any pretrained language model as the decoder (*e.g.* [RoBERTa](roberta), [GPT2](gpt2), [BERT](bert), [DistilBERT](distilbert)).
The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) [TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models](https://arxiv.org/abs/2109.10282) by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.
-An example of how to use a [`VisionEncoderDecoderModel`] for inference can be seen in [TrOCR](trocr).
+After such a [`VisionEncoderDecoderModel`] has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples below
+for more information).
+
+An example application is image captioning, in which the encoder is used to encode the image, after which an autoregressive language model generates
+the caption. Another example is optical character recognition. Refer to [TrOCR](trocr), which is an instance of [`VisionEncoderDecoderModel`].
+
+## Randomly initializing `VisionEncoderDecoderModel` from model configurations.
+
+[`VisionEncoderDecoderModel`] can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default [`ViTModel`] configuration for the encoder
+and the default [`BertForCausalLM`] configuration for the decoder.
+
+```python
+>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel
+
+>>> config_encoder = ViTConfig()
+>>> config_decoder = BertConfig()
+
+>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)
+>>> model = VisionEncoderDecoderModel(config=config)
+```
+
+## Initialising `VisionEncoderDecoderModel` from a pretrained encoder and a pretrained decoder.
+
+[`VisionEncoderDecoderModel`] can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained Transformer-based vision model, *e.g.* [Swin](swin), can serve as the encoder and both pretrained auto-encoding models, *e.g.* BERT, pretrained causal language models, *e.g.* GPT2, as well as the pretrained decoder part of sequence-to-sequence models, *e.g.* decoder of BART, can be used as the decoder.
+Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized.
+Initializing [`VisionEncoderDecoderModel`] from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in [the *Warm-starting-encoder-decoder blog post*](https://huggingface.co/blog/warm-starting-encoder-decoder).
+To do so, the `VisionEncoderDecoderModel` class provides a [`VisionEncoderDecoderModel.from_encoder_decoder_pretrained`] method.
+
+```python
+>>> from transformers import VisionEncoderDecoderModel
+
+>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
+... "microsoft/swin-base-patch4-window7-224-in22k", "bert-base-uncased"
+... )
+```
+
+## Loading an existing `VisionEncoderDecoderModel` checkpoint and performing inference.
+
+To load fine-tuned checkpoints of the `VisionEncoderDecoderModel` class, [`VisionEncoderDecoderModel`] provides the `from_pretrained(...)` method just like any other model architecture in Transformers.
+
+To perform inference, one uses the [`generate`] method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling.
+
+```python
+>>> import requests
+>>> from PIL import Image
+
+>>> from transformers import GPT2TokenizerFast, ViTFeatureExtractor, VisionEncoderDecoderModel
+
+>>> # load a fine-tuned image captioning model and corresponding tokenizer and feature extractor
+>>> model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
+>>> tokenizer = GPT2TokenizerFast.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
+>>> feature_extractor = ViTFeatureExtractor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
+
+>>> # let's perform inference on an image
+>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
+>>> image = Image.open(requests.get(url, stream=True).raw)
+>>> pixel_values = feature_extractor(image, return_tensors="pt").pixel_values
+
+>>> # autoregressively generate caption (uses greedy decoding by default)
+>>> generated_ids = model.generate(pixel_values)
+>>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
+>>> print(generated_text)
+a cat laying on a blanket next to a cat laying on a bed
+```
+
+## Loading a PyTorch checkpoint into `TFVisionEncoderDecoderModel`.
+
+[`TFVisionEncoderDecoderModel.from_pretrained`] currently doesn't support initializing the model from a
+PyTorch checkpoint. Passing `from_pt=True` to this method will throw an exception. If there are only PyTorch
+checkpoints for a particular vision encoder-decoder model, a workaround is:
+
+```python
+>>> from transformers import VisionEncoderDecoderModel, TFVisionEncoderDecoderModel
+
+>>> _model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
+
+>>> _model.encoder.save_pretrained("./encoder")
+>>> _model.decoder.save_pretrained("./decoder")
+
+>>> model = TFVisionEncoderDecoderModel.from_encoder_decoder_pretrained(
+... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True
+... )
+>>> # This is only for copying some specific attributes of this particular model.
+>>> model.config = _model.config
+```
+
+## Training
+
+Once the model is created, it can be fine-tuned similarly to BART, T5 or any other encoder-decoder model on a dataset of (image, text) pairs.
+As you can see, only 2 inputs are required for the model in order to compute a loss: `pixel_values` (which are the
+images) and `labels` (which are the `input_ids` of the encoded target sequence).
+
+```python
+>>> from transformers import ViTFeatureExtractor, BertTokenizer, VisionEncoderDecoderModel
+>>> from datasets import load_dataset
+
+>>> feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
+>>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
+>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
+... "google/vit-base-patch16-224-in21k", "bert-base-uncased"
+... )
+
+>>> model.config.decoder_start_token_id = tokenizer.cls_token_id
+>>> model.config.pad_token_id = tokenizer.pad_token_id
+
+>>> dataset = load_dataset("huggingface/cats-image")
+>>> image = dataset["test"]["image"][0]
+>>> pixel_values = feature_extractor(image, return_tensors="pt").pixel_values
+
+>>> labels = tokenizer(
+... "an image of two cats chilling on a couch",
+... return_tensors="pt",
+... ).input_ids
+
+>>> # the forward function automatically creates the correct decoder_input_ids
+>>> loss = model(pixel_values=pixel_values, labels=labels).loss
+```
+This model was contributed by [nielsr](https://github.com/nielsrogge).
This model's TensorFlow and Flax versions
+were contributed by [ydshieh](https://github.com/ydshieh).
## VisionEncoderDecoderConfig