
# Funnel Transformer

## Overview

The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing. It is a bidirectional transformer model, like BERT, but with a pooling operation after each block of layers, similar to the pooling used in traditional convolutional neural networks (CNNs) in computer vision.

The abstract from the paper is the following:

With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension.

This model was contributed by sgugger. The original code can be found here.

## Usage tips

- Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers: it is divided by 2, which speeds up the computation of the following hidden states. The base model therefore has a final sequence length that is a quarter of the original one. This model can be used directly for tasks that only require a sentence summary (like sequence classification or multiple choice). For other tasks, the full model is used; it has a decoder that upsamples the final hidden states to the same sequence length as the input.
- For tasks such as classification, the shorter sequence is not a problem, but tasks like masked language modeling or token classification need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That's why there are two versions of each checkpoint: the version suffixed with "-base" contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers.
- The Funnel Transformer checkpoints are all available with a full version and a base version. The full versions should be used for [FunnelModel], [FunnelForPreTraining], [FunnelForMaskedLM], [FunnelForTokenClassification] and [FunnelForQuestionAnswering]. The base versions should be used for [FunnelBaseModel], [FunnelForSequenceClassification] and [FunnelForMultipleChoice], as illustrated in the sketch after this list.
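The difference between the two versions shows up directly in the shape of the hidden states. Below is a minimal sketch comparing output sequence lengths; the `funnel-transformer/small` and `funnel-transformer/small-base` checkpoints are one example pair, and any Funnel checkpoint pair behaves the same way:

```python
import torch
from transformers import FunnelTokenizer, FunnelBaseModel, FunnelModel

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")
inputs = tokenizer("Funnel pools the hidden states between blocks.", return_tensors="pt")

# Base version: three blocks only; the hidden states stay pooled
# (roughly a quarter of the input length).
base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
with torch.no_grad():
    base_outputs = base_model(**inputs)

# Full version: adds the decoder that upsamples back to the input length.
full_model = FunnelModel.from_pretrained("funnel-transformer/small")
with torch.no_grad():
    full_outputs = full_model(**inputs)

print(inputs["input_ids"].shape[1])             # input sequence length
print(base_outputs.last_hidden_state.shape[1])  # pooled length (about a quarter)
print(full_outputs.last_hidden_state.shape[1])  # same as the input length
```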

## Resources

## FunnelConfig

[[autodoc]] FunnelConfig

## FunnelTokenizer

[[autodoc]] FunnelTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
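As a quick illustration of the methods listed above, here is a minimal sketch showing how the tokenizer assembles a sequence pair; the input sentences are arbitrary placeholders:

```python
from transformers import FunnelTokenizer

tokenizer = FunnelTokenizer.from_pretrained("funnel-transformer/small")

# Token ids for the two sequences, without any special tokens yet.
ids_a = tokenizer.encode("first sequence", add_special_tokens=False)
ids_b = tokenizer.encode("second sequence", add_special_tokens=False)

# Adds the model's special tokens around and between the two sequences.
pair_ids = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

# 1 marks a special token, 0 marks a regular token.
special_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)

# Segment ids telling the model which tokens belong to which sequence.
type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

print(tokenizer.convert_ids_to_tokens(pair_ids))
print(special_mask)
print(type_ids)
```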

## FunnelTokenizerFast

[[autodoc]] FunnelTokenizerFast

## Funnel specific outputs

[[autodoc]] models.funnel.modeling_funnel.FunnelForPreTrainingOutput

[[autodoc]] models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput

## FunnelBaseModel

[[autodoc]] FunnelBaseModel
    - forward

## FunnelModel

[[autodoc]] FunnelModel
    - forward

## FunnelForPreTraining

[[autodoc]] FunnelForPreTraining
    - forward

## FunnelForMaskedLM

[[autodoc]] FunnelForMaskedLM
    - forward

## FunnelForSequenceClassification

[[autodoc]] FunnelForSequenceClassification
    - forward

## FunnelForMultipleChoice

[[autodoc]] FunnelForMultipleChoice
    - forward

## FunnelForTokenClassification

[[autodoc]] FunnelForTokenClassification
    - forward

## FunnelForQuestionAnswering

[[autodoc]] FunnelForQuestionAnswering
    - forward

## TFFunnelBaseModel

[[autodoc]] TFFunnelBaseModel
    - call

## TFFunnelModel

[[autodoc]] TFFunnelModel
    - call

## TFFunnelForPreTraining

[[autodoc]] TFFunnelForPreTraining
    - call

## TFFunnelForMaskedLM

[[autodoc]] TFFunnelForMaskedLM
    - call

## TFFunnelForSequenceClassification

[[autodoc]] TFFunnelForSequenceClassification
    - call

## TFFunnelForMultipleChoice

[[autodoc]] TFFunnelForMultipleChoice
    - call

## TFFunnelForTokenClassification

[[autodoc]] TFFunnelForTokenClassification
    - call

## TFFunnelForQuestionAnswering

[[autodoc]] TFFunnelForQuestionAnswering
    - call