(fix facebookresearch#2177) Erase the encoder_embed_dim default (facebookresearch#2213)

Summary:
Fix fairinternal/fairseq-py#2177 for the transformer conversion to Hydra.

The defaults are now handled differently, so when you use the legacy Namespace configuration you end up with a default encoder_embed_dim, which in the VGG case builds the encoder attention in the TransformerDecoderLayer with the wrong dimensions.
The easiest solution is to erase the default value for encoder_embed_dim (by forcing it to None) when converting the VGG config to the raw Namespace for the decoder layer.
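
For context, here is a minimal sketch (not the actual fairseq code, and using example dimension values) of why forcing the attribute to None works, assuming the decoder layer falls back to its own embedding dimension when encoder_embed_dim is unset:

```python
import argparse

def encoder_attention_kdim(args):
    # Sketch of the fallback: when encoder_embed_dim is None or absent, the
    # encoder attention is built with the decoder's own embedding dimension,
    # so the key/value projections match the VGG decoder inputs.
    kdim = getattr(args, "encoder_embed_dim", None)
    return kdim if kdim is not None else args.decoder_embed_dim

args = argparse.Namespace()
args.decoder_embed_dim = 128      # VGG decoder dimension (example value)
args.encoder_embed_dim = None     # the fix: erase any inherited default
assert encoder_attention_kdim(args) == 128

args.encoder_embed_dim = 512      # a leaked transformer default instead
assert encoder_attention_kdim(args) == 512  # mismatched attention dimensions
```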

Tested with:
`pytest tests/speech_recognition/test_vggtransformer.py -k Transformer`

Pull Request resolved: fairinternal/fairseq-py#2213

Test Plan: pytest tests/speech_recognition/test_vggtransformer.py -k Transformer

Reviewed By: sshleifer

Differential Revision: D30425143

Pulled By: Mortimerp9

fbshipit-source-id: 92f6dea2ffbb68e441700bcc55274b3167a587b3
Mortimerp9 authored and Søren Winkel Holm committed Oct 4, 2021
1 parent 07fb9c6 commit f1d4d9c
Showing 1 changed file with 1 addition and 0 deletions.
examples/speech_recognition/models/vggtransformer.py (1 addition, 0 deletions)
@@ -203,6 +203,7 @@ def prepare_transformer_decoder_params(
     relu_dropout,
 ):
     args = argparse.Namespace()
+    args.encoder_embed_dim = None
     args.decoder_embed_dim = input_dim
     args.decoder_attention_heads = num_heads
     args.attention_dropout = attention_dropout
