Summary:
Fix fairinternal/fairseq-py#2177 for the transformer conversion to Hydra. Defaults are now handled differently, so when you use the legacy Namespace configuration you end up with a default `encoder_embed_dim`, which in the VGG case sets up the encoder attention in the TransformerDecoderLayer with the wrong dimensions. The easiest solution is to erase the default value of `encoder_embed_dim` (by forcing it to None) when converting the VGG config to the raw Namespace for the decoder layer.

Tested with: `pytest tests/speech_recognition/test_vggtransformer.py -k Transformer`

Pull Request resolved: fairinternal/fairseq-py#2213

Test Plan: `pytest tests/speech_recognition/test_vggtransformer.py -k Transformer`

Reviewed By: sshleifer

Differential Revision: D30425143

Pulled By: Mortimerp9

fbshipit-source-id: 92f6dea2ffbb68e441700bcc55274b3167a587b3
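The fix described above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual fairseq code: the function name and config shape are assumptions, and only `encoder_embed_dim` comes from the commit message.

```python
from argparse import Namespace

def vgg_config_to_namespace(config: dict) -> Namespace:
    """Hypothetical sketch: convert a VGG-transformer config dict to a
    legacy Namespace for the TransformerDecoderLayer."""
    args = Namespace(**config)
    # Erase the inherited default so the decoder layer's encoder attention
    # takes its dimension from the VGG encoder output rather than from a
    # stale default encoder_embed_dim.
    args.encoder_embed_dim = None
    return args
```

With this conversion, any default such as `encoder_embed_dim=1024` carried over from the legacy configuration is cleared before the decoder layer is built.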