Here we present the instructions to reproduce the machine translation results from our ICML 2020 paper PowerNorm: Rethinking Batch Normalization in Transformers (video). PowerNorm itself is implemented here.

Here is an illustration of batch/power normalization (left) and layer normalization (right). The entries colored in blue show the components used for computing the statistics.
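The difference in the illustration can be sketched numerically: batch/power normalization computes statistics per feature across the batch and sequence dimensions, while layer normalization computes them per token across the feature dimension. A minimal NumPy sketch (toy shapes, not the repo's actual implementation; the power-norm line reflects the paper's core idea of dividing by the quadratic mean, with running estimates used at inference):

```python
import numpy as np

# Toy Transformer activations: (batch, seq_len, features).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8, 16))
eps = 1e-5

# Batch-norm statistics: per feature, across batch and sequence
# (the blue entries on the left of the illustration).
bn_mean = x.mean(axis=(0, 1), keepdims=True)   # shape (1, 1, 16)
bn_var = x.var(axis=(0, 1), keepdims=True)
x_bn = (x - bn_mean) / np.sqrt(bn_var + eps)

# Power normalization (sketch): no mean subtraction; divide by the
# quadratic mean of the activations over the same axes.
psi_sq = (x ** 2).mean(axis=(0, 1), keepdims=True)
x_pn = x / np.sqrt(psi_sq + eps)

# Layer-norm statistics: per token, across the feature dimension
# (the right of the illustration).
ln_mean = x.mean(axis=-1, keepdims=True)       # shape (4, 8, 1)
ln_var = x.var(axis=-1, keepdims=True)
x_ln = (x - ln_mean) / np.sqrt(ln_var + eps)
```

Note that only the axes over which the statistics are taken differ; the normalized tensors all keep the input shape.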

The code is based on the open-sourced fairseq (v0.8.0). Follow this link for detailed documentation of the original code base, and this link for examples of training baseline Transformer models for machine translation with fairseq.

We also provide pre-trained models for several benchmark translation datasets.

Requirements and Installation

The fairseq library we use requires PyTorch version >= 1.2.0. Please follow the instructions here.

After PyTorch is installed, you can install fairseq with:

conda env create --file env.yml
python setup.py build develop


The scripts for training and testing PowerNorm are located in the trans-scripts folder. Please refer to this page to preprocess and binarize the data, or use the data we provide in the next section. To reproduce the results in Table 1 yourself:

# IWSLT14 De-En
## To train the model; the script takes four arguments:
## ./trans-scripts/train/ <encoder_norm_self_attn> <encoder_norm_ffn> <decoder_norm_self_attn> <decoder_norm_ffn>
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/train/ power power layer layer
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/train/ batch batch layer layer
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/train/ layer layer layer layer

## To test a checkpoint
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/test/ output_directory

# WMT14 En-De big
## To train the model (we used 128 GPUs for our experiments); same arguments as above:
## ./trans-scripts/train/ <encoder_norm_self_attn> <encoder_norm_ffn> <decoder_norm_self_attn> <decoder_norm_ffn>
$ CUDA_VISIBLE_DEVICES=0,1,2,3 ./trans-scripts/train/ power power layer layer

## To test a checkpoint
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/test/ output_directory

Pre-trained Models

We provide the following pre-trained models and pre-processed, binarized datasets for reproduction:

| Description | Dataset | Model | Test set(s) |
| --- | --- | --- | --- |
| Transformer-PN small | IWSLT14 German-English | download (.tbz2) | IWSLT14 test set (shared vocab): download (.tbz2) |

Example usage:

# IWSLT14 De-En
## at trans-net/translation/, after downloading the .tbz2 file
$ tar xf powernorm_pretrain_iwslt.tbz2 
$ OUTPUT_DIR=iwslt14_de_en/powernorm_pretrain_iwslt
$ CUDA_VISIBLE_DEVICES=0 ./trans-scripts/test/ $OUTPUT_DIR $CKPT
| Generate test with beam=5: BLEU4 = 35.87, 69.5/44.2/30.1/20.9 (BP=0.961, ratio=0.962, syslen=126196, reflen=131156)
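The last line above is the summary printed by fairseq's generation script. When scripting evaluations, the corpus-level BLEU score can be pulled out of such a line with a small regex; the exact log format may vary across fairseq versions, so treat this pattern as an assumption:

```python
import re

# Example summary line (copied from the output above).
line = ("| Generate test with beam=5: BLEU4 = 35.87, 69.5/44.2/30.1/20.9 "
        "(BP=0.961, ratio=0.962, syslen=126196, reflen=131156)")

# Extract the corpus-level BLEU4 score as a float.
match = re.search(r"BLEU4 = ([\d.]+)", line)
bleu = float(match.group(1)) if match else None
print(bleu)  # 35.87
```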


PowerNorm was developed as part of the following paper. We would appreciate it if you cite the paper when you find this library useful for your work:

@inproceedings{shen2020powernorm,
  title={PowerNorm: Rethinking Batch Normalization in Transformers},
  author={Shen, Sheng and Yao, Zhewei and Gholami, Amir and Mahoney, Michael and Keutzer, Kurt},
  booktitle={International Conference on Machine Learning (ICML)},
  year={2020}
}

