Decorrelated Batch Normalization

Code for reproducing the results in the following paper:

Decorrelated Batch Normalization
Lei Huang, Dawei Yang, Bo Lang, Jia Deng
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. arXiv:1804.08450

Requirements and Dependency

  • Install MAGMA (instructions are in 'Install'). Note: MAGMA is required for SVD on the GPU; without it, the code runs on CPU only, while all the CNN experiments in the paper were run on GPU.
  • Install Torch with CUDA (for GPU support). Note that cutorch must be compiled with MAGMA support if you have installed MAGMA and set the environment variables correctly.
  • Install cudnn v5.
  • Install the dependency optnet by:
luarocks install optnet


1. Reproduce the results for PCA whitening:

  • Run:

This script downloads MNIST automatically; put the mnist.t7/ directory under ./dataset/. The experiment results will be saved in ./set_result/MLP/.
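DBN's core operation is whitening the activations via an eigendecomposition of their covariance, which is why MAGMA's GPU SVD support matters above. As a rough, hypothetical illustration of what PCA/ZCA whitening does (a NumPy sketch, not the paper's Torch code; names and toy data are made up):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten rows of X (n_samples x n_features): decorrelated features, unit variance."""
    Xc = X - X.mean(axis=0)                        # center each feature
    cov = Xc.T @ Xc / Xc.shape[0]                  # sample covariance
    lam, V = np.linalg.eigh(cov)                   # eigendecomposition (the step MAGMA accelerates on GPU)
    W = V @ np.diag(1.0 / np.sqrt(lam + eps)) @ V.T  # ZCA whitening matrix
    return Xc @ W

rng = np.random.default_rng(0)
# Correlated toy data: mix 8 independent features with a well-conditioned matrix.
X = rng.normal(size=(2000, 8)) @ (np.eye(8) + 0.5 * np.ones((8, 8)))
Xw = zca_whiten(X)  # covariance of Xw is (approximately) the identity
```

Dropping the trailing `@ V.T` gives PCA whitening instead of ZCA; DBN builds a whitening step of this kind into the normalization layer.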

2. Reproduce the results for MLP architecture:

(1) FIM experiments on YaleB dataset
  • Prepare the data: download the YaleB dataset here, and put the data files under ./dataset/ so that the paths are ./dataset/YaleB/YaleB_train.dat and ./dataset/YaleB/YaleB_test.dat.
  • Run:

The experiment results will be saved in ./set_result/MLP/.

You can experiment with different hyperparameters by running the corresponding scripts.

(2) Experiments on PIE dataset
  • Prepare the data: download the PIE dataset here, and put the data file under ./dataset/ such that the paths look like ./dataset/PIE/PIE_train.dat and ./dataset/PIE/PIE_test.dat.
  • To experiment with different group sizes, run:
  • To obtain different baseline performances, execute:
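In DBN, whitening is applied to disjoint groups of features rather than to the full layer at once; the group size trades decorrelation strength against cost and estimation noise. A hypothetical NumPy sketch of group-wise ZCA whitening (function and variable names are mine, not the repo's):

```python
import numpy as np

def group_zca(X, group_size, eps=1e-5):
    """ZCA-whiten disjoint feature groups of X (n_samples x n_features) independently."""
    n, d = X.shape
    out = np.empty((n, d))
    for s in range(0, d, group_size):
        Xg = X[:, s:s + group_size]
        Xc = Xg - Xg.mean(axis=0)                   # center the group
        cov = Xc.T @ Xc / n                         # small (group_size x group_size) covariance
        lam, V = np.linalg.eigh(cov)                # cheap per-group eigendecomposition
        out[:, s:s + group_size] = Xc @ V @ np.diag(1.0 / np.sqrt(lam + eps)) @ V.T
    return out

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8)) @ (np.eye(8) + 0.5 * np.ones((8, 8)))  # correlated toy data
Y = group_zca(X, group_size=4)  # each 4-feature group is whitened; cross-group correlation may remain
```

With group_size equal to the layer width this reduces to full whitening; with group_size 1 it reduces to per-feature standardization, as in plain batch normalization.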

Note that all the experiments up to this point can be run on CPU, so MAGMA is not needed for them.

3. Reproduce the results for VGG-A architecture on CIFAR-10:

  • Prepare the data: follow the CIFAR-10 instructions in this project. They generate a preprocessed dataset saved as a ~1400 MB file, cifar_provider.t7; put this file under ./dataset/.
  • Run:

Note that if your machine has fewer than 4 GPUs, the environment variable CUDA_VISIBLE_DEVICES should be changed accordingly.
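As a generic illustration of how GPU visibility is restricted (hypothetical device ids; for the Torch scripts in this repo you would set the variable in the shell when launching them):

```python
import os

# CUDA_VISIBLE_DEVICES must be set before any CUDA-using library initializes
# the driver; here we expose only the first two GPUs (assumed ids 0 and 1).
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"
print(os.environ["CUDA_VISIBLE_DEVICES"])
```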

4. Analyze the properties of DBN on the CIFAR-10 dataset:

  • Prepare the data: same as in VGG-A experiments.
  • Run:
bash exp_Conv_4Splain_1deep.lua
bash exp_Conv_4Splain_2large.lua

5. Reproduce the ResNet experiments on the CIFAR-10 dataset:

  • Prepare the data: download CIFAR-10 and CIFAR-100, and put the data files under ./dataset/.
  • Run:

6. Reproduce the ImageNet experiments:

  • Clone Facebook's ResNet repo here.
  • Download ImageNet and put it in: /tmp/dataset/ImageNet/ (you can also customize the path in opts.lua)
  • Install the DBN module to Torch as a Lua package: go to the directory ./models/imagenet/cuSpatialDBN/ and run luarocks make cudbn-1.0-0.rockspec.
  • Copy the model definitions in ./models/imagenet/ (resnet_BN.lua, resnet_DBN_scale_L1.lua and init.lua) into the ./models directory of the cloned fb.resnet.torch repo to reproduce the results reported in the paper. You can also compare the pre-activation version of residual networks introduced in the paper (using the model files preresnet_BN.lua and preresnet_DBN_scale_L1.lua).
  • Use the default configuration and our models to run experiments.


Email: Any discussions and suggestions are welcome!

