Code for NIPS 2015 paper on Max-margin Deep Generative Models (MMDGM)

Chongxuan Li, Jun Zhu, Tianlin Shi, and Bo Zhang. Max-margin Deep Generative Models. In Advances in Neural Information Processing Systems (NIPS 2015), Montreal.
Please cite this paper when using this code for your research.

For questions and bug reports, please send an e-mail to chongxuanli1991[at]


  1. Libraries used in our experiments:

    • Python (version 2.7)
    • Numpy
    • Scipy
    • Theano (version 0.7.0)
    • Cuda (version 7.0)
    • Cudnn (version 6.5; optional, see details below)
    • Pylearn2 (for pre-processing of SVHN)
  2. GPU: TITAN X Black

    • at least 9 GB of GPU memory is required for SVHN

Notes

  1. We found that Theano is numerically unstable with Cudnn. For MNIST, we do NOT use Cudnn, and the error rate should be exactly 0.45% given the same versions of the libraries and the same machine. For SVHN, we DO use Cudnn for faster training, which introduces additional randomness even with a fixed random seed. We therefore ran this experiment 5 times and report the lowest accuracy. Typically, the generative results do not change much across different versions of the libraries.

  2. In the MLP case, we use code from Kingma. In the CNN case, we use code from Goodfellow (Pylearn2) to perform local contrast normalization (LCN) on the SVHN data. We also use code from the tutorial on

  3. We did not upload our trained models to GitHub, so you may need to train models with the commands below and put them in the right place. For a full version of the code with trained models, you can access
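The LCN step mentioned above can be sketched as follows. This is an illustrative NumPy/SciPy re-implementation, not the Pylearn2 code the experiments actually use; the window size and the epsilon floor are assumed values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_normalize(img, size=7, eps=1e-4):
    """Subtract a local mean and divide by a local standard deviation.

    img: 2-D array (one image channel); size: side length of the
    averaging window (assumed); eps: floor on the divisor so that
    flat regions are not amplified into noise.
    """
    img = img.astype(np.float64)
    local_mean = uniform_filter(img, size=size)
    centered = img - local_mean
    local_var = uniform_filter(centered ** 2, size=size)
    local_std = np.sqrt(local_var)
    # Divide by the local std, but never by less than eps.
    divisor = np.maximum(local_std, eps)
    return centered / divisor
```

The effect is that each pixel is re-expressed relative to its neighborhood, which removes slowly varying brightness differences across SVHN digits.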

MLP results

# Our experiments are based on Kingma's code[]. 
Export the data path: 
export ML_DATA_PATH="[dir]/mmdgm/mlp_mmdgm/data"

# VA on MNIST with pre-training, lower bound:
# Train the model without pre-training by setting the [pretrain] flag to 0 in the .sh file.

# VA on MNIST, error rate:
python dir/full_latent.mat mnist

# MMVA on MNIST with pre-training:

# MSE results with missing values on MNIST: 
    - mse_va: python models/va_3000/ 3 12 100 _best
    - mse_mmva: python models/mmva_3000/ 3 12 100 _best
    - visualization: python models/mmva_3000/ 3 12 25
    To generate the data yourself:
        - rectangle: python 3 12 (size of the rectangle, an even number less than 28)
        - random drop: python 4 0.8 (drop ratio, a real number in the range (0, 1))
        - half: python 5 0 14 (an integer less than 28)
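The three missing-value patterns above (rectangle, random drop, half) can be sketched as boolean masks over a 28×28 MNIST image. This is an illustrative re-implementation with assumed semantics (e.g. a centered rectangle, the right half masked), not the repository's own scripts.

```python
import numpy as np

def rectangle_mask(size, side=28):
    """Mask a centered size-by-size square (size: even, less than side)."""
    mask = np.zeros((side, side), dtype=bool)
    start = (side - size) // 2
    mask[start:start + size, start:start + size] = True
    return mask

def random_drop_mask(ratio, side=28, rng=None):
    """Mask each pixel independently with probability `ratio`."""
    if rng is None:
        rng = np.random.default_rng(0)
    return rng.random((side, side)) < ratio

def half_mask(side=28):
    """Mask one half of the image (assumed: the right 14 of 28 columns)."""
    mask = np.zeros((side, side), dtype=bool)
    mask[:, side // 2:] = True
    return mask
```

Pixels where the mask is True would be treated as missing and filled in by the model's imputation procedure.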

CNN results on MNIST

# CVA on MNIST, lower bound:

# CVA on MNIST, error rate:

# CMMVA on MNIST with the default value of C:
# You can set C = 1, 1e-1, 1e-2, 1e-4 to reproduce Table 2 in the paper.

# MSE results with missing values on MNIST:
    - generate the data first: python 3 12
    - cva:
    - cmmva:
    To generate other types of data yourself:
        - rectangle: python 3 12 (size of the rectangle, an even number less than 28)
        - random drop: python 4 0.8 (drop ratio, a real number in the range (0, 1))
        - half: python 5 0 14 (an integer less than 28)

# Classification results with missing values on MNIST:
    - cnn:
    - cva:
    - cmmva:
    To train the cnn model:

CNN results on SVHN

# The data is too large to upload. First, download the online dataset in .mat format and run the preprocessing script to obtain the data.

# This pre-processing procedure should be done WITHOUT Cudnn to obtain a stable version of the data.
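Loading the downloaded SVHN .mat files can be sketched as below. The key names `X` and `y` and the `(32, 32, 3, N)` layout follow the official SVHN distribution (where digit 0 is coded as label 10); the target `(N, 3, 32, 32)` layout is an assumption about what a Theano-style pipeline expects.

```python
import numpy as np
from scipy.io import loadmat

def load_svhn_mat(path_or_file):
    """Load one SVHN .mat split and return (images, labels).

    The official files store images as X with shape (32, 32, 3, N)
    and labels as y with shape (N, 1), where digit 0 is coded as 10.
    We move the sample axis first and remap label 10 back to 0.
    """
    data = loadmat(path_or_file)
    images = np.transpose(data["X"], (3, 2, 0, 1))  # -> (N, 3, 32, 32)
    labels = data["y"].ravel() % 10                 # 10 -> 0
    return images, labels
```

After loading, the LCN pre-processing described above would be applied channel by channel before training.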

# CVA on SVHN, lower bound:

# CVA on SVHN, error rate:

# CMMVA on SVHN with default value of C:
# We pre-trained our recognition model separately, without dropout, for 10 epochs for fast convergence. To pre-train this model yourself:

# Missing value imputation: 
    - cmmva:
    To generate the data yourself:
        python 3 12

