

This is an implementation of the Correlational Neural Network (CorrNet) described in the following paper:

Sarath Chandar, Mitesh M Khapra, Hugo Larochelle, Balaraman Ravindran. Correlational Neural Networks. To appear in Neural Computation, 2015.

Please cite this paper if you use this code in any of your publications.
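For reference, the squared-error CorrNet objective from the paper can be sketched in plain NumPy as follows. This is an illustrative sketch, not the repository's Theano implementation: the weight names, sigmoid activations, and untied decoder weights are assumptions for readability.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sum_correlation(H1, H2, eps=1e-8):
    # Sum, over hidden units, of the Pearson correlation between the
    # two views' hidden representations.
    H1c, H2c = H1 - H1.mean(axis=0), H2 - H2.mean(axis=0)
    num = (H1c * H2c).sum(axis=0)
    den = np.sqrt((H1c ** 2).sum(axis=0) * (H2c ** 2).sum(axis=0)) + eps
    return (num / den).sum()

def corrnet_loss(X1, X2, W1, W2, b, V1, V2, c1, c2, lam=2.0):
    # Shared encoder: h(x1, x2) = sigmoid(x1 W1 + x2 W2 + b).
    def h(x1, x2):
        return sigmoid(x1 @ W1 + x2 @ W2 + b)

    # Reconstruct both views from a hidden representation and
    # accumulate the squared reconstruction error.
    def reconstruction_error(H):
        R1 = sigmoid(H @ V1 + c1)
        R2 = sigmoid(H @ V2 + c2)
        return ((R1 - X1) ** 2).sum() + ((R2 - X2) ** 2).sum()

    H12 = h(X1, X2)                # encoding from both views
    H1 = h(X1, np.zeros_like(X2))  # encoding from view 1 only
    H2 = h(np.zeros_like(X1), X2)  # encoding from view 2 only

    # Reconstruct both views from each encoding, and reward the two
    # single-view encodings for being correlated.
    return (reconstruction_error(H12)
            + reconstruction_error(H1)
            + reconstruction_error(H2)
            - lam * sum_correlation(H1, H2))
```

Minimizing this objective forces the two single-view encodings toward a common, correlated representation, which is what makes cross-view transfer possible later.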


To run the representation learning code, you need Python and Theano.

To run the MNIST example, you need scikit-learn.

Running MNIST example

Refer to Section 5 of the paper for details about the two-view setup for MNIST and the transfer learning experiment. First, download the dataset from this link and extract all the files to some directory, say MNIST_DIR. You also need a target directory where the models will be saved, say TGT_DIR.
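In the two-view setup of Section 5, the two views of an MNIST digit are its left and right halves. A minimal sketch of that split, assuming the digits are stored as flattened 28x28 arrays (the function name is illustrative, not the repository's):

```python
import numpy as np

def make_two_views(images):
    # images: (n, 784) array of flattened 28x28 MNIST digits.
    # view1 is the left 28x14 half, view2 the right 28x14 half,
    # each flattened back to (n, 392).
    imgs = images.reshape(-1, 28, 28)
    view1 = imgs[:, :, :14].reshape(-1, 28 * 14)
    view2 = imgs[:, :, 14:].reshape(-1, 28 * 14)
    return view1, view2
```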

In a terminal, go to the mnistExample folder.

To create the dataset for training, run the following command:

$ python MNIST_DIR/

Next, to train the CorrNet, run the following command:

$ python MNIST_DIR/ TGT_DIR/

To project the data to the learnt space, run the following command:

$ python MNIST_DIR/ TGT_DIR/

To evaluate the learnt model on the transfer learning task, run the following command:

$ python tl TGT_DIR/

With batch_size=100, training_epochs=50, l_rate=0.01, optimization="rmsprop", tied=True, n_hidden=50, lambda=2, hidden_activation=sigmoid, output_activation=sigmoid, loss_fn="squarrederror", you should get 77.05% accuracy for view1 to view2 and 78.81% accuracy for view2 to view1.
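The transfer learning protocol in the paper trains a classifier on one view's projections and tests it on the other view's projections in the shared space. A hedged sketch using scikit-learn (the classifier choice and function name are assumptions; the repository's evaluation script may differ):

```python
import numpy as np
from sklearn.svm import LinearSVC

def transfer_accuracy(Z_src, y_src, Z_tgt, y_tgt):
    # Train a linear classifier on projections of the source view,
    # then score it on projections of the target view. High accuracy
    # means the learnt space transfers across views.
    clf = LinearSVC()
    clf.fit(Z_src, y_src)
    return clf.score(Z_tgt, y_tgt)
```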

To compute the sum correlation in the projected space, run the following command:

$ python corr TGT_DIR/

With the same configuration as above, you should get a test correlation of 42.57.

Bilingual Word Representation Learning

Scripts to run the bilingual word representation learning algorithm will be made available soon.