
Information Maximizing Self Augmented Training (IMSAT)

This is code for reproducing IMSAT [1]. IMSAT is a method for discrete representation learning using deep neural networks. It can be applied to clustering and hash learning to achieve state-of-the-art results. This work was performed while Weihua Hu was interning at Preferred Networks.
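IMSAT's clustering objective includes maximizing the mutual information between inputs and their predicted discrete representations, I(X; Y) = H(Y) - H(Y|X). The following is a rough NumPy sketch of that term for a batch of predicted cluster probabilities, not the repository's exact implementation (which uses Chainer):

```python
import numpy as np

def mutual_information(p):
    """Estimate I(X; Y) = H(Y) - H(Y|X) from a batch of predicted
    cluster probabilities p with shape [batch_size, num_clusters]."""
    eps = 1e-12                                     # avoid log(0)
    p_mean = p.mean(axis=0)                         # marginal p(y)
    h_y = -np.sum(p_mean * np.log(p_mean + eps))    # marginal entropy H(Y)
    h_y_x = -np.mean(np.sum(p * np.log(p + eps), axis=1))  # conditional H(Y|X)
    return h_y - h_y_x
```

Confident predictions spread evenly across clusters maximize this quantity: H(Y) is maximized by a balanced cluster assignment, while H(Y|X) is minimized by near-one-hot predictions.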


Requirements

You must have the following already installed on your system:

  • Python 2.7
  • Chainer 1.21.0, sklearn, munkres
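A possible environment setup for the versions listed above; the PyPI package names are assumptions, and on modern systems you will need a Python 2.7 environment:

```shell
# Untested sketch: install the pinned Chainer version plus the helper libraries.
pip install chainer==1.21.0 scikit-learn munkres
```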

Quick start

To reproduce the experiments on the MNIST dataset in [1], run the following commands.

  • Clustering with MNIST: python
  • Hash learning with MNIST: python

A separate script can be used to calculate the perturbation range for Virtual Adversarial Training [2]. For the MNIST dataset, we have already calculated the range.

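In the paper, the per-example perturbation radius for Virtual Adversarial Training is tied to the local data density: it is set proportional to the distance from each point to one of its nearest neighbors. A minimal sketch of that computation (the neighbor index `t` and scale `alpha` here are illustrative, not the repository's chosen values):

```python
import numpy as np

def perturbation_range(X, t=10, alpha=0.4):
    """For each row of X, return alpha times the Euclidean distance to its
    t-th nearest neighbor, usable as a per-example VAT perturbation radius.
    t and alpha are illustrative hyperparameters."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T   # pairwise squared distances
    np.maximum(d2, 0.0, out=d2)                      # guard against tiny negatives
    d = np.sqrt(d2)
    d.sort(axis=1)                                   # column 0 is each point's self-distance (0)
    return alpha * d[:, t]                           # distance to the t-th nearest neighbor
```

For large datasets such as MNIST, an exact pairwise-distance matrix is expensive; a k-nearest-neighbor index (e.g. `sklearn.neighbors.NearestNeighbors`) would compute the same radii more efficiently.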

References

[1] Weihua Hu, Takeru Miyato, Seiya Tokui, Eiichi Matsumoto and Masashi Sugiyama. Learning Discrete Representations via Information Maximizing Self-Augmented Training. In ICML, 2017.

[2] Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, and Shin Ishii. Distributional smoothing with virtual adversarial training. In ICLR, 2016.
