Simple tricks to improve visual domain adaptation for MNIST -> SVHN. For more advanced tricks, check out DIRT-T
A typical convnet trained on MNIST and tested on SVHN usually yields ~20% accuracy. However, it's fairly easy to reach ~40% accuracy on MNIST -> SVHN by applying the following tricks:
- Apply instance normalization to the input
- Use batch normalization
- Use an exponential moving average of the parameter trajectory chosen by your optimizer
- Add Gaussian noise after dropout
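The first, second, and fourth tricks all live in the network definition. Here is a minimal PyTorch sketch of how they might fit together; the layer sizes, noise sigma, and dropout rate are illustrative assumptions, not the repo's actual architecture:

```python
import torch
import torch.nn as nn


class GaussianNoise(nn.Module):
    """Adds zero-mean Gaussian noise during training only (sigma is an assumed value)."""

    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x


model = nn.Sequential(
    nn.InstanceNorm2d(3),             # instance-normalize the input images
    nn.Conv2d(3, 32, 3, padding=1),
    nn.BatchNorm2d(32),               # batch normalization after the conv
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Dropout(0.5),
    GaussianNoise(0.1),               # Gaussian noise right after dropout
    nn.Linear(32 * 16 * 16, 10),
)

x = torch.randn(4, 3, 32, 32)  # a batch of SVHN-sized RGB images
logits = model(x)              # shape (4, 10)
```

Normalizing the input per-instance helps because MNIST and SVHN have very different intensity statistics; instance normalization removes much of that low-level shift before the network sees the image.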
I didn't run an extensive ablation study, so it's hard to say which of these contributed most to the performance increase.
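The parameter-averaging trick can be sketched in a few lines: after each optimizer step, blend a shadow copy of the weights toward the current ones, and evaluate with the shadow copy at test time. This is a toy illustration with plain Python dicts; the decay value is an assumption, not the repo's setting:

```python
def ema_update(shadow, params, decay=0.998):
    """In-place exponential moving average of `shadow` toward `params`.

    Both arguments are plain dicts mapping parameter names to floats;
    in a real training loop these would be tensors.
    """
    for name, value in params.items():
        shadow[name] = decay * shadow[name] + (1.0 - decay) * value
    return shadow


# Toy usage: the shadow copy smoothly tracks a parameter over four steps.
shadow = {"w": 0.0}
for step_value in [1.0, 1.0, 1.0, 1.0]:
    ema_update(shadow, {"w": step_value}, decay=0.5)
# With decay 0.5, shadow["w"] = 1 - 0.5**4 = 0.9375 after four updates.
```

The averaged weights change more slowly than the raw optimizer trajectory, which tends to land evaluation on a flatter, better-generalizing point.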
Download the data first:

```
python download_svhn.py
python download_mnist.py
```
Run the classifier:

```
python run_classifier.py
```