RuiShu/vda-hax

vda-hax

Simple tricks to improve visual domain adaptation for MNIST -> SVHN. For more advanced tricks, check out DIRT-T

A typical convnet trained on MNIST and tested on SVHN usually yields ~20% accuracy. However, it's pretty easy to get to ~40% accuracy on MNIST -> SVHN by applying the following tricks:

  1. Apply instance normalization to the input
  2. Use batch normalization
  3. Use an exponential moving average of the parameter trajectory chosen by your optimizer
  4. Add Gaussian noise after dropout
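The tricks above can be sketched in a few lines of NumPy. This is a minimal illustration, not the repo's implementation: the EMA decay, dropout rate, and noise scale below are illustrative assumptions (the README does not state the actual hyperparameters), and batch normalization (trick 2) is a standard layer omitted here.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # Trick 1: normalize each input image by its own mean and std,
    # which removes per-image brightness/contrast shifts between domains.
    # x has shape (batch, H, W, C); statistics are computed per image.
    mean = x.mean(axis=(1, 2, 3), keepdims=True)
    std = x.std(axis=(1, 2, 3), keepdims=True)
    return (x - mean) / (std + eps)

def dropout_then_gaussian(x, rate=0.5, sigma=1.0, rng=None):
    # Trick 4: standard inverted dropout followed by additive
    # Gaussian noise (rate and sigma are illustrative values).
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= rate
    x = x * mask / (1.0 - rate)
    return x + sigma * rng.normal(size=x.shape)

class EMA:
    # Trick 3: keep an exponential moving average of the parameter
    # trajectory; evaluate using `shadow` instead of the raw parameters.
    # decay=0.998 is an assumed value for illustration.
    def __init__(self, params, decay=0.998):
        self.decay = decay
        self.shadow = {k: v.copy() for k, v in params.items()}

    def update(self, params):
        for k, v in params.items():
            self.shadow[k] = self.decay * self.shadow[k] + (1 - self.decay) * v
```

In a training loop, `EMA.update` would be called once per optimizer step, and the `shadow` parameters would be swapped in at test time.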

I didn't do an extensive ablation study, so it's hard to say which of these tricks contributed the most to the performance increase.

Run code

Download the data first:

python download_svhn.py
python download_mnist.py

Then train and evaluate the classifier:

python run_classifier.py
