Code repository for the small image experiments in our paper 'Self-ensembling for Domain Adaptation'

Self-ensembling for visual domain adaptation (small images)

Implementation of the paper Self-ensembling for visual domain adaptation, accepted as a poster at ICLR 2018.

For small image datasets including MNIST, USPS, SVHN, CIFAR-10, STL, GTSRB, etc.

For the VisDA experiments, go to:


You will need:

  • Python 3.6 (Anaconda Python recommended)
  • OpenCV with Python bindings
  • PyTorch

First, install OpenCV and PyTorch manually, as pip may have trouble with these packages.

OpenCV with Python bindings

On Linux, install using conda:

> conda install opencv

On Windows, download the OpenCV wheel file and install it with:

> pip install <path_of_opencv_file>


PyTorch

On Linux:

> conda install pytorch torchvision -c pytorch

On Windows:

> conda install -c peterjc123 pytorch cuda90

The rest

Use pip like so:

> pip install -r requirements.txt
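Once everything is installed, you can sanity-check that the key packages are importable before running any experiments. A minimal sketch using only the standard library (the package names checked here are the ones required above):

```python
import importlib.util

def installed(pkg):
    """Return True if the named package can be imported in this environment."""
    return importlib.util.find_spec(pkg) is not None

# Check the packages the experiments depend on.
for pkg in ("cv2", "torch", "torchvision"):
    print(f"{pkg}: {'OK' if installed(pkg) else 'MISSING'}")
```

If any line reports MISSING, revisit the corresponding install step above.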


Domain adaptation experiments are run via the Python program.

The experiments in our paper can be re-created by running the shell script like so:

bash <GPU> <RUN>

Where <GPU> is an integer identifying the GPU to use and <RUN> enumerates the experiment number so that you can keep logs of multiple repeated runs separate, e.g.:

bash 0 01

This will run on GPU 0 and generate log files with names suffixed with run01.

To re-create the supervised baseline experiments:

bash <GPU> <RUN>

Please see the contents of the shell scripts to see the command line options used to control the experiments.

Syn-Digits, GTSRB and Syn-Signs datasets

You will need to download the Syn-Digits, GTSRB and Syn-signs datasets. After this you will need to create the file domain_datasets.cfg to tell the software where to find them.

The following assumes that you have a directory called data in which you will store these three datasets.


Syn-Digits

Download Syn-Digits; the download page provides a Google Drive link to the dataset archive. Create a directory called syndigits within data and unzip the archive within it.


GTSRB

Download GTSRB; you will need the training 'Images and annotations', the test 'Images and annotations' and the test 'extended annotations including class IDs'.

Unzip the three files within the data directory. You should end up with the following directory structure:

GTSRB/Final_Training/Images/   -- training set images
GTSRB/Final_Training/Images/00000/   -- one directory for each class, contains image files
GTSRB/Final_Test/Images/   -- test set images
GTSRB/GT-final_test.csv   -- test set ground truths
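A quick way to confirm the unzipping went as expected is to check for the directories and files listed above. A minimal sketch using only the standard library (the data root path is an assumption; adjust it to your setup):

```python
import os

# Expected GTSRB layout relative to the data directory, as described above.
EXPECTED = [
    "GTSRB/Final_Training/Images",
    "GTSRB/Final_Test/Images",
    "GTSRB/GT-final_test.csv",
]

def missing_gtsrb_paths(data_root):
    """Return the expected GTSRB paths that are absent under data_root."""
    return [p for p in EXPECTED
            if not os.path.exists(os.path.join(data_root, p))]

if __name__ == "__main__":
    missing = missing_gtsrb_paths("data")  # "data" directory is an assumption
    if missing:
        print("Missing:", ", ".join(missing))
    else:
        print("GTSRB layout looks OK")
```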

Prepare GTSRB

Convert GTSRB to the required format using:

> python


Syn-signs

You will need to acquire this dataset from someone else who has it, since there is no public download link. Create a directory called synsigns within data and unzip the archive within data/synsigns to get the following:

synthetic_data/train/   -- contains the images as PNGs
synthetic_data/train_labelling.txt   -- ground truths
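As a sanity check after unzipping, you can compare the number of PNG images with the number of ground-truth lines. A sketch assuming one line in train_labelling.txt per image (the paths follow the layout above):

```python
import os

def synsigns_counts(root):
    """Count PNG images and non-empty ground-truth lines under a synsigns dir.

    Assumes the layout above: synthetic_data/train/ holding PNG files and
    synthetic_data/train_labelling.txt with one line per image.
    """
    train_dir = os.path.join(root, "synthetic_data", "train")
    labels_path = os.path.join(root, "synthetic_data", "train_labelling.txt")
    n_images = sum(1 for f in os.listdir(train_dir)
                   if f.lower().endswith(".png"))
    with open(labels_path) as fp:
        n_labels = sum(1 for line in fp if line.strip())
    return n_images, n_labels
```

If the two counts disagree, the archive was probably not unzipped into data/synsigns as described.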

Prepare Syn-signs

Convert Syn-signs to the required format using:

> python

Create domain_datasets.cfg

Create the configuration file domain_datasets.cfg within the same directory as the experiment scripts. Put the following into it (change the paths if they are different):