Autoencoder Weight Transfer Network (AE-WTN)

This is the code to replicate the AE-WTN experiments from the paper Scaling Object Detection by Transferring Classification Weights, accepted as an oral presentation at ICCV 2019.

Please consider citing this paper in your publications if it helps your research.

@inproceedings{kuen2019scaling,
   title = {Scaling Object Detection by Transferring Classification Weights},
   author = {Kuen, Jason and Perazzi, Federico and Lin, Zhe and Zhang, Jianming and Tan, Yap-Peng},
   booktitle = {ICCV},
   year = {2019}
}

Dataset Preparation

cd AE-WTN/datasets/openimages

# download the Open Images training annotations 

## create symlinks (in datasets/openimages) to image directories of training and evaluation datasets

# Open Images (challenge/V4/V5) training images directory (about 1.58M images with all download parts combined)
ln -s /path_to_openimages_images/train train

# Open Images (V4/V5) validation images (41,620 images)
ln -s /path_to_openimages_images/validation val_600

# Visual Genome (Version 1.2) images (108,079 images with parts 1 and 2 combined)
ln -s /path_to_visualgenome_images VG_100K

cd ../..
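As a quick sanity check before moving on, the three links can be verified to resolve to real directories. The `check_link` helper below is my addition for illustration, not part of the repository:

```shell
# Hypothetical sanity check: confirm each symlink created above exists
# and resolves to a readable directory.
check_link() {
  # $1: symlink name expected in the current directory
  if [ -L "$1" ] && [ -d "$1" ]; then
    echo "ok: $1 -> $(readlink "$1")"
  else
    echo "missing or broken: $1" >&2
    return 1
  fi
}

# Report on all three dataset links without aborting, so every problem is listed.
for link in train val_600 VG_100K; do
  check_link "$link" || echo "fix the $link symlink before training" >&2
done
```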

Training & Evaluation

By default, 4 GPUs are used for training and evaluation.
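The GPU count matters for hyperparameters: synchronous multi-GPU training typically scales the learning rate linearly with the global batch size. A sketch of that arithmetic, where the batch-size and learning-rate numbers are illustrative assumptions, not values taken from the paper:

```shell
# Linear learning-rate scaling (illustrative): the global batch is
# num_gpus * per_gpu_batch, and the LR grows proportionally with it.
scaled_lr() {
  # args: base_lr base_batch num_gpus per_gpu_batch
  awk -v base_lr="$1" -v base_batch="$2" -v gpus="$3" -v per_gpu="$4" \
    'BEGIN { gb = gpus * per_gpu; printf "%d %.3f\n", gb, base_lr * gb / base_batch }'
}

# A 4-GPU run with 4 images per card matches a schedule tuned for batch 16.
scaled_lr 0.02 16 4 4   # -> 16 0.020
```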

Training checkpoints are stored in the experiment directory. Evaluation results are stored in its inference subdirectory.

cd experiment

# training

# evaluate on the 3 evaluation datasets

Pretrained model: download link (place it in the same directory before running evaluation)
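Since evaluation expects the pretrained weights to sit alongside the experiment scripts, a small guard can catch a missing download early. Both the `require_file` helper and the `model_final.pth` filename below are assumptions for illustration, not names from the repository:

```shell
# Abort early with a clear message if the pretrained checkpoint is absent.
# "model_final.pth" is a placeholder name; use whatever the download provides.
require_file() {
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "error: $1 not found in $(pwd); download it first" >&2
    return 1
  fi
}

require_file model_final.pth || true
```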


License

AE-WTN is released under the MIT license. See LICENSE for additional details.