Object-Aware Few-Shot Unsupervised Image-to-Image Translation for Cross-Domain Object Detection in Adverse Environments with Discriminator Augmentation
This repository contains our PyTorch implementation of Object-Aware Few-Shot Unsupervised Image-to-Image Translation (OAFSUI2IT). In this paper, we address the few-shot cross-domain (FSCD) object detection task with limited unlabeled images in the target domain. Built upon the architecture of CUT, our method 1) introduces an adaptive discriminator augmentation module to address the imbalance between source and target domains; 2) proposes a pyramid patchwise contrastive learning strategy to improve image quality; and 3) develops a self-supervised content-consistency loss to enforce content matching. Trained on images translated by our OAFSUI2IT, object detection methods (e.g., Faster R-CNN) achieve higher mAP than those trained on source-only data, as well as those trained on CUT-translated images.
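As a rough illustration of the patchwise contrastive idea inherited from CUT, the sketch below shows a PatchNCE-style loss where each translated-image patch feature is pulled toward its corresponding source patch (positive) and pushed away from all other patches (negatives). The pyramid variant averaging over several encoder layers is our own hypothetical simplification, not the exact implementation in this repository; function names and the temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """PatchNCE-style contrastive loss (sketch, following CUT).

    feat_q: (num_patches, dim) features from the translated image.
    feat_k: (num_patches, dim) features from the source image at the
            same spatial locations; row i of feat_k is the positive
            for row i of feat_q, all other rows act as negatives.
    """
    feat_q = F.normalize(feat_q, dim=1)
    feat_k = F.normalize(feat_k, dim=1)
    # cosine-similarity logits; diagonal entries are the positives
    logits = feat_q @ feat_k.t() / tau
    targets = torch.arange(feat_q.size(0))
    return F.cross_entropy(logits, targets)

def pyramid_patch_nce(feats_q, feats_k, tau=0.07):
    """Hypothetical multi-scale variant: average the patchwise loss over
    feature maps sampled from several encoder layers (the 'pyramid')."""
    losses = [patch_nce_loss(q, k, tau) for q, k in zip(feats_q, feats_k)]
    return torch.stack(losses).mean()
```

In practice the patch features would come from intermediate layers of the generator's encoder with a small MLP head, as in CUT; the sketch only shows the loss computation itself.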
To validate our translation method, we use t-SNE to visualize the distributions of images from the source domain, the target domain, and the generated images.
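A minimal sketch of such a visualization, assuming image features have already been extracted and flattened into vectors (the random arrays below are toy stand-ins, and the feature dimension and sample counts are arbitrary):

```python
import numpy as np
from sklearn.manifold import TSNE

# toy stand-ins for extracted image features from the three sets
rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, (50, 64))
target = rng.normal(3.0, 1.0, (50, 64))
generated = rng.normal(2.5, 1.0, (50, 64))

feats = np.concatenate([source, target, generated])
labels = ["source"] * 50 + ["target"] * 50 + ["generated"] * 50

# project all features to 2-D in a single t-SNE run so the three sets
# share one embedding space; perplexity must stay below the sample count
emb = TSNE(n_components=2, perplexity=30, init="pca",
           random_state=0).fit_transform(feats)
# emb[:, 0] and emb[:, 1] can then be scatter-plotted per label
```

Embedding all three sets jointly (rather than running t-SNE per set) is what makes the resulting clusters comparable, showing whether generated images fall closer to the target distribution.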
Our code is developed based on contrastive-unpaired-translation. Part of the adaptive discriminator augmentation borrows from stylegan2-ada-pytorch. We also thank pixplot for the t-SNE visualization.