SAVI2I: Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors (Published in IJCV2022)
[Paper] [Project Website]
PyTorch implementation of SAVI2I. We propose a simple yet effective signed attribute vector (SAV) that facilitates continuous translation along diverse mapping paths across multiple domains, supporting both latent- and reference-guided synthesis.
For more video results, please see Our Webpage.
Contact: Qi Mao (qimao@cuc.edu.cn)
Continuous and Diverse Image-to-Image Translation via Signed Attribute Vectors
Qi Mao, Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, Siwei Ma, and Ming-Hsuan Yang
In IJCV 2022
If you find this work useful for your research, please cite our paper:
@article{mao2022continuous,
  title={Continuous and diverse image-to-image translation via signed attribute vectors},
  author={Mao, Qi and Tseng, Hung-Yu and Lee, Hsin-Ying and Huang, Jia-Bin and Ma, Siwei and Yang, Ming-Hsuan},
  journal={International Journal of Computer Vision},
  volume={130},
  number={2},
  pages={517--549},
  year={2022},
  publisher={Springer}
}
- Linux or Windows
- Python 3+
- We suggest using two P100 16GB GPUs or one V100 32GB GPU.
- Clone this repo:
git clone https://github.com/HelenMao/SAVI2I.git
cd SAVI2I
- This code requires PyTorch 0.4.0+ and Python 3+. Please install the dependencies with:
conda create -n SAVI2I python=3.6
source activate SAVI2I
pip install -r requirements.txt
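To quickly verify the installation, a minimal sanity check (not part of the original instructions; it only assumes PyTorch was installed via requirements.txt) is:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"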
Download the datasets for each task into the dataset folder below; a sketch of the assumed folder layout follows the dataset list.
./datasets
- Style translation: Yosemite (summer <-> winter) and Photo2Artwork (Photo, Monet, Van Gogh and Ukiyo-e)
- You can follow the instructions for the CycleGAN datasets to download the Yosemite and Photo2artwork datasets.
- Shape-variation translation: CelebA-HQ (Male <-> Female) and AFHQ (Cat, Dog and WildLife)
- We split CelebA-HQ into male and female domains according to the annotated labels and manually fine-tune the images.
- You can follow the instructions for the StarGAN-v2 datasets to download the CelebA-HQ and AFHQ datasets.
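A sketch of the assumed folder layout, following the CycleGAN-style per-domain splits referenced above (exact subfolder names may differ for the multi-domain tasks):
./datasets
└── Yosemite
    ├── trainA   # summer
    ├── trainB   # winter
    ├── testA
    └── testB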
For low-level style translation tasks, we suggest setting
--type=1
to use the corresponding network architectures.
For shape-variation translation tasks, we suggest setting
--type=0
to use the corresponding network architectures.
- Yosemite
python train.py --dataroot ./datasets/Yosemite/ --phase train --type 1 --name Yosemite --n_ep 700 --n_ep_decay 500 --lambda_r1 10 --lambda_mmd 1 --num_domains 2
- Photo2artwork
python train.py --dataroot ./datasets/Photo2artwork/ --phase train --type 1 --name Photo2artwork --n_ep 100 --n_ep_decay 0 --lambda_r1 10 --lambda_mmd 1 --num_domains 4
- CelebAHQ
python train.py --dataroot ./datasets/CelebAHQ/ --phase train --type 0 --name CelebAHQ --n_ep 30 --n_ep_decay 0 --lambda_r1 1 --lambda_mmd 1 --num_domains 2
- AFHQ
python train.py --dataroot ./datasets/AFHQ/ --phase train --type 0 --name AFHQ --n_ep 100 --n_ep_decay 0 --lambda_r1 1 --lambda_mmd 10 --num_domains 3
Download the pre-trained models and save them into
./models
or download them with the following script.
bash ./download_models.sh
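The test commands below expect each checkpoint under ./models/<experiment name>/, for example:
./models/CelebAHQ/00029.pth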
Reference-guided
python test_reference_save.py --dataroot ./datasets/CelebAHQ --resume ./models/CelebAHQ/00029.pth --phase test --type 0 --num_domains 2 --index_s A --index_t B --num 5 --name CelebAHQ_ref
Latent-guided
python test_latent_rdm_save.py --dataroot ./datasets/CelebAHQ --resume ./models/CelebAHQ/00029.pth --phase test --type 0 --num_domains 2 --index_s A --index_t B --num 5 --name CelebAHQ_rdm
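As a usage sketch (not from the original instructions), the two CelebA-HQ domains indexed A and B above can be looped over to generate reference-guided results in both directions; it only reuses flags already shown in the commands above:
for src in A B; do
  for tgt in A B; do
    [ "$src" = "$tgt" ] && continue   # skip identity pairs
    python test_reference_save.py --dataroot ./datasets/CelebAHQ --resume ./models/CelebAHQ/00029.pth \
      --phase test --type 0 --num_domains 2 --index_s $src --index_t $tgt --num 5 --name CelebAHQ_ref_${src}2${tgt}
  done
done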
All rights reserved.
Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International).
The code is for academic research use only. For commercial use, please contact qimao@pku.edu.cn.
Code and network architectures are inspired by: