Implementation of our paper Breaking the Cycle - Colleagues are all you need (CVPR 2020)
Ori Nizan, Ayellet Tal, Breaking the Cycle - Colleagues are all you need [Project]
Send an image to this Telegram bot and it will send back its male-to-female translation produced by our implementation
conda env create -f conda_requirements.yml
bash ./scripts/download.sh U_GAT_IT_selfie2anime
bash ./scripts/download.sh celeba_glasses_removal
bash ./scripts/download.sh celeba_male2female
├──datasets
└──DATASET_NAME
├──testA
├──im1.png
├──im2.png
└── ...
├──testB
├──im3.png
├──im4.png
└── ...
├──trainA
├──im5.png
├──im6.png
└── ...
└──trainB
├──im7.png
├──im8.png
└── ...
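The folder layout above can also be created programmatically. The sketch below is only a convenience helper we wrote for illustration; `make_dataset_dirs` is not part of the repository, and you still need to place your own images in the four split folders.

```python
from pathlib import Path

def make_dataset_dirs(root, dataset_name):
    """Create the testA/testB/trainA/trainB layout expected by the training scripts.

    Note: this is an illustrative helper, not part of the official code.
    """
    base = Path(root) / dataset_name
    for split in ("testA", "testB", "trainA", "trainB"):
        (base / split).mkdir(parents=True, exist_ok=True)
    return base

base = make_dataset_dirs("./datasets", "DATASET_NAME")
print(sorted(p.name for p in base.iterdir()))
# → ['testA', 'testB', 'trainA', 'trainB']
```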
Then change the data_root attribute in the yaml config file to ./datasets/DATASET_NAME.
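For example, the relevant line of the yaml config would look something like this (DATASET_NAME is a placeholder for your dataset's folder name; the other keys in the config stay unchanged):

```yaml
# path to the dataset root containing testA/testB/trainA/trainB
data_root: ./datasets/DATASET_NAME
```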
python train.py --config configs/anime2face_council_folder.yaml --output_path ./outputs/council_anime2face_256_256 --resume
python train.py --config configs/galsses_council_folder.yaml --output_path ./outputs/council_glasses_128_128 --resume
python train.py --config configs/male2female_council_folder.yaml --output_path ./outputs/male2famle_256_256 --resume
To convert all the images in input_folder using all the members of the council:
python test_on_folder.py --config configs/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint ./outputs/council_anime2face_256_256/anime2face_council_folder/checkpoints/01000000 --input_folder ./datasets/selfie2anime/testB --a2b 0
Or using a specified member:
python test_on_folder.py --config configs/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint ./outputs/council_anime2face_256_256/anime2face_council_folder/checkpoints/b2a_gen_3_01000000.pt --input_folder ./datasets/selfie2anime/testB --a2b 0
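Judging from the commands above, per-member checkpoint filenames appear to encode the translation direction, the generator's index in the council, and the training iteration (e.g. b2a_gen_3_01000000.pt). The parser below is our own reading of that naming scheme, not part of the official code, but it can help when scripting over a checkpoints folder:

```python
import re

def parse_checkpoint_name(name):
    """Parse a per-member checkpoint filename such as 'b2a_gen_3_01000000.pt'.

    Assumption: the pattern is <direction>_gen_<member>_<iteration>.pt,
    inferred from the example commands, not documented by the repo.
    Returns None if the name does not match.
    """
    m = re.match(r"(a2b|b2a)_gen_(\d+)_(\d+)\.pt$", name)
    if m is None:
        return None
    direction, member, iteration = m.groups()
    return {"direction": direction, "member": int(member), "iteration": int(iteration)}

print(parse_checkpoint_name("b2a_gen_3_01000000.pt"))
# → {'direction': 'b2a', 'member': 3, 'iteration': 1000000}
```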
bash ./scripts/download.sh pretrain_male_to_female
python test_on_folder.py --config pretrain/m2f/256/male2female_council_folder.yaml --output_folder ./outputs/male2famle_256_256 --checkpoint pretrain/m2f/256/01000000 --input_folder ./datasets/celeba_male2female/testA --a2b 1
bash ./scripts/download.sh pretrain_glasses_removal
python test_on_folder.py --config pretrain/glasses_removal/128/galsses_council_folder.yaml --output_folder ./outputs/council_glasses_128_128 --checkpoint pretrain/glasses_removal/128/01000000 --input_folder ./datasets/glasses/testA --a2b 1
bash ./scripts/download.sh pretrain_selfie_to_anime
python test_on_folder.py --config pretrain/anime/256/anime2face_council_folder.yaml --output_folder ./outputs/council_anime2face_256_256 --checkpoint pretrain/anime/256/01000000 --input_folder ./datasets/selfie2anime/testB --a2b 0
python test_gui.py --config pretrain/m2f/128/male2female_council_folder.yaml --checkpoint pretrain/m2f/128/a2b_gen_0_01000000.pt --a2b 1
python test_gui.py --config pretrain/glasses_removal/128/galsses_council_folder.yaml --checkpoint pretrain/glasses_removal/128/a2b_gen_3_01000000.pt --a2b 1
python test_gui.py --config pretrain/anime/256/anime2face_council_folder.yaml --checkpoint pretrain/anime/256/b2a_gen_3_01000000.pt --a2b 0
@inproceedings{nizan2020council,
title={Breaking the Cycle - Colleagues are all you need},
author={Ori Nizan and Ayellet Tal},
booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
Our code is based on the MUNIT implementation. Please also cite the original MUNIT paper if you use those parts of the code.