This is a TensorFlow 2 implementation of Dual Contrastive Learning for Unsupervised Image-to-Image Translation (DCLGAN).
DCLGAN is a simple yet powerful model for unsupervised image-to-image translation. Compared to CycleGAN, DCLGAN handles geometry changes with more realistic results; compared to CUT, it is usually more robust and achieves better performance. A variant, SimDCL (Similarity DCLGAN), also avoids mode collapse using a new similarity loss.
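At the core of DCLGAN (as in CUT) is a patch-wise contrastive InfoNCE loss: embeddings of corresponding patches from the input and the translated image form positive pairs, all other patches serve as negatives, and the loss is applied in both translation directions. A minimal TensorFlow sketch of such a loss — the function name, shapes, and temperature value are illustrative, not this repo's exact API:

```python
import tensorflow as tf

def patch_nce_loss(feat_q, feat_k, tau=0.07):
    """Patch-wise InfoNCE loss (illustrative sketch).

    feat_q, feat_k: [N, C] L2-normalised patch embeddings from the two
    sides of the translation; row i of feat_q and row i of feat_k are a
    positive pair, every other row acts as a negative.
    """
    n = tf.shape(feat_q)[0]
    # Cosine-similarity logits between every query patch and every key patch.
    logits = tf.matmul(feat_q, feat_k, transpose_b=True) / tau  # [N, N]
    # Positives sit on the diagonal, so the "class" of row i is i.
    labels = tf.range(n)
    return tf.reduce_mean(
        tf.keras.losses.sparse_categorical_crossentropy(
            labels, logits, from_logits=True))
```

With well-aligned features the loss is near zero; misaligned positives drive it up, which is what pushes corresponding patches together during training.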
Use train.py to train a DCLGAN/SimDCL model on a given dataset.
Training takes 502 ms (fp32) / 403 ms (fp16) per step on an RTX 3070.
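The fp16 timing presumably relies on the Keras mixed-precision API; assuming that setup, enabling it looks roughly like this (a sketch, not necessarily how train.py wires it up):

```python
import tensorflow as tf

# Compute in float16 while keeping variables in float32.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

# Wrap the optimizer so the loss is scaled up before backprop,
# preventing small fp16 gradients from underflowing to zero.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.Adam(learning_rate=2e-4))
```

On GPUs with Tensor Cores (such as the RTX 3070) this is where the fp32-to-fp16 speedup comes from.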
Example usage for training on horse2zebra-dataset:
python train.py --mode dclgan \
--save_n_epoch 10 \
--train_a_dir ./datasets/horse2zebra/trainA \
--train_b_dir ./datasets/horse2zebra/trainB \
--test_a_dir ./datasets/horse2zebra/testA \
--test_b_dir ./datasets/horse2zebra/testB
Use inference.py to translate images from the source domain to the target domain.
Example usage:
python inference.py --mode dclgan \
--weights ./output/checkpoints \
--inputA ./datasets/horse2zebra/testA \
--inputB ./datasets/horse2zebra/testB
You will need the following to run the above:
- Python 3, TensorFlow 2.6.0, TensorFlow Addons 0.15.0
- NumPy 1.19.5, Matplotlib 3.4.3

The code is developed based on the official-pytorch-implementation and CUT. The training datasets are from taesung_park/CycleGAN/datasets.