TwinGAN -- Unsupervised Image Translation for Human Portraits
Use Pretrained Model.
Run the following command to translate the demo inputs.
python inference/image_translation_infer.py \
  --model_path="/PATH/TO/MODEL/256/" \
  --image_hw=256 \
  --input_tensor_name="sources_ph" \
  --output_tensor_name="custom_generated_t_style_source:0" \
  --input_image_path="./demo/inference_input/" \
  --output_image_path="./demo/inference_output/"
input_image_path can be either a single image file or a directory containing images.
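To illustrate the two accepted forms of input_image_path, here is a hypothetical helper that expands either a single file or a directory into a list of image paths. The function name and extension list are assumptions for illustration, not the script's actual logic:

```python
import os

# Assumed set of extensions; the actual script may accept others.
IMAGE_EXTENSIONS = ('.jpg', '.jpeg', '.png')

def collect_input_images(input_image_path):
    """Return image file paths for a single-file or directory input."""
    if os.path.isfile(input_image_path):
        return [input_image_path]
    # Directory input: keep only files with a recognized image extension.
    return sorted(
        os.path.join(input_image_path, name)
        for name in os.listdir(input_image_path)
        if name.lower().endswith(IMAGE_EXTENSIONS)
    )
```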
Blog and Technical report.
Please refer to the technical report for details on the network structure and losses.
Our idea of using adaptive normalization parameters for image translation is not unique. To the best of our knowledge, at least two other works share similar ideas: MUNIT and EG-UNIT. Our model was developed around the same time as these models.
Some key differences between our model and the two mentioned are: we found UNet to be extremely helpful in maintaining semantic correspondence across domains, and we found that sharing all convolution filter weights speeds up training while maintaining the same output quality.
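The adaptive-normalization idea mentioned above can be sketched in a few lines: normalize each channel per instance, then apply a domain-specific scale and shift. This is an illustrative NumPy version under assumed shapes, not the code from this repo:

```python
import numpy as np

def adaptive_instance_norm(x, gamma, beta, eps=1e-5):
    """Adaptive instance normalization on one image x of shape (H, W, C).

    gamma and beta are per-channel scale and shift of shape (C,); in an
    image-translation network they would come from the target domain.
    """
    # Per-instance, per-channel statistics over the spatial dimensions.
    mean = x.mean(axis=(0, 1), keepdims=True)
    std = x.std(axis=(0, 1), keepdims=True)
    return gamma * (x - mean) / (std + eps) + beta
```

With gamma = 1 and beta = 0 this reduces to plain instance normalization; swapping in another domain's gamma and beta re-styles the normalized features.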
More documentation can be found under docs/
A lot of the code is adapted from online sources. Here is a non-exhaustive list of the repos from which I borrowed code extensively.
Anime related repos and datasets
Shameless self-promotion of my AniSeg anime object detection & segmentation model.
The all-encompassing anime dataset Danbooru2017 by gwern.
My hand-curated sketch-colored image dataset.
This personal project was developed and open-sourced while I was working for Google, which is why you will see Copyright 2018 Google LLC in each file. This is not an officially supported Google product. See License and Contributing for more details.