TwinGAN -- Unsupervised Image Translation for Human Portraits

(Figures: translation results, identity preservation, and search-engine demo examples.)

Use the Pre-trained Models

We provide two pre-trained models: human-to-anime and human-to-cat.

Run the following command to translate the demo inputs.

python inference/image_translation_infer.py \
--model_path="/PATH/TO/MODEL/256/" \
--image_hw=256 \
--input_tensor_name="sources_ph" \
--output_tensor_name="custom_generated_t_style_source:0" \
--input_image_path="./demo/inference_input/" \
--output_image_path="./demo/inference_output/"

The input_image_path flag can point to either a single image or a directory containing images.
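
Conceptually, those flags map onto a plain TensorFlow session: load the exported model, feed the input placeholder, and fetch the output tensor by name. Below is a minimal sketch (TF 1.x, matching the 2018 codebase) of that mapping; the SavedModel layout, the ":0" tensor suffixes, and the [0, 1] input/output ranges are my assumptions, so treat inference/image_translation_infer.py as the authoritative implementation.

```python
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the 2018 codebase
from PIL import Image

MODEL_DIR = "/PATH/TO/MODEL/256/"  # same placeholder as the command above
IMAGE_HW = 256

def translate_one(image_path, output_path):
    with tf.Session(graph=tf.Graph()) as sess:
        # Assumes the pre-trained model directory is a TF SavedModel export.
        tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], MODEL_DIR)
        img = Image.open(image_path).convert("RGB").resize((IMAGE_HW, IMAGE_HW))
        # Assumes the model expects a [0, 1] float batch of shape [N, H, W, 3].
        batch = np.expand_dims(np.asarray(img, np.float32) / 255.0, axis=0)
        # Feed the input placeholder and fetch the output tensor by name,
        # mirroring --input_tensor_name and --output_tensor_name above.
        out = sess.run("custom_generated_t_style_source:0",
                       feed_dict={"sources_ph:0": batch})
        out = np.clip(out[0], 0.0, 1.0)  # assumes output is also in [0, 1]
        Image.fromarray((out * 255).astype(np.uint8)).save(output_path)

translate_one("./demo/inference_input/example.png",
              "./demo/inference_output/example.png")
```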

For more information, see the documentation on inference and evaluation, and on the web interface.

Training

Download CelebA and the Getchu dataset by following the datasets guide. Then train your model using the scripts from the training guide.

Blog and Technical Report

An English blog post and a Chinese (中文) blog post were published in early April 2018; both are aimed at readers with less technical background.

(Figure: network setup.)

(Figure: conv layer structure.)

Please refer to the technical report for details on the network structure and losses.

Extra materials:

Presentation Slides at Anime Expo 2018

Related works

Our idea of using adaptive normalization parameters for image translation is not unique. To the best of our knowledge, at least two other works share similar ideas: MUNIT and EG-UNIT. Our model was developed around the same time as these models.

Some key differences between our model and the two mentioned: we find UNet to be extremely helpful in maintaining semantic correspondence across domains, and we find that sharing all convolution filter weights speeds up training while maintaining the same output quality.
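
To make the weight-sharing point concrete, here is a minimal sketch (my own illustration in TF 1.x, not code from this repo; the function name and scoping scheme are hypothetical) of a conv block whose convolution filters are created once and shared by both domains, while each domain keeps its own normalization parameters:

```python
import tensorflow as tf  # TF 1.x API, matching the 2018 codebase

def shared_conv_domain_norm(x, domain, filters, scope, is_training=True):
    """Conv block: filters shared across domains, norm params per domain."""
    with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
        # Created once; every domain passing through this scope reuses them.
        y = tf.layers.conv2d(x, filters, kernel_size=3, padding="same",
                             name="shared_conv")
    with tf.variable_scope("%s_norm_%s" % (scope, domain)):
        # Domain-specific scale/offset: the adaptive normalization parameters.
        y = tf.layers.batch_normalization(y, training=is_training)
    return tf.nn.relu(y)

# Both calls reuse the same convolution kernel but learn separate norm params.
human = tf.placeholder(tf.float32, [None, 256, 256, 3])
anime = tf.placeholder(tf.float32, [None, 256, 256, 3])
h_feat = shared_conv_domain_norm(human, "human", 64, "enc1")
a_feat = shared_conv_domain_norm(anime, "anime", 64, "enc1")
```

Because only the lightweight normalization parameters are duplicated, nearly all trainable weights are shared between domains, which is what allows both domains to train through one set of filters.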

Documentations

More documentation can be found under docs/.

Reference

A lot of the code is adapted from other open-source projects. Here is a non-exhaustive list of the repos I borrowed code from extensively.

TF-Slim image models library

PGGAN

Anime related repos and datasets

Shameless self-promotion of my AniSeg anime object detection & segmentation model.

Sketch coloring using PaintsTransfer and PaintsChainer.

Create anime portraits at Crypko and MakeGirlsMoe.

The all-encompassing anime dataset Danbooru2017 by gwern.

My hand-curated sketch-colored image dataset.

Disclaimer

This personal project was developed and open-sourced while I was working for Google, which is why you see Copyright 2018 Google LLC in each file. This is not an officially supported Google product. See License and Contributing for more details.
