GDWCT

Official PyTorch implementation of GDWCT (CVPR 2019, Oral)



This repository provides the official PyTorch implementation of GDWCT.

Paper

Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation (link)
Wonwoong Cho¹, Sungha Choi¹﹐², David Keetae Park¹, Inkyu Shin³, Jaegul Choo¹
¹Korea University, ²LG Electronics, ³Hanyang University
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019 (Oral)

Additional resources for understanding the paper

Comparison with baselines on the CelebA dataset (figure)

Comparison with baselines on the Artworks dataset (figure)


Prerequisites

  • Python 3.6
  • PyTorch 0.4.0+
  • Linux and an NVIDIA GPU with CUDA and cuDNN
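
A quick way to verify that an environment meets these requirements is the following minimal sketch, which uses only standard PyTorch calls:

import torch

print(torch.__version__)             # should be 0.4.0 or newer
print(torch.cuda.is_available())     # should be True (NVIDIA GPU + CUDA)
print(torch.backends.cudnn.enabled)  # should be True (cuDNN available)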

Instructions

Installation

git clone https://github.com/WonwoongCho/GDWCT.git
cd GDWCT

Dataset

  1. Artworks dataset: Please go to the GitHub repository of CycleGAN (link) and download the monet2photo, cezanne2photo, ukiyoe2photo, and vangogh2photo datasets.

  2. CelebA dataset: Our data loader requires the data to be organized into 'trainA', 'trainB', 'testA', and 'testB' subdirectories. Hence, after downloading the CelebA dataset, you need to preprocess it by splitting the images according to the target attribute of the translation, e.g., A: Male, B: Female (a preprocessing sketch is given after this list).
    The CelebA dataset can be easily downloaded with the following script.

bash download.sh celeba
  3. BAM dataset: Similar to CelebA, you need to preprocess the data after downloading it. Downloading is possible only after you complete a given task (segmentation labeling). Please go to the link in order to download it.

We would like to provide the exact data used in the paper, but because it is a preprocessed version of the original datasets, we are not permitted to redistribute it. We apologize for this.
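
As referenced above, here is a minimal sketch of the CelebA preprocessing step. It is an illustration, not the script used in the paper: it assumes the aligned images (img_align_celeba) and the standard list_attr_celeba.txt annotation file, and the attribute name, output paths, and train/test split size are all assumptions.

import os, shutil

ATTR_FILE = 'list_attr_celeba.txt'  # standard CelebA attribute annotations
IMG_DIR = 'img_align_celeba'        # aligned CelebA images
OUT_DIR = 'datasets/celeba_male'    # output root (assumption)
ATTR = 'Male'                       # A: Male, B: Female
N_TEST = 2000                       # held-out images per domain (assumption)

with open(ATTR_FILE) as f:
    f.readline()                    # first line: number of images
    names = f.readline().split()    # second line: the 40 attribute names
    col = names.index(ATTR)
    a, b = [], []                   # A: attribute == 1, B: attribute == -1
    for line in f:
        parts = line.split()
        (a if parts[1 + col] == '1' else b).append(parts[0])

for domain, files in (('A', a), ('B', b)):
    for split, subset in (('test', files[:N_TEST]), ('train', files[N_TEST:])):
        dst = os.path.join(OUT_DIR, split + domain)  # trainA, trainB, testA, testB
        os.makedirs(dst, exist_ok=True)
        for name in subset:
            shutil.copy(os.path.join(IMG_DIR, name), dst)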

Train and Test

Settings and hyperparameters are specified in the config.yaml file; please refer to the descriptions provided there as comments. Once configured, GDWCT can be trained or tested with the following script (NOTE: the values of 'MODE', 'LOAD_MODEL', and 'START' must be changed if you want to test the model):

python run.py
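
If you prefer to toggle those fields from a script rather than editing config.yaml by hand, the following minimal sketch uses PyYAML; treating config.yaml as a flat YAML mapping with these keys is an assumption about this repository's file.

import yaml

with open('config.yaml') as f:
    cfg = yaml.safe_load(f)

cfg['MODE'] = 'test'      # switch from training to testing
cfg['LOAD_MODEL'] = True  # load a saved checkpoint
cfg['START'] = 320000     # iteration of the checkpoint to load

with open('config.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)

Note that yaml.safe_dump does not preserve the comments in the file, so editing by hand keeps the inline documentation intact.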

Pretrained models

Run the following script if you need to download the pretrained models (Smile <=> Non-Smile, Bangs <=> Non-Bangs). They will be downloaded and unzipped into the ./pretrained_models/ directory.

bash download.sh pretrained

In order to test the pretrained models, please change several options in the config file, as shown below.
For example, if the name of a pretrained model is G_A_CelebA_Bangs_G4_320000.pth, set:

N_GROUP: 4
SAVE_NAME: CelebA_Bangs_G4
MODEL_SAVE_PATH: pretrained_models/
START: 320000
LOAD_MODEL: True
MODE: test
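
To sanity-check a downloaded checkpoint before running the full test, the following minimal sketch loads it on the CPU; assuming the .pth file holds a plain state dict is a guess about how the checkpoints were saved.

import torch

# Inspect the checkpoint on the CPU; no GPU is required for this check.
state = torch.load('pretrained_models/G_A_CelebA_Bangs_G4_320000.pth',
                   map_location='cpu')
print(type(state))  # typically a dict of parameter tensors
if isinstance(state, dict):
    for k in list(state)[:5]:  # peek at the first few parameter names
        print(k, getattr(state[k], 'shape', None))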

Results

Citation

Please cite our paper if our work, including this code, is helpful for your research.

@InProceedings{GDWCT2019,
  author    = {Wonwoong Cho and Sungha Choi and David Keetae Park and Inkyu Shin and Jaegul Choo},
  title     = {Image-to-Image Translation via Group-wise Deep Whitening-and-Coloring Transformation},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2019}
}