# Generating Images with Recurrent Adversarial Networks

A Python (Theano) implementation of *Generating Images with Recurrent Adversarial Networks*, by Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic.

The Generative Recurrent Adversarial Network (GRAN) is a recurrent generative model inspired by the view that unrolling gradient-based optimization yields a recurrent computation that creates images by incrementally adding to a visual "canvas". GRAN is trained adversarially to generate high-quality image samples.
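The canvas idea can be sketched in a few lines of NumPy. This is a toy illustration only, not the Theano implementation in `gran.py`: the random "delta" stands in for the output of GRAN's deconvolutional decoder, and the function name and shapes are hypothetical.

```python
import numpy as np

def gran_sample(T=5, img_shape=(3, 32, 32), rng=None):
    """Toy sketch of GRAN's sampling loop: at each of T time steps the
    generator proposes an update delta_t, and the final image is the
    accumulated canvas passed through a squashing nonlinearity."""
    rng = np.random.default_rng(rng)
    canvas = np.zeros(img_shape)
    for t in range(T):
        # In GRAN this update comes from a decoder that takes the noise
        # vector z together with an encoding of the previous canvas;
        # random noise is used here purely as a placeholder.
        delta = rng.normal(scale=0.1, size=img_shape)
        canvas += delta
    return np.tanh(canvas)  # squash the accumulated canvas into [-1, 1]
```

The point of the structure is that each step can refine what earlier steps drew, rather than generating the whole image in a single pass.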

The Generative Adversarial Metric (GAM) quantitatively compares adversarial networks by having the generators and discriminators of two networks compete against each other.
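The comparison can be summarized with two ratios of discriminator error rates, sketched below. The function and the sample error values are illustrative assumptions, not code from this repository.

```python
import numpy as np

def gam_ratios(err_d1_test, err_d2_test, err_d1_on_g2, err_d2_on_g1):
    """Toy sketch of the Generative Adversarial Metric (GAM).

    Given the classification error rates of two trained discriminators
    (D1 from model M1, D2 from model M2) on held-out real data and on
    the *other* model's generated samples, GAM forms two ratios:

      r_test    -- should be close to 1, i.e. the discriminators are
                   comparably competent on real data
      r_samples -- below 1 means D1 detects M2's fakes more easily than
                   D2 detects M1's fakes, so M1's generator wins
    """
    r_test = err_d1_test / err_d2_test
    r_samples = err_d1_on_g2 / err_d2_on_g1
    return r_test, r_samples
```

For example, with equal test errors of 0.5 but `err_d1_on_g2 = 0.1` and `err_d2_on_g1 = 0.4`, the sample ratio is 0.25, so model M1 would be declared the winner under this metric.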

For more information, see

```
@article{Im2015,
    title={Generating Images with Recurrent Adversarial Networks},
    author={Im, Daniel Jiwoong and Kim, Chris Dongjoo and Jiang, Hui and Memisevic, Roland},
    journal={http://arxiv.org/abs/1602.05110},
    year={2016}
}
```

If you use this code in your research, we kindly ask that you cite the above arXiv paper.

## Dependencies

Python with Theano; see the import statements in the source files for the remaining packages.

## How to set up the LSUN dataset

1. Obtain the LSUN dataset from fyu's repository.
2. Resize the images to 64x64 or 128x128.
3. Split the dataset into train/val/test sets.
4. Update the paths in the provided `paths.yaml`, then run the conversion script:

```
python to_hkl.py <toy/full>
```

Then point the inquire/main file at the preprocessed data, e.g.

```
lsun_datapath='/local/scratch/chris/church/preprocessed_toy_10/'
```
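Steps 2 and 3 above can be sketched as follows. This is a minimal stand-alone illustration: the nearest-neighbour resize is a toy stand-in for a proper image library, the split fractions are assumptions, and the repo's actual preprocessing in `to_hkl.py` may differ.

```python
import numpy as np

def nearest_resize(img, size):
    """Nearest-neighbour resize of an HxWxC image to size x size
    (e.g. 64 or 128), by index selection along both spatial axes."""
    h, w = img.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return img[rows][:, cols]

def split_dataset(images, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle and split a list of images into train/val/test subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))
    n_val = int(len(images) * val_frac)
    n_test = int(len(images) * test_frac)
    val = [images[i] for i in idx[:n_val]]
    test = [images[i] for i in idx[n_val:n_val + n_test]]
    train = [images[i] for i in idx[n_val + n_test:]]
    return train, val, test
```

In practice one would resize every image to the target resolution first, then split, so that all three subsets share the same preprocessing.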

## How to run

The entry points for CIFAR10 and LSUN Church are:

- `./main_granI_cifar10.py`
- `./main_granI_lsun.py`

## How to obtain samples with pretrained models

First download the pretrained model from this Dropbox link, save it to a local folder, and supply the path when prompted.

```
python inquire_samples.py   # obtain nearest-neighbour and sequential samples
python main_granI_lsun.py   # obtain 100 samples from the pretrained model
```

Here are some CIFAR10 samples generated from GRAN:

*(CIFAR10 sample images; see the `figs` folder)*

Here are some LSUN Church samples generated from GRAN:

*(LSUN Church sample images; see the `figs` folder)*

Here are some samples generated from GRAN trained on a mix of the LSUN Living Room and Kitchen datasets:

*(LSUN Living Room / Kitchen sample images; see the `figs` folder)*
