Content Addressable GAN (CADGAN)

Repository containing resources from our paper:

Kernel Mean Matching for Content Addressability of GANs
Wittawat Jitkrittum*, Patsorn Sangkloy*, Muhammad Waleed Gondal, Amit Raj, James Hays, Bernhard Schölkopf
ICML 2019
(* Equal contribution)
https://arxiv.org/abs/1905.05882
  • Full paper (main text + supplement) on arXiv (file size: 36 MB)
  • Main text only: here (file size: 7.3 MB)
  • Supplementary material only: here (file size: 32 MB)

We propose a novel procedure that adds content-addressability to any given unconditional implicit model, e.g., a generative adversarial network (GAN). The procedure allows users to control the generative process by specifying a set (of arbitrary size) of desired examples, based on which similar samples are generated from the model. The proposed approach, based on kernel mean matching, is applicable to any generative model that transforms latent vectors to samples, and does not require retraining of the model. Experiments on various high-dimensional image generation problems (CelebA-HQ, LSUN bedroom, bridge, and tower) show that our approach generates images that are consistent with the input set, while retaining the image quality of the original model. To our knowledge, this is the first work that attempts to construct, at test time, a content-addressable generative model from a trained marginal (unconditional) model.
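To make the idea concrete, below is a minimal sketch of kernel mean matching with a frozen generator. It is not the released implementation: the generator `G`, feature extractor `phi`, Gaussian kernel, bandwidth `sigma`, and all hyperparameters are illustrative assumptions.

    # Hypothetical sketch (not the official CADGAN code).
    # Given a pretrained, frozen generator G and a feature extractor phi,
    # optimize latent vectors so that the mean feature embedding of the
    # generated samples matches the mean embedding of the input images
    # (a maximum mean discrepancy objective). No retraining of G is done.
    import torch

    def gaussian_kernel(x, y, sigma=1.0):
        # x: (n, d), y: (m, d) feature matrices
        d2 = torch.cdist(x, y) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))

    def mmd2(gen_feats, input_feats, sigma=1.0):
        # Squared MMD between generated and input feature sets (biased estimate).
        k_gg = gaussian_kernel(gen_feats, gen_feats, sigma).mean()
        k_ii = gaussian_kernel(input_feats, input_feats, sigma).mean()
        k_gi = gaussian_kernel(gen_feats, input_feats, sigma).mean()
        return k_gg + k_ii - 2 * k_gi

    def match_latents(G, phi, input_images, n_samples=4, latent_dim=512,
                      steps=500, lr=0.05, sigma=1.0):
        # G and phi are assumed frozen; only the latent vectors z are optimized.
        with torch.no_grad():
            input_feats = phi(input_images)
        z = torch.randn(n_samples, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            gen_feats = phi(G(z))
            loss = mmd2(gen_feats, input_feats, sigma)
            loss.backward()
            opt.step()
        return z.detach()

The optimized latents can then be passed through `G` to obtain samples that are consistent with the input set.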

Code coming soon!

Examples

We consider a GAN model from Mescheder et al., 2018, pretrained on CelebA-HQ. We run our proposed procedure using the three images (with borders) at the corners as the input. All images inside the triangle are outputs of our procedure. Each output image is positioned so that its closeness to a corner (an input image) indicates the weight given to the corresponding input image; a sketch of how such weights could enter the objective is given below.
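As a rough illustration (again an assumption, not the released code), per-input weights can replace the uniform average over input images in the sketch above, so that each input contributes proportionally to the target mean embedding:

    # Hypothetical extension of the sketch above: per-input weights w (non-negative,
    # summing to 1) determine how strongly each input image shapes the target.
    def weighted_mmd2(gen_feats, input_feats, weights, sigma=1.0):
        # weights: (m,) tensor of barycentric weights over the input images
        k_gg = gaussian_kernel(gen_feats, gen_feats, sigma).mean()
        k_ii = weights @ gaussian_kernel(input_feats, input_feats, sigma) @ weights
        k_gi = (gaussian_kernel(gen_feats, input_feats, sigma) @ weights).mean()
        return k_gg + k_ii - 2 * k_gi

Sweeping the weights over the simplex spanned by the three inputs would produce the kind of interpolation grid shown in the figure.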
