
# WGAN-GP (NeurIPS'2017)

> [Improved Training of Wasserstein GANs](https://arxiv.org/abs/1704.00028)

> **Task**: Unconditional GANs

## Abstract

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.
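The gradient penalty at the heart of the method can be sketched in a few lines. Below is a minimal PyTorch illustration, not this repository's implementation; the `critic`, `real`, `fake`, and `gp_lambda` names are assumptions (the paper defaults to a penalty weight of 10, while the LSUN-Bedroom run in the table below uses 50).

```python
import torch

def gradient_penalty(critic, real, fake, gp_lambda=10.0):
    """Penalize the critic's gradient norm at points interpolated
    between real and generated samples (the WGAN-GP objective)."""
    batch_size = real.size(0)
    # One random interpolation coefficient per sample, broadcast over CHW.
    alpha = torch.rand(batch_size, 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interpolates)
    # Gradient of the critic's output with respect to its input.
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interpolates,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty is differentiable
    )[0]
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    # Two-sided penalty: push the gradient norm toward 1.
    return gp_lambda * ((grad_norm - 1) ** 2).mean()
```

This term is added to the critic's loss in place of weight clipping, which is what allows stable training across the architectures mentioned in the abstract.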

## Results and models

**WGAN-GP 128 on CelebA-Cropped and LSUN-Bedroom**

| Model       | Dataset        | Details            | SWD                           | MS-SSIM | Download |
| :---------: | :------------: | :----------------: | :---------------------------: | :-----: | :------: |
| WGAN-GP 128 | CelebA-Cropped | GN                 | 5.87, 9.76, 9.43, 18.84/10.97 | 0.2601  | model    |
| WGAN-GP 128 | LSUN-Bedroom   | GN, GP-lambda = 50 | 11.7, 7.87, 9.82, 25.36/13.69 | 0.059   | model    |

## Citation

```bibtex
@article{gulrajani2017improved,
  title={Improved Training of Wasserstein GANs},
  author={Gulrajani, Ishaan and Ahmed, Faruk and Arjovsky, Martin and Dumoulin, Vincent and Courville, Aaron},
  journal={arXiv preprint arXiv:1704.00028},
  year={2017},
  url={https://arxiv.org/abs/1704.00028},
}
```