TF-GAN is a lightweight library for training and evaluating Generative Adversarial Networks (GANs).
- Can be installed with `pip install tensorflow-gan`, and used with `import tensorflow_gan as tfgan`
- Well-tested examples
- Interactive introduction to TF-GAN in Colab notebooks
## Structure of the TF-GAN Library
TF-GAN is composed of several parts, which are designed to exist independently:
- Core: the main infrastructure needed to train a GAN. Set up training with any combination of TF-GAN library calls, custom code, native TF code, and other frameworks.
- Features: common GAN operations and normalization techniques, such as instance normalization and conditioning.
- Losses: losses and penalties, such as the Wasserstein loss, gradient penalty, mutual information penalty, etc.
- Evaluation: standard GAN evaluation metrics. Use Inception Score, Frechet Distance, or Kernel Distance with a pretrained Inception network to evaluate your unconditional generative model. You can also use your own pretrained classifier for more specific performance numbers, or use other methods for evaluating conditional generative models.
- Examples: simple examples of how to use TF-GAN, and more complicated state-of-the-art examples.
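As a concrete illustration of the losses listed above, the Wasserstein generator and discriminator losses reduce to simple means of the discriminator (critic) outputs. The sketch below is plain Python for readability; the real implementations in `tfgan.losses` operate on tensors:

```python
# Pure-Python sketch of the Wasserstein GAN losses. TF-GAN's versions in
# tfgan.losses do the same arithmetic on tensors (plus summaries etc.).

def mean(xs):
    return sum(xs) / len(xs)

def wasserstein_discriminator_loss(d_real, d_fake):
    # The critic is trained to score real samples high and fake samples low,
    # so it minimizes mean(fake scores) - mean(real scores).
    return mean(d_fake) - mean(d_real)

def wasserstein_generator_loss(d_fake):
    # The generator is trained to push the critic's scores on its samples up.
    return -mean(d_fake)

d_real = [0.9, 1.1, 1.0]  # critic scores on real samples
d_fake = [0.0, 0.5, 1.0]  # critic scores on generated samples
print(wasserstein_discriminator_loss(d_real, d_fake))  # -0.5
print(wasserstein_generator_loss(d_fake))              # -0.5
```

In practice this loss is paired with a constraint on the critic, such as the gradient penalty that TF-GAN also provides.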
## Who uses TF-GAN?
Numerous projects inside Google. The following are some published papers that use TF-GAN:
- Self-Attention Generative Adversarial Networks
- Large Scale GAN Training for High Fidelity Natural Image Synthesis
- GANSynth: Adversarial Neural Audio Synthesis
- Boundless: Generative Adversarial Networks for Image Extension
- NetGAN: Generating Graphs via Random Walks
- Discriminator rejection sampling
- Generative Models for Effective ML on Private, Decentralized Datasets
- Semantic Pyramid for Image Generation
- GAN-Mediated Cell Images Batch Equalization
- Are GANs Created Equal? A Large-Scale Study
- The GAN Landscape: Losses, Architectures, Regularization, and Normalization
- Assessing Generative Models via Precision and Recall
- High-Fidelity Image Generation With Fewer Labels
## Training a GAN model
Training in TF-GAN typically consists of the following steps:
1. Specify the input to your networks.
2. Set up your generator and discriminator using a `GANModel`.
3. Specify your loss using a `GANLoss`.
4. Create your train ops using a `GANTrainOps`.
5. Run your train ops.
At each stage, you can either use TF-GAN's convenience functions, or you can perform the step manually for fine-grained control.
There are various types of GAN setup. For instance, you can train a generator to sample unconditionally from a learned distribution, or you can condition on extra information such as a class label. TF-GAN is compatible with many setups, and we demonstrate several of them in the well-tested examples directory.
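One common way to condition on a class label is to append a one-hot encoding of the label to the generator's noise input (TF-GAN's features module provides tensor-level conditioning ops for this). A minimal pure-Python sketch of the idea:

```python
# Sketch of class-conditional generator input: concatenate a one-hot label
# encoding onto the noise vector. Plain Python lists stand in for tensors.

def one_hot(label, num_classes):
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

def conditioned_input(noise, label, num_classes):
    # The generator then sees both the noise and the desired class.
    return noise + one_hot(label, num_classes)

z = [0.3, -1.2, 0.7]                      # noise sample
gen_input = conditioned_input(z, label=2, num_classes=4)
# gen_input -> [0.3, -1.2, 0.7, 0.0, 0.0, 1.0, 0.0]
```

The discriminator is typically given the same conditioning information, so that it can penalize samples that do not match their label.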
## Maintainers

- David Westbrook (documentation)
- Joel Shor
- Aaron Sarna
- Yoel Drori