
Spatially Unbiased GANs — Simple TensorFlow Implementation of "Toward Spatially Unbiased Generative Models" (ICCV 2021) [Paper]

Abstract

Recent image generation models show remarkable generation performance. However, they mirror strong location preference in datasets, which we call spatial bias. Therefore, generators render poor samples at unseen locations and scales. We argue that generators rely on their implicit positional encoding to render spatial content. From our observations, the generator's implicit positional encoding is translation-variant, making the generator spatially biased. To address this issue, we propose injecting explicit positional encoding at each scale of the generator. By learning a spatially unbiased generator, we facilitate the robust use of generators in multiple tasks, such as GAN inversion, multi-scale generation, and generation of arbitrary sizes and aspect ratios. Furthermore, we show that our method can also be applied to denoising diffusion probabilistic models.
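
The key mechanism described above, injecting explicit positional encoding at each scale of the generator, can be illustrated with a minimal sketch. The sinusoidal 2D encoding and the additive injection below are illustrative assumptions, not this repository's exact code.

    import tensorflow as tf
    import numpy as np

    def positional_encoding_2d(height, width, channels):
        # Sinusoidal 2D positional encoding of shape (1, height, width, channels).
        # Half of the channels encode the y-coordinate, half the x-coordinate.
        assert channels % 4 == 0, "channels must be divisible by 4"
        c = channels // 2
        freqs = 1.0 / (10000.0 ** (np.arange(0, c, 2) / c))        # (c/2,)
        y = np.arange(height)[:, None] * freqs[None, :]            # (H, c/2)
        x = np.arange(width)[:, None] * freqs[None, :]             # (W, c/2)
        pe_y = np.concatenate([np.sin(y), np.cos(y)], axis=-1)     # (H, c)
        pe_x = np.concatenate([np.sin(x), np.cos(x)], axis=-1)     # (W, c)
        pe = np.concatenate([
            np.tile(pe_y[:, None, :], (1, width, 1)),              # repeat along x
            np.tile(pe_x[None, :, :], (height, 1, 1)),             # repeat along y
        ], axis=-1)                                                # (H, W, 2c)
        return tf.constant(pe[None, ...], dtype=tf.float32)

    # One possible injection point: add the encoding to the feature map at the
    # start of a resolution block, at every scale of the generator.
    feat = tf.random.normal([4, 16, 16, 64])                       # toy feature map
    feat = feat + positional_encoding_2d(16, 16, 64)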

Requirements

  • TensorFlow >= 2.x

Usage

├── dataset
│   └── YOUR_DATASET_NAME
│       ├── 000001.jpg
│       ├── 000002.png
│       └── ...
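
For reference, an input pipeline matching this layout could be built with tf.data; build_dataset and its defaults below are hypothetical, and the repository's own loader may differ.

    import tensorflow as tf

    def build_dataset(folder, img_size=256, batch_size=4):
        ds = tf.data.Dataset.list_files(folder + "/*")
        def load(path):
            # Handles both .jpg and .png files in the folder.
            img = tf.io.decode_image(tf.io.read_file(path),
                                     channels=3, expand_animations=False)
            img = tf.image.resize(img, [img_size, img_size])
            return img / 127.5 - 1.0  # scale pixels to [-1, 1]
        return (ds.map(load, num_parallel_calls=tf.data.AUTOTUNE)
                  .shuffle(1000)
                  .batch(batch_size)
                  .prefetch(tf.data.AUTOTUNE))

    dataset = build_dataset("dataset/YOUR_DATASET_NAME")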

Train

> python main.py --dataset FFHQ --phase train --img_size 256 --batch_size 4 --n_total_image 6400
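
The Results section below reports an 8-GPU run. As a hedged sketch of how such a run could be distributed in TensorFlow (main.py may wire this differently), tf.distribute.MirroredStrategy mirrors variables across local GPUs:

    import tensorflow as tf

    strategy = tf.distribute.MirroredStrategy()
    print("replicas:", strategy.num_replicas_in_sync)

    with strategy.scope():
        # Build the generator and discriminator here. Variables created inside
        # this scope are replicated across GPUs, so a per-GPU batch of 4 becomes
        # a global batch of 4 * num_replicas_in_sync (32 on 8 GPUs).
        pass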

Generate Video

> python generate_video.py
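
generate_video.py itself is not shown here. As a rough illustration, generation videos are typically rendered by interpolating between latent codes frame by frame; everything in this sketch (lerp_video, the generator callable, the imageio dependency) is assumed for illustration.

    import numpy as np
    import tensorflow as tf
    import imageio  # external dependency, not listed in Requirements

    def lerp_video(generator, z_dim=512, steps=120, path="out.mp4"):
        # Render frames while linearly interpolating between two random latents.
        z0, z1 = np.random.randn(2, 1, z_dim).astype(np.float32)
        writer = imageio.get_writer(path, fps=30)
        for t in np.linspace(0.0, 1.0, steps):
            z = tf.constant((1 - t) * z0 + t * z1)
            img = generator(z)  # expected shape (1, H, W, 3), values in [-1, 1]
            frame = ((img[0].numpy() + 1) * 127.5).clip(0, 255).astype(np.uint8)
            writer.append_data(frame)
        writer.close()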

Results

  • FID: 3.81 (6.4M images / 200k iterations, 8 GPUs, batch size 4 per GPU)
    • FID reported in the paper: 6.75

Video

Uncurated

Style mixing

  • Style-mixing results are worse than those of StyleGAN2.
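
For context, style mixing in StyleGAN-style models maps two latents to w codes and takes the coarse layers' styles from one and the fine layers' styles from the other. The mapping/synthesis API below is hypothetical; the repository's modules may be organized differently.

    import tensorflow as tf

    def style_mix(mapping, synthesis, z1, z2, crossover, num_layers):
        # Coarse layers (below the crossover index) use w1, fine layers use w2.
        w1 = mapping(z1)  # (1, w_dim)
        w2 = mapping(z2)
        styles = [w1 if i < crossover else w2 for i in range(num_layers)]
        return synthesis(tf.stack(styles, axis=1))  # (1, num_layers, w_dim)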

Truncation trick
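
The truncation trick here is the standard one from the StyleGAN line of work: mapped latents are pulled toward the running average w code, trading diversity for fidelity. A minimal sketch:

    def truncate(w, w_avg, psi=0.7):
        # psi = 1.0 disables truncation; smaller psi pulls samples toward w_avg.
        return w_avg + psi * (w - w_avg)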

Reference

Author

Junho Kim
