StackGAN-pytorch

Original Repository hanzhanggit/StackGAN-Pytorch

PyTorch implementation for reproducing COCO results from the paper StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks by Han Zhang, Tao Xu, Hongsheng Li, Shaoting Zhang, Xiaogang Wang, Xiaolei Huang, and Dimitris Metaxas. The network structure differs slightly from the TensorFlow implementation.

Environment Setup (Linux)

Install conda (if not available)

  • git clone https://github.com/Redcof/StackGAN-Pytorch.git
  • wget https://repo.anaconda.com/miniconda/Miniconda3-py38_23.3.1-0-Linux-x86_64.sh
  • bash Miniconda3-py38_23.3.1-0-Linux-x86_64.sh -b
  • $HOME/miniconda3/bin/conda init
  • source $HOME/.bashrc

Create environment

  • conda create -n ganenv python=3.8
  • conda activate ganenv

Install dependencies

  • pip install -r requirements.txt
  • conda install -c conda-forge fasttext
  • conda install pytorch torchvision pytorch-cuda=11.8 -c pytorch -c nvidia
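After installing, it can help to confirm that the key packages actually resolve before moving on. A minimal sketch (the module names are taken from the install commands above; this is not a script from the repo):

```python
import importlib.util

def check_deps(modules):
    """Return a dict mapping each module name to whether it can be imported."""
    return {m: importlib.util.find_spec(m) is not None for m in modules}

# Module names assumed from the install steps above.
print(check_deps(["torch", "torchvision", "fasttext"]))
```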

Install CUDA drivers (if not available)

How to check?

python cuda_test.py # should return True
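The contents of cuda_test.py are not shown here; a check of this kind typically boils down to torch.cuda.is_available(), the standard PyTorch CUDA probe. A hedged sketch that degrades gracefully when PyTorch is missing (the repo's actual script may differ):

```python
def cuda_available():
    # torch.cuda.is_available() reports whether PyTorch can see a usable
    # CUDA device; the repo's actual cuda_test.py may do more than this.
    try:
        import torch
        return torch.cuda.is_available()
    except ImportError:
        return False

print(cuda_available())
```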

Check the OS and architecture: cat /etc/os-release prints the OS name, and uname -m prints the machine architecture. For us, it was x86_64.

Download the CUDA Toolkit from https://developer.nvidia.com/cuda-11-7-0-download-archive?target_os=Linux

We chose the network (online) install:

sudo dnf config-manager --add-repo https://developer.download.nvidia.com/compute/cuda/repos/rhel8/x86_64/cuda-rhel8.repo
sudo dnf clean all
sudo dnf -y module install nvidia-driver:latest-dkms
sudo dnf -y install cuda

Data - Text

  1. Download our preprocessed char-CNN-RNN text embeddings for training and evaluating COCO, and save them to data/coco.

  2. [Optional] Follow the instructions at reedscot/icml2016 to download the pretrained char-CNN-RNN text encoders and extract text embeddings.
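The preprocessed embeddings are distributed as Python pickles. A minimal loading sketch; the file path in the usage comment and the array layout are assumptions, not guaranteed by the repo:

```python
import pickle

def load_embeddings(path):
    # The preprocessed char-CNN-RNN embeddings are pickled Python objects;
    # encoding="latin1" is commonly needed for pickles written under Python 2.
    with open(path, "rb") as f:
        return pickle.load(f, encoding="latin1")

# Hypothetical usage (the exact file name is an assumption):
# embeddings = load_embeddings("data/coco/train/char-CNN-RNN-embeddings.pickle")
```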

Data - Image

  1. Download the COCO image data and extract it to data/coco/.

Custom Dataset

  1. See the data/README.md file.

Training COCO

  • The steps to train a StackGAN model on the COCO dataset using our preprocessed embeddings.
    • Step 1: train Stage-I GAN (e.g., for 120 epochs) python code/main.py --cfg cfg/coco_s1.yml --gpu 0
    • Step 2: train Stage-II GAN (e.g., for another 120 epochs) python code/main.py --cfg cfg/coco_s2.yml --gpu 1
  • *.yml files are example configuration files for training/evaluating our models.
  • If you want to try your own datasets, here are some good tips on how to train GANs. We also encourage you to try different hyper-parameters and architectures, especially for more complex datasets.
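The *.yml files use a simple key/value layout read at startup. A hypothetical fragment in the spirit of cfg/coco_s1.yml; every key name and value below is an assumption for illustration, not the repo's actual config:

```yaml
# Hypothetical Stage-I training config (key names are assumptions).
DATASET_NAME: coco
DATA_DIR: data/coco
STAGE: 1
TRAIN:
  FLAG: true
  BATCH_SIZE: 128
  MAX_EPOCH: 120          # matches the ~120 epochs suggested above
  GENERATOR_LR: 0.0002
  DISCRIMINATOR_LR: 0.0002
```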

Pretrained Model

  • StackGAN for COCO: download and save it to models/coco.
  • Our current implementation achieves a higher inception score (10.62 ± 0.19) than the one reported in the StackGAN paper.
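For reference, the inception score quoted above is exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) is a classifier's class distribution for a generated image and p(y) is the marginal over all generated images. A minimal pure-Python sketch of the formula (the actual evaluation runs a pretrained Inception network, which is omitted here):

```python
import math

def inception_score(probs):
    """probs: list of per-image class-probability vectors p(y|x)."""
    n, k = len(probs), len(probs[0])
    # Marginal class distribution p(y), averaged over all images.
    marginal = [sum(p[j] for p in probs) / n for j in range(k)]
    # Mean KL divergence between each conditional p(y|x) and the marginal.
    mean_kl = sum(
        sum(p[j] * math.log(p[j] / marginal[j]) for j in range(k) if p[j] > 0)
        for p in probs
    ) / n
    return math.exp(mean_kl)

# Confident, diverse predictions score high; identical predictions score 1.0.
```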

Evaluating

  • Run python code/main.py --cfg cfg/coco_eval.yml --gpu 2 to generate samples from captions in the COCO validation set.

Examples for COCO:

Save your favorite pictures generated by our models: the randomness from the noise z and from conditioning augmentation makes them creative enough to generate objects with different poses and viewpoints from the same description 😃
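Conditioning augmentation, as described in the StackGAN paper, samples the conditioning vector from a Gaussian whose mean and (log) standard deviation are predicted from the text embedding; this is the second source of randomness mentioned above. A minimal sketch of just the reparameterized sampling step (the network producing mu and log_sigma is omitted, and this is not the repo's code):

```python
import math
import random

def conditioning_augmentation(mu, log_sigma, eps=None):
    # Reparameterized sample: c_i = mu_i + exp(log_sigma_i) * eps_i,
    # with eps_i ~ N(0, 1). Passing eps explicitly makes this deterministic.
    if eps is None:
        eps = [random.gauss(0.0, 1.0) for _ in mu]
    return [m + math.exp(ls) * e for m, ls, e in zip(mu, log_sigma, eps)]
```

Sampling a fresh eps for every forward pass is what lets one caption yield many distinct images.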

Citing StackGAN

If you find StackGAN useful in your research, please consider citing:

@inproceedings{han2017stackgan,
  author    = {Han Zhang and Tao Xu and Hongsheng Li and Shaoting Zhang and Xiaogang Wang and Xiaolei Huang and Dimitris Metaxas},
  title     = {StackGAN: Text to Photo-realistic Image Synthesis with Stacked Generative Adversarial Networks},
  booktitle = {{ICCV}},
  year      = {2017},
}

Our follow-up work

References

  • Generative Adversarial Text-to-Image Synthesis Paper Code
  • Learning Deep Representations of Fine-grained Visual Descriptions Paper Code
