Text To Image Synthesis
This is an experimental TensorFlow implementation of text-to-image synthesis. Images are synthesized using the GAN-CLS algorithm from the paper Generative Adversarial Text-to-Image Synthesis. The implementation is built on top of the excellent DCGAN in Tensorflow.
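The core idea of GAN-CLS is that the discriminator scores (image, text) pairs, not images alone: it should accept a real image with its matching caption, and reject both a real image with a mismatched caption and a generated image with a matching caption. A minimal NumPy sketch of that objective (function names and the 0.5 weighting follow the paper's formulation; this is illustrative, not the repository's actual loss code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(logits, labels):
    # binary cross-entropy on raw discriminator logits
    p = sigmoid(logits)
    return -np.mean(labels * np.log(p + 1e-12) + (1 - labels) * np.log(1 - p + 1e-12))

def discriminator_loss(d_real_match, d_real_mismatch, d_fake_match):
    # GAN-CLS: real image + matching text -> real (1);
    # real image + mismatching text -> fake (0);
    # generated image + matching text -> fake (0).
    return (bce(d_real_match, np.ones_like(d_real_match))
            + 0.5 * (bce(d_real_mismatch, np.zeros_like(d_real_mismatch))
                     + bce(d_fake_match, np.zeros_like(d_fake_match))))

def generator_loss(d_fake_match):
    # the generator wants its (fake image, matching text) pair scored as real
    return bce(d_fake_match, np.ones_like(d_fake_match))
```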
Image Source : Generative Adversarial Text-to-Image Synthesis Paper
- The model is currently trained on the flowers dataset. Download the images from here and save them in
`102flowers/102flowers/*.jpg`. Also download the captions from this link. Extract the archive, copy the
`text_c10` folder and paste it in
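Before training, it is worth checking that the data landed where the scripts expect it. A hypothetical sanity check for the layout described above; the `text_c10` location is an assumption, since the README leaves the destination unspecified:

```python
import glob
import os

def check_dataset(root="102flowers"):
    # count the downloaded images and confirm the caption folder exists;
    # the captions path here is an assumption, not taken from the README
    images = glob.glob(os.path.join(root, "102flowers", "*.jpg"))
    has_captions = os.path.isdir(os.path.join(root, "text_c10"))
    return len(images), has_captions
```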
N.B. You can download all the required data files manually, or simply run `downloads.py` and move the files to the correct directories.
- `downloads.py` downloads the Oxford-102 flower dataset and caption files (run this first).
- `data_loader.py` loads the data for further processing.
- `train_txt2im.py` trains a text-to-image model.
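A typical first step in a loader like `data_loader.py` is turning raw captions into fixed-length token-id sequences. A small sketch of that preprocessing, assuming a plain whitespace/regex tokenizer (the repository's actual tokenization may differ):

```python
import re
from collections import Counter

def build_vocab(captions, min_freq=1):
    # simple lowercase regex tokenizer; illustrative only
    counts = Counter(tok for c in captions
                     for tok in re.findall(r"[a-z']+", c.lower()))
    itos = ["<pad>", "<unk>"] + [w for w, n in counts.most_common() if n >= min_freq]
    stoi = {w: i for i, w in enumerate(itos)}
    return stoi, itos

def encode(caption, stoi, max_len=20):
    # map tokens to ids, then pad or truncate to a fixed length
    ids = [stoi.get(t, stoi["<unk>"]) for t in re.findall(r"[a-z']+", caption.lower())]
    return (ids + [stoi["<pad>"]] * max_len)[:max_len]
```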
Deployment of Web Application
- Upload all the trained model files (`.npz`) and the web app files to your web server or domain.
- Run `input.php` to provide the input text.
- Enter the input, submit, and get the desired output.
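The web app relies on the trained parameters being stored as `.npz` archives. A minimal sketch of loading such a file back into a name-to-array dictionary (the file and parameter names here are hypothetical, not the repository's actual checkpoint layout):

```python
import numpy as np

def load_params(path):
    # read every array stored in the .npz archive into a plain dict
    with np.load(path) as f:
        return {k: f[k] for k in f.files}

# hypothetical round trip: save demo parameters, then reload them
np.savez("demo_net.npz", w0=np.zeros((4, 4)), b0=np.zeros(4))
params = load_params("demo_net.npz")
```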
- Generative Adversarial Text-to-Image Synthesis Paper
- Generative Adversarial Text-to-Image Synthesis Torch Code
- Skip Thought Vectors Paper
- Skip Thought Vectors Code
- Generative Adversarial Text-to-Image Synthesis with Skip Thought Vectors TensorFlow code
- DCGAN in Tensorflow
- Example caption: "these white flowers have petals that start off white in color and end in a white towards the tips."