Paper: BEGAN: Boundary Equilibrium Generative Adversarial Networks.

Requirements:
- Python 2.7
- Pillow
- prettytensor
- scipy
- progressbar
- TensorFlow 0.2.0 (or higher)
First, download the CelebA dataset.
Second, extract the dataset and crop each image to 64x64.
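A minimal preprocessing sketch using Pillow. The center-crop-then-resize strategy is one common choice, not necessarily what was used here, and the source/destination paths are placeholders:

```python
import os
from PIL import Image

def center_crop_resize(img, size=64):
    """Center-crop an image to a square, then resize it to size x size."""
    w, h = img.size
    s = min(w, h)
    left, top = (w - s) // 2, (h - s) // 2
    return img.crop((left, top, left + s, top + s)).resize((size, size), Image.BILINEAR)

def preprocess(src_dir, dst_dir, size=64):
    """Crop every .jpg in src_dir and write the 64x64 result to dst_dir."""
    if not os.path.isdir(dst_dir):
        os.makedirs(dst_dir)
    for name in os.listdir(src_dir):
        if name.lower().endswith('.jpg'):
            img = Image.open(os.path.join(src_dir, name))
            center_crop_resize(img, size).save(os.path.join(dst_dir, name))
```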
Third, the dataset folder should look like:
dataset_folder
|-- xx.jpg
`-- aa.jpg
Fourth, train the model:
$ python main.py --working_directory='A_PATH_TO_PLACE_YOUR_MODEL' --data_directory='A_PATH_TO_YOUR_DATASET'
e.g.:
$ python main.py --working_directory='./celebA_train' --data_directory='../data/dataset'
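For illustration, the two flags could be parsed like this. This is a hypothetical sketch; the actual main.py may handle flags differently (e.g. via tf.app.flags):

```python
import argparse

# Hypothetical flag handling mirroring the commands above.
parser = argparse.ArgumentParser(description='Train BEGAN')
parser.add_argument('--working_directory', required=True,
                    help='where checkpoints and generated images are written')
parser.add_argument('--data_directory', default=None,
                    help='folder of 64x64 training images; optional when resuming')

args = parser.parse_args(['--working_directory', './celebA_train',
                          '--data_directory', '../data/dataset'])
```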
Fifth, to view the results:
A sample image generated by the model is saved every 100 training batches,
and it will be placed at: A_PATH_TO_PLACE_YOUR_MODEL/imgs/
Sixth, the model is also saved every 100 batches, so you can resume training:
$ python main.py --working_directory='A_PATH_TO_PLACE_YOUR_MODEL'
No data_directory is needed this time.
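Resuming only requires locating the latest checkpoint inside the working directory. A stdlib sketch of that lookup (the model.ckpt naming is an assumption; in practice TensorFlow's tf.train.latest_checkpoint does this by reading the 'checkpoint' state file rather than file mtimes):

```python
import glob
import os

def latest_checkpoint(working_directory):
    """Return the most recently written model.ckpt* file, or None if absent.

    Hypothetical helper: the real trainer may use tf.train.latest_checkpoint.
    """
    paths = glob.glob(os.path.join(working_directory, 'model.ckpt*'))
    if not paths:
        return None
    return max(paths, key=os.path.getmtime)
```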
Agg.png shows images produced by the generator, and Agg_d.png shows images produced by the autoencoder (the BEGAN discriminator).
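The sample sheets are presumably grids of individual 64x64 samples. A hedged sketch of how such a grid can be tiled with NumPy; the 8x8 layout and the NHWC array shape are assumptions:

```python
import numpy as np

def tile_images(batch, rows, cols):
    """Tile a batch of images (N, H, W, C) into one (rows*H, cols*W, C) grid."""
    n, h, w, c = batch.shape
    assert n == rows * cols
    grid = batch.reshape(rows, cols, h, w, c)
    grid = grid.transpose(0, 2, 1, 3, 4)  # -> (rows, H, cols, W, C)
    return grid.reshape(rows * h, cols * w, c)
```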
Due to limited training time, the model is still far from convergence. In my experiments, BEGAN converges quickly on the CelebA dataset, but it performs poorly on some really wild datasets. My custom dataset is a set of screenshots taken from over 156 cartoon videos; BEGAN shows no sign of converging on it, whereas a DCGAN with a minibatch discriminator does converge on that dataset.
I will update the result soon.
Yuletian