Deep Style Transfer
TensorFlow implementation of the fast feed-forward neural style transfer network by Johnson et al.
Training takes a few hours on an AWS P2 instance, and image generation takes a few seconds on a MacBook Pro. The training images come from the MS COCO validation set, and the VGG19 network is used to compute the texture and style losses.
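The style loss in Johnson et al.'s formulation compares Gram matrices of VGG feature maps between the generated image and the style source. A minimal numpy sketch of that idea (function names are illustrative, not taken from this repo):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of an (height, width, channels) feature map,
    normalized by the number of spatial positions."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def style_loss(gen_features, style_features):
    """Mean squared difference between the two Gram matrices."""
    g_gen = gram_matrix(gen_features)
    g_style = gram_matrix(style_features)
    return np.mean((g_gen - g_style) ** 2)

# Identical feature maps give zero style loss.
x = np.random.rand(8, 8, 3)
print(style_loss(x, x))  # 0.0
```

In the real network these feature maps come from several VGG19 layers, and the per-layer losses are summed with weights.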
If you don't want to install TensorFlow and train your own models, try the Android app, which includes other styles trained with this code.
- TensorFlow 0.12
- numpy 1.11

Both can be installed with pip.
Instructions for Processing
- Go to the project root (a pretrained model is included in the /data directory)
$ python style.py --input=path_to_image.jpg --output=your_output_file.jpg
Note: The style.py script defaults to a pretrained model based on starry.jpg. To try a different style, train your own model as described in the next section.
Instructions for Training
- Download the VGG19 weights as vgg19.npy
- Download the MS COCO validation dataset. The training script looks for images in a directory named input_images by default; you can point it at any other directory with a command-line argument.
$ python train.py --data_dir=/path/to/ms_coco --texture=path/to/source_image.jpg
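The training objective combines a content (feature reconstruction) loss with the style loss, typically plus a total variation regularizer for smooth outputs. A hedged numpy sketch of how these terms are combined (the weight values and function names here are illustrative, not the repo's actual flags):

```python
import numpy as np

def content_loss(gen_features, content_features):
    """Feature reconstruction loss: MSE between feature maps."""
    return np.mean((gen_features - content_features) ** 2)

def total_variation(img):
    """Total variation regularizer: penalizes differences between
    neighboring pixels, encouraging smooth generated images."""
    dh = np.sum((img[1:, :] - img[:-1, :]) ** 2)
    dw = np.sum((img[:, 1:] - img[:, :-1]) ** 2)
    return dh + dw

def total_loss(c_loss, s_loss, tv,
               content_w=1.0, style_w=5.0, tv_w=1e-4):
    # Weights are illustrative defaults, not values from train.py.
    return content_w * c_loss + style_w * s_loss + tv_w * tv

# A constant image has zero total variation.
print(total_variation(np.ones((4, 4))))  # 0.0
```

During training this scalar is minimized with respect to the feed-forward network's weights over batches of MS COCO images.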
Instructions for Exporting
During training, your model is saved as a series of TensorFlow checkpoints; the default directory for these is named 'model'. We export the checkpoints to a .pb file for easy use, via the freeze_graphs.py file copied from the TensorFlow source repo. To run the export script:
$ python export.py --input_dir=path_to/training_directory