DeepGIF

Video style transfer using convolutional networks, with tracking and masks for GIFs!

Implemented for Computer Vision 2016 (Princeton University) by Richard Du, Yash Patel, and Jason Shi. Feel free to make use of the files provided here, or contact us if anything is not working properly. Note that the required pre-trained models are described in more detail in their respective folders, i.e. 'processing/styletransfer' and 'processing/segmentation'.

The algorithm and the necessary background information are fully laid out in the accompanying paper: "DeepGIF" (DeepGIF.pdf).

Requirements

  • Python 2.7 (a quick import check is sketched after this list)
    • TensorFlow 0.12.0
    • Keras
    • Chainer
    • Caffe
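
To confirm that the pinned frameworks are importable before running anything, a quick sanity check like the following can help (this snippet is optional and not part of the repository; Caffe is omitted because its import path depends on the local build):

    import tensorflow as tf   # expected 0.12.0 per the list above
    import keras
    import chainer

    print("TensorFlow:", tf.__version__)
    print("Keras:", keras.__version__)
    print("Chainer:", chainer.__version__)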

How to run

$ pip install -r requirements.txt
$ gunicorn main:app --log-file=-
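
gunicorn loads the WSGI object from a module:object reference, here main:app. The Flask application in this repository lives in app.py, so if gunicorn cannot find a main module, app:app is the likely equivalent (an assumption based on the file layout, not verified). Below is a minimal, hypothetical sketch of such an entry point, shown only to illustrate what gunicorn binds to; it is not the repository's actual app.py:

    from flask import Flask

    app = Flask(__name__)   # the WSGI object referenced as <module>:app

    @app.route("/")
    def index():
        return "DeepGIF is running"

    if __name__ == "__main__":
        # Local development only; gunicorn serves the app in production.
        app.run(debug=True)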

Deploy to Heroku

$ heroku apps:create [NAME]
$ heroku buildpacks:add heroku/nodejs
$ heroku buildpacks:add heroku/python
$ git push heroku master
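
Heroku starts the web dyno from the Procfile shipped in the repository. Its exact contents are not reproduced here, but a typical Procfile matching the run command above would be (assumed, not verified):

    web: gunicorn main:app --log-file=-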

Alternatively, use the Heroku Deploy button.
