
fast-style-transfer_python-spout-touchdesigner

This repository is a TensorFlow implementation of fast style transfer in Python, with output sent into TouchDesigner. Spout is required for the two programs to talk to each other. Commands are written for Windows 10; Linux/Mac commands will vary slightly. This repo is designed to be a fully packaged tutorial for getting webcam style-transfer video running in TouchDesigner.


Dependencies

Set up for Windows 10:

  • Install Python 3.5.1 here
  • Visual Studio Community 2015 here
  • CUDA 8.0 (make sure to uninstall any other versions of CUDA) here
    • Run the exe file and follow the prompts
  • CUDA 8.0 patch here
  • cuDNN 5.1 here
      Extract, then paste/drop the files manually into the respective folders under Program Files/CUDA/8.0 (e.g. bin, include)
  • Restart computer
  • Double-check that the CUDA_PATH environment variable exists (type "environment variables" into the Windows search bar)
  • TensorFlow 1.2.0
      python -m pip install tensorflow-gpu==1.2.0
      (you may have to upgrade pip first before this will run; all of the pip installs in this list are also collected into a single command after the list)
  • OpenCV 3.3.0.9
      python -m pip install opencv-python==3.3.0.9
  • pygame
    • python -m pip install pygame
  • PyOpenGL
    • python -m pip install PyOpenGL==3.1.0
  • scipy
    • python -m pip install scipy==1.0.0
  • numpy
    • python -m pip install numpy==1.13.1
  • pillow (PIL)
    • python -m pip install pillow
  • imageio
    • python -m pip install imageio (I used 2.5.0)
  • matplotlib
    • python -m pip install matplotlib (I used 3.0.3)
  • scikit-image
    • python -m pip install scikit-image (I used 0.15.0)
  • Download the pre-trained VGG-19 ImageNet model from here
    • Place it in the same directory, in a folder named 'pre_trained_model'
  • Download the MSCOCO train2014 dataset (~12 GB) from here
    • Make sure the full folder downloads (a .crdownload file means the download is incomplete)
      I'd recommend keeping the downloads tab open; check it periodically and click resume if the download has been interrupted. Do NOT close the browser.
      When complete, make sure the folder is named 'train2014'
  • Download and install Spout here
  • (OPTIONAL) Pre-trained style-model from hwalsuklee here
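
For convenience, the pip installs above can be collected into a single command; this one-liner just restates the versions listed (it is not from the repo):

python -m pip install tensorflow-gpu==1.2.0 opencv-python==3.3.0.9 pygame PyOpenGL==3.1.0 scipy==1.0.0 numpy==1.13.1 pillow imageio==2.5.0 matplotlib==3.0.3 scikit-image==0.15.0

Before moving on, it's worth confirming that TensorFlow imports and can see the GPU. These are standard TF 1.x checks, not scripts from this repo; if CUDA/cuDNN are set up correctly, the device list should include a GPU entry:

python -c "import tensorflow as tf; print(tf.__version__)"
python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"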

Example Usage:

Style-transfer

To run from the terminal, first set the directory to the project location, then run the scripts below with their required parameters:

cd C:\Users\_______\Documents\fast-style-transfer_python-spout-touchdesigner

To test style transfer:

Before you can test style transfer, you either have to download a pre-trained style model (above) or train a model yourself (next section).

python run_test.py --content content/imageyouwantstylized.jpg --style_model fast_neural_style/rain_princess.ckpt --output stylizedimage.jpg

    Required parameters (defaults can be found at the top of run_test.py):
    --content: Filename of the content image.
    --style_model: Filename of the style model.
    --output: Filename of the output image.
    Optional parameter:
    --max_size: Maximum width or height of the input images. If None, the image size is not changed. Default: None
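
For example, to stylize a photo at a capped resolution using the optional flag (the content and output filenames here are placeholders):

python run_test.py --content content/my_photo.jpg --style_model fast_neural_style/rain_princess.ckpt --output my_photo_stylized.jpg --max_size 1000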

To train models:

python run_train.py --style style/stylesourceimage.jpg --output modelname --trainDB train2014 --vgg_model pre_trained_model
    Required parameters (defaults can be found at the top of run_train.py):
    --style: Filename of the style image.
    --output: File path for the trained model. The training log is also saved here.
    --trainDB: Relative or absolute directory path to the MSCOCO DB. Default: train2014
    --vgg_model: Relative or absolute directory path to the pre-trained VGG-19 model. Default: pre_trained_model
    Optional parameters:
    --content_weight: Weight of the content loss. Default: 7.5e0
    --style_weight: Weight of the style loss. Default: 5e2
    --tv_weight: Weight of the total-variation loss. Default: 2e2
    --content_layers: Space-separated VGG-19 layer names used for content loss computation. Default: relu4_2
    --style_layers: Space-separated VGG-19 layer names used for style loss computation. Default: relu1_1 relu2_1 relu3_1 relu4_1 relu5_1
    --content_layer_weights: Space-separated weights of each content layer in the content loss. Default: 1.0
    --style_layer_weights: Space-separated weights of each style layer in the style loss. Default: 0.2 0.2 0.2 0.2 0.2
    --max_size: Maximum width or height of the input images. Default: None
    --num_epochs: The number of epochs to run. Default: 2
    --batch_size: Batch size. Default: 4
    --learn_rate: Learning rate for the Adam optimizer. Default: 1e-3
    --checkpoint_every: Save frequency for checkpoints. Default: 1000
    --test: Filename of the content image used for testing during training. Default: None
    --max_size: Maximum width or height of the test image. If None, the image size is not changed. Default: None
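
As a fuller example, the command below spells out a few of the optional flags; the style and test filenames are placeholders, and the numeric values are just the documented defaults made explicit:

python run_train.py --style style/wave.jpg --output wave_model --trainDB train2014 --vgg_model pre_trained_model --num_epochs 2 --batch_size 4 --checkpoint_every 1000 --test content/my_photo.jpg

With --test set, that image is stylized during training, which makes it easy to see whether the model is converging before the final checkpoint.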

Spout Usage

To test Spout:

Start with python spout_hello.py; you should get 'Hello, from c++ dll!'

Next try python spout_receiver_example.py and python spout_sender_example.py
When these are working, the receiver should open a black window and the sender should show a rotating cube.

To run style transfer from webcam video and view it in TouchDesigner:

Open the TouchDesigner file (spout.3.toe)
Make sure videodevin1 is active (toggle = on), and syphonspoutin1 is set to the correct sender with its toggle on
Make sure the sender name in spout_NST_receiver_sender.py matches the one in Touch
TouchDesigner should be open when you run: python spout_NST_receiver_sender.py --style_model fast_neural_style/rain_princess.ckpt
If it's working, you should see the stylized video feed from the style model you've selected
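
Conceptually, spout_NST_receiver_sender.py runs a simple per-frame loop: receive a frame, push it through the style network, send the result on. The sketch below shows only that loop's shape; it uses an OpenCV webcam capture and preview window in place of the repo's Spout receive/send calls, and stylize() is a hypothetical placeholder for the TensorFlow forward pass, not a function from this repo:

import cv2

def stylize(frame):
    # Placeholder: the real script feeds the frame through the transform
    # network restored from the .ckpt file and fetches the stylized output.
    return frame

cap = cv2.VideoCapture(0)              # webcam, standing in for a Spout receiver
while True:
    ok, frame = cap.read()
    if not ok:
        break
    out = stylize(frame)               # one network inference per frame
    cv2.imshow('stylized', out)        # preview, standing in for a Spout sender
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()

Because inference runs once per frame, the frame rate you see in TouchDesigner is bounded by the network's forward-pass time on your GPU.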


Acknowledgements

Some deprecated Python modules were replaced, but for the most part the fast style transfer is based on this repo: https://github.com/hwalsuklee/tensorflow-fast-style-transfer
Spout for Python was based on this repo: https://github.com/spiraltechnica/Spout-for-Python
Some of the setup/dependencies were from Grant Watson's fast style transfer repository here: https://github.com/ghwatson/faststyle
