MultiTextureSynthesis

Torch implementation of our CVPR17 paper on multi-texture synthesis.

Prerequisites

  • Linux
  • NVIDIA GPU + CUDA CuDNN
  • Torch
  • Pretrained VGG model (download and put it under data/pretrained/)

Task 1: Diverse synthesis

We first address diverse synthesis for a single texture. Given one texture example, the generator should be powerful enough to combine texture elements in various ways; a minimal sketch of the diversity term follows the commands below.

  • Training
th single_texture_diverse_synthesis_train.lua -texture YourTextureExample.jpg -image_size 256 -diversity_weight -1.0
  • Testing
th single_texture_diverse_synthesis_test.lua 
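
The diversity term rewards outputs that differ from one another when the same texture is synthesized from different input noise. Below is a minimal, hypothetical Torch sketch of such a pairwise penalty; the function name and the N x D feature-tensor layout are illustrative assumptions, not the exact loss in the training script.

require 'torch'

-- feat: N x D tensor of deep features for N samples synthesized
-- from the same texture under different input noise (assumed layout)
local function meanPairwiseDistance(feat)
  local n = feat:size(1)
  local total, count = 0, 0
  for i = 1, n do
    for j = i + 1, n do
      -- mean absolute difference between the features of samples i and j
      total = total + torch.mean(torch.abs(feat[i] - feat[j]))
      count = count + 1
    end
  end
  return total / count
end

Scaled by the negative -diversity_weight above, this term lowers the total loss as outputs move apart, discouraging the generator from collapsing to a single result.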

After obtaining all diverse results, run gif.m (in data/test_out/) in Matlab to convert them into an .avi video for viewing.

To plot the stored training loss (saved as a .json file), run:

python plot_loss.py

Task 2: Multi-texture synthesis

  • Training

Collect your texture image set (e.g., data/texture60/) before training.

th multi_texture_synthesis_train.lua
  • Testing

We release a 60-texture synthesis model trained on the provided 60-texture set (ind_texture = 1, 2, ..., 60) in the data/texture60/ folder.

th multi_texture_synthesis_test.lua -ind_texture 24
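
For example, to synthesize the whole provided set in one pass, a plain shell loop over the same command works:

for i in $(seq 1 60); do th multi_texture_synthesis_test.lua -ind_texture $i; done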

Task 3: Multi-style transfer

In synthesis, each bit in the selection unit represents one texture example. In transfer, we instead employ a set of selection maps, where each map represents one style image and is initialized as a noise map (e.g., sampled from a uniform distribution).
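
As a rough illustration of the two kinds of selection inputs (the tensor names and sizes below are assumptions, not the exact interface of the training scripts):

require 'torch'

local K, H, W = 60, 32, 32  -- number of textures/styles and map resolution (assumed)

-- Synthesis: a one-hot selection unit that picks texture `ind`
local function onehotSelection(ind)
  local sel = torch.zeros(K)
  sel[ind] = 1
  return sel
end

-- Transfer: a stack of K selection maps; the chosen style's map is
-- filled with uniform noise, the rest stay zero
local function noiseSelectionMaps(ind)
  local maps = torch.zeros(K, H, W)
  maps[ind]:uniform()  -- uniform [0,1) noise for the selected style
  return maps
end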

Collect your style image set (e.g., data/style1000/) before training. For a large number of style images (e.g., 1000), it is suggested to convert all images (e.g., .jpg) to an HDF5 file for fast reading.

th convertHDF5.lua -images_path YourImageSetPath -save_to XXX.hdf5 -resize_to 512
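
For reference, the resulting file can be read back with the torch-hdf5 package; the dataset key '/images' below is a guess, so check convertHDF5.lua for the actual name used when writing.

require 'hdf5'

local f = hdf5.open('XXX.hdf5', 'r')
local images = f:read('/images'):all()  -- load all images, e.g. an N x 3 x 512 x 512 tensor
f:close()
print(images:size())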
  • Training
th multi_style_transfer_train.lua -image_size 512
  • Testing

We release a 1000-style transfer model trained on this 1000-style set (ind_texture = 1, 2, ..., 1000).

th multi_style_transfer_test.lua 

Citation

@inproceedings{DTS-CVPR-2017,
    author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
    title = {Diversified Texture Synthesis with Feed-forward Networks},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
    year = {2017}
}

Acknowledgement
