
Co-occurrence Based Texture Synthesis

  • Official PyTorch implementation of the paper Co-occurrence Based Texture Synthesis.

Abstract

As image generation techniques mature, there is a growing interest in explainable representations that are easy to understand and intuitive to manipulate. In this work, we turn to co-occurrence statistics, which have long been used for texture analysis, to learn a controllable texture synthesis model. We propose a fully convolutional generative adversarial network, conditioned locally on co-occurrence statistics, to generate arbitrarily large images while having local, interpretable control over the texture appearance. To encourage fidelity to the input condition, we introduce a novel differentiable co-occurrence loss that is integrated seamlessly into our framework in an end-to-end fashion. We demonstrate that our solution offers a stable, intuitive and interpretable latent representation for texture synthesis, which can be used to generate a smooth texture morph between different textures. We further show an interactive texture tool that allows a user to adjust local characteristics of the synthesized texture image using the co-occurrence values directly.
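
For intuition, here is a minimal sketch of how a differentiable co-occurrence loss can be built in PyTorch. This is not the repository's code: the function names, the soft-assignment scheme and the sigma temperature are our own assumptions. The idea is that pixels are softly assigned to color cluster centers, co-occurrences of adjacent assignments are accumulated, and the generated image is penalized for deviating from the exemplar's statistics.

import torch
import torch.nn.functional as F

# Illustrative sketch only -- the repository's actual loss lives in its
# source; names, the soft-assignment scheme and sigma are assumptions.
def soft_cooccurrence(img, centers, sigma=0.1):
    # img: (B, 3, H, W) in [0, 1]; centers: (K, 3) color cluster centers.
    B, _, H, W = img.shape
    K = centers.shape[0]
    pix = img.permute(0, 2, 3, 1).reshape(B, H * W, 1, 3)
    d2 = ((pix - centers.view(1, 1, K, 3)) ** 2).sum(-1)        # (B, HW, K)
    a = F.softmax(-d2 / (2 * sigma ** 2), dim=-1).view(B, H, W, K)
    # Accumulate soft co-occurrences of horizontally/vertically adjacent pixels.
    cooc = torch.einsum('bhwk,bhwl->bkl', a[:, :, :-1], a[:, :, 1:]) \
         + torch.einsum('bhwk,bhwl->bkl', a[:, :-1], a[:, 1:])
    return cooc / cooc.sum(dim=(1, 2), keepdim=True)

def cooc_loss(fake, real, centers):
    # L1 distance between normalized co-occurrence matrices; differentiable,
    # so it can be added to the GAN objective end-to-end.
    return F.l1_loss(soft_cooccurrence(fake, centers),
                     soft_cooccurrence(real, centers))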

Results

  • Fidelity and diversity
  • Interpolation: naive interpolation in RGB space vs. interpolation in the co-occurrence space
  • Large texture generation with control

Prerequisites

  • Python 3.7
  • PyTorch (conda version recommended, pytorch-gpu=1.3.1)
  • torchvision, numpy, scipy, matplotlib, tqdm, pillow, scikit-learn

To install all the requirements via conda, run the following command:

conda create -n cooc_texture python=3.7 pytorch-gpu=1.3.1 torchvision numpy scipy matplotlib tqdm pillow=6.1 scikit-learn

and then activate the environment:

conda activate cooc_texture

Training

Run the following command for training:

python train_model.py --texturePath=samples/marbled_0095.jpg --imageSize=128 --kVal=4 

Training takes around 3 hours on an NVIDIA 1080 Ti GPU, for an image crop size of 128x128 and kVal=4.

  • Here, kVal is the number of cluster centers used for the co-occurrence calculation (see the sketch below).
  • You can train on any image you want; just pass its path in the format shown above.
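
To make kVal concrete, here is a rough standalone sketch (our own illustration, not code from this repository, and exemplar_cooccurrence is a hypothetical name) of a hard-assignment version of the statistics: cluster the exemplar's pixel colors into kVal centers with k-means, then count how often each pair of clusters appears at adjacent pixels.

import numpy as np
from PIL import Image
from sklearn.cluster import KMeans

# Illustrative only; the repository computes its co-occurrences internally.
def exemplar_cooccurrence(path, k=4):
    rgb = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64) / 255.0
    h, w, _ = rgb.shape
    km = KMeans(n_clusters=k, n_init=10).fit(rgb.reshape(-1, 3))
    labels = km.labels_.reshape(h, w)
    cooc = np.zeros((k, k))
    # Count cluster pairs at horizontally and vertically adjacent pixels.
    np.add.at(cooc, (labels[:, :-1], labels[:, 1:]), 1)
    np.add.at(cooc, (labels[:-1, :], labels[1:, :]), 1)
    return cooc / cooc.sum(), km.cluster_centers_

cooc, centers = exemplar_cooccurrence('samples/marbled_0095.jpg', k=4)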

Evaluating

python evaluate_model.py --texturePath=samples/marbled_0095.jpg --modelPath=results/marbled_0095_2020-03-02_23-02-09/ --checkpointNumber=120 --kVal=4 --outputFolder=eval_results/marbled_0095/ --evalFunc=f_d  

Replace modelPath with the actual path of your results directory; the one shown above is only an example.

  • modelPath is the folder containing the checkpoint models.
  • checkpointNumber is the epoch number of the checkpoint to use.
  • outputFolder is the location where the evaluated results are saved.
  • evalFunc selects between:
    • f_d = Fidelity and Diversity: saves an image showing fidelity with respect to the input crop and diversity with respect to random noise vector seeds.
    • interp = Interpolation: saves an image interpolating between two crops in the co-occurrence space and generating the result (see the sketch after this list).
    • write_tex = Large Image Generation with Control: saves a high-resolution image generated from the input co-occurrence, with the text '2020' written on it.
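
As an aside on how interp works conceptually, the sketch below is hypothetical: the real generator interface is defined in this repository's code, and netG(z, cooc) is only an assumed signature. It linearly interpolates between two co-occurrence conditions and generates one frame per step.

import torch

# Hypothetical sketch: netG is a trained generator, cooc_a/cooc_b are the
# co-occurrence conditions of two crops, z is a fixed noise input. The
# actual call signature in this repository may differ.
def interpolate_cooc(netG, cooc_a, cooc_b, z, steps=8):
    frames = []
    with torch.no_grad():
        for t in torch.linspace(0.0, 1.0, steps):
            cooc = (1 - t) * cooc_a + t * cooc_b   # lerp in co-occurrence space
            frames.append(netG(z, cooc))
    return frames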

Check config.py for all options.

Acknowledgements

🎉 Thanks to Zalando SE for providing the code that served as a base for this project.

👍 Thanks to the Describable Textures Dataset (DTD) for providing a highly diverse dataset of texture images.

License

This project is licensed under the terms of the MIT license (see LICENSE for details).
