pytorch-AdaIN

This is an unofficial PyTorch implementation of the paper Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization [Huang+, ICCV2017]. I am very grateful to the authors for their original Torch implementation, which was very useful.
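For reference, the core AdaIN operation from the paper aligns the channel-wise mean and standard deviation of the content features to those of the style features. A minimal NumPy sketch (the function name and `eps` value are illustrative, not taken from this repository, which implements the equivalent in PyTorch):

```python
import numpy as np

def adain(content_feat, style_feat, eps=1e-5):
    """AdaIN: match per-channel mean/std of content features to the style features.

    Both inputs have shape (C, H, W); statistics are taken over the spatial axes H and W.
    """
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    s_mean = style_feat.mean(axis=(1, 2), keepdims=True)
    s_std = style_feat.std(axis=(1, 2), keepdims=True) + eps
    # Normalize content statistics, then re-scale/shift with style statistics.
    return s_std * (content_feat - c_mean) / c_std + s_mean
```

In the paper, this transform is applied to VGG-19 features of the content and style images, and a trained decoder maps the result back to an image.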

Results

Requirements

  • Python 3.5+
  • PyTorch 0.4+
  • TorchVision
  • Pillow

(optional, for training)

  • tqdm
  • TensorboardX

Usage

Download models

This command will download a pre-trained decoder as well as a modified VGG-19 network.

bash models/download_models.sh

Convert models

This command will convert the Torch models to PyTorch models.

python torch_to_pytorch.py --model models/vgg_normalised.t7
python torch_to_pytorch.py --model models/decoder.t7

Test

Use --content and --style to specify the paths to the content image and the style image, respectively.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/cornell.jpg --style input/style/woman_with_hat_matisse.jpg

You can also run the code on directories of content and style images using --content_dir and --style_dir. It will save every possible combination of content and styles to the output directory.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content_dir input/content --style_dir input/style
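Directory mode conceptually enumerates every (content, style) pair; a small illustration (the helper below is hypothetical, not part of the repository):

```python
from itertools import product

def style_pairs(content_images, style_images):
    # Every content image is stylized with every style image,
    # so a run produces len(content_images) * len(style_images) outputs.
    return list(product(content_images, style_images))
```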

Here is an example of mixing four styles by specifying the --style and --style_interpolation_weights options.

CUDA_VISIBLE_DEVICES=<gpu_id> python test.py --content input/content/avril.jpg --style input/style/picasso_self_portrait.jpg,input/style/impronte_d_artista.jpg,input/style/trial.jpg,input/style/antimonocromatismo.jpg --style_interpolation_weights 1,1,1,1 --content_size 512 --style_size 512 --crop
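Per the paper, style interpolation normalizes the given weights and takes a weighted sum of the per-style AdaIN features before decoding. A hedged NumPy sketch (function names are illustrative; the repository implements this in PyTorch):

```python
import numpy as np

def interpolate_styles(content_feat, style_feats, weights, eps=1e-5):
    # Normalize the weights, then take a weighted sum of per-style AdaIN features.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    c_mean = content_feat.mean(axis=(1, 2), keepdims=True)
    c_std = content_feat.std(axis=(1, 2), keepdims=True) + eps
    out = np.zeros_like(content_feat)
    for sf, wi in zip(style_feats, w):
        s_mean = sf.mean(axis=(1, 2), keepdims=True)
        s_std = sf.std(axis=(1, 2), keepdims=True) + eps
        out += wi * (s_std * (content_feat - c_mean) / c_std + s_mean)
    return out
```

Because the weights are normalized, `--style_interpolation_weights 1,1,1,1` and `2,2,2,2` produce the same blend.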

Some other options:

  • --content_size: New (minimum) size for the content image. The original size is kept if set to 0.
  • --style_size: New (minimum) size for the style image. The original size is kept if set to 0.
  • --alpha: Adjusts the degree of stylization. It should be a value between 0.0 and 1.0 (default: 1.0).
  • --preserve_color: Preserve the color of the content image.
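In the paper, alpha linearly blends the AdaIN-transformed features with the original content features before decoding, i.e. t = alpha * AdaIN(c, s) + (1 - alpha) * c. A sketch (the function name is illustrative):

```python
def blend(content_feat, adain_feat, alpha=1.0):
    # alpha=1.0 gives full stylization; alpha=0.0 reproduces the content features.
    return alpha * adain_feat + (1.0 - alpha) * content_feat
```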

Train

Use --content_dir and --style_dir to specify the directories of content and style images, respectively.

CUDA_VISIBLE_DEVICES=<gpu_id> python train.py --content_dir <content_dir> --style_dir <style_dir>

For more details and parameters, please refer to the --help option.

The model trained by this code is shared here

References

  • [1]: X. Huang and S. Belongie. "Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.", in ICCV, 2017.
  • [2]: Original implementation in Torch