DualAST: Dual Style-Learning Networks for Artistic Style Transfer

This is the official TensorFlow implementation of our paper: "DualAST: Dual Style-Learning Networks for Artistic Style Transfer" (CVPR 2021)

This project provides a novel style transfer framework, termed DualAST, which addresses artistic style transfer from a new perspective. Unlike existing style transfer methods, which learn styles from either a single style example or a collection of artworks, DualAST simultaneously learns both the holistic artist-style (from a collection of an artist's artworks) and the specific artwork-style (from a single style image): the first sets the tone (i.e., the overall feeling) of the stylized image, while the second determines its details, such as color and texture. Moreover, we introduce a Style-Control Block (SCB) to adjust the styles of generated images with a set of learnable style-control factors.
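
For intuition, the sketch below shows one way a feature-modulation block driven by learnable style-control factors could be written in TensorFlow 1.x (matching the TF 1.14 requirement below). It is illustrative only: the function name style_control_block, the argument num_factors, and the scale/shift mapping are assumptions, not the actual SCB code in this repository.

# Illustrative sketch only: a feature-modulation block with learnable
# style-control factors, written for TensorFlow 1.x. The real SCB in this
# repository may be implemented differently.
import tensorflow as tf

def style_control_block(features, num_factors, scope="scb"):
    """Modulate NHWC feature maps with a set of learnable style-control factors."""
    with tf.variable_scope(scope):
        channels = features.get_shape().as_list()[-1]
        # Learnable style-control factors (one scalar per "style knob").
        factors = tf.get_variable("style_factors", shape=[1, num_factors],
                                  initializer=tf.ones_initializer())
        # Map the factors to per-channel scales and shifts.
        gamma = tf.layers.dense(factors, channels, name="gamma")
        beta = tf.layers.dense(factors, channels, name="beta")
        gamma = tf.reshape(gamma, [1, 1, 1, channels])
        beta = tf.reshape(beta, [1, 1, 1, channels])
        return features * gamma + beta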

Requirements

We recommend the following configurations:

  • python 3.7
  • tensorflow 1.14.0
  • CUDA 10.1
  • PIL, numpy, scipy
  • tqdm
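
A quick, optional sanity check of the environment (illustrative only; it just prints the installed TensorFlow version and whether a GPU is visible):

# Environment sanity check (not part of the repository).
import tensorflow as tf
import PIL, numpy, scipy, tqdm  # confirms the remaining packages import cleanly

print("TensorFlow:", tf.__version__)                 # expected: 1.14.0
print("GPU available:", tf.test.is_gpu_available())  # expects CUDA 10.1 set up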

Model Training

  • Download the content dataset: Places365 (105 GB).
  • Download the style dataset: Artworks of Different Artists. Thanks to AST for providing this dataset.
  • Download the pre-trained VGG-19 model and record its path in vgg19.py.
  • Set your available GPU ID in line 185 of main.py (see the sketch after the training command below).
  • Run the following command:
python main.py --model_name van-gogh \
               --phase train \
               --image_size 768 \
               --ptad /disk1/chb/data/vincent-van-gogh_road-with-cypresses-1890 \
               --ptcd /disk1/chb/data/data_large
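
The GPU-selection step usually boils down to a single environment-variable assignment; the snippet below shows the common pattern, but verify it against line 185 of main.py, since the exact code there may differ.

# Typical GPU-selection pattern (illustrative; check line 185 of main.py).
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # make only GPU 0 visible to TensorFlow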

Model Testing

  • Put your trained model into the ./models/ folder.
  • Put some sample photographs into the ./images/content/ folder.
  • Put some reference images into the ./images/reference/ folder.
  • Set your available GPU ID in line 185 of main.py (as in the training step above).
  • Run the following command:
python main.py --model_name=van-gogh \
               --phase=inference \
               --image_size=1280 \
               --ii_dir images/content/ \
               --reference images/reference/van-gogh/1.jpg \
               --save_dir=models/van-gogh/inference
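
To stylize the same content folder with several reference images, a small driver script like the hypothetical one below can invoke main.py once per reference; the flags are taken from the command above, while the loop, paths, and per-reference output folders are assumptions you may need to adapt.

# Hypothetical batch-inference driver (sketch only; adjust paths as needed).
import glob
import os
import subprocess

for ref in glob.glob("images/reference/van-gogh/*.jpg"):
    save_dir = os.path.join("models/van-gogh/inference",
                            os.path.splitext(os.path.basename(ref))[0])
    subprocess.run(["python", "main.py",
                    "--model_name=van-gogh",
                    "--phase=inference",
                    "--image_size=1280",
                    "--ii_dir", "images/content/",
                    "--reference", ref,
                    "--save_dir=" + save_dir],
                   check=True)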

We provide some pre-trained models at this link.
We refer the reader to AST for computing the Deception Rate.

Comparison Results

We compare our DualAST with Gatys et al., AdaIN, WCT, Avatar-Net, SANet, AST, and Svoboda et al.

Acknowledgments

The code in this repository is based on AST. We thank the authors for both their paper and code.
