A Neural Algorithm of Artistic Style
Image Style Transfer Using Convolutional Neural Networks, by Gatys et al.
unsupervised (optimization-based; no paired training data)
input: content image, target style image
loss: content loss + style loss
The relative weighting of the two loss terms trades off content preservation against stylization.
The result is style- and image-specific, i.e. each output image requires its own optimization run. Synthesis speed depends heavily on image resolution: a 512x512 image takes about an hour on an Nvidia K40 GPU.
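A minimal runnable sketch of this per-image optimization loop: the pixels of the output image are the variables being optimized under content loss + style loss. Real implementations extract VGG features; here the "feature map" is just the raw pixel tensor (an identity extractor), a simplification so the gradients stay analytic. All shapes, weights, and step sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gram(x):
    # x: (C, H, W) activations; returns the (C, C) Gram matrix
    c, h, w = x.shape
    f = x.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_transfer(content, style, w_style=1e-2, lr=0.1, steps=200):
    # Gradient descent directly on the output image's pixels,
    # initialized from the content image (one run per output image).
    x = content.copy()
    c, h, w = x.shape
    n = c * h * w
    g_style = gram(style)
    for _ in range(steps):
        # gradient of the content loss ||x - content||^2
        grad_content = 2.0 * (x - content)
        # gradient of the style loss ||G(x) - G(style)||_F^2
        f = x.reshape(c, h * w)
        diff = gram(x) - g_style
        grad_style = (4.0 / n) * (diff @ f).reshape(c, h, w)
        x = x - lr * (grad_content + w_style * grad_style)
    return x
```

The loop makes the cost structure obvious: every output image pays the full iteration count, which is why resolution dominates runtime.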
Perceptual Losses for Real-Time Style Transfer and Super-Resolution (Johnson et al.) answers the speed problem above: train a feed-forward, style-specific transformation network once, then stylize any image in a single forward pass. Training uses a perceptual loss, which compares two images via high-level representations from a fixed pretrained loss network, e.g. VGG (pretrained for classification).
Feature Reconstruction Loss: (squared, size-normalized) Euclidean distance between feature representations computed by the loss network
Style Reconstruction Loss: penalizes differences in style (colors, textures, common patterns) via the Frobenius norm of the difference between Gram matrices of loss-network features
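A sketch of this loss for a single layer's activations; the feature shapes and normalization are illustrative. The Gram matrix discards spatial layout, which is what makes this a style rather than content comparison.

```python
import numpy as np

def gram_matrix(feat):
    # feat: (C, H, W) activations from one loss-network layer
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)           # channel co-occurrence statistics

def style_reconstruction_loss(feat_hat, feat_style):
    # squared Frobenius norm of the difference of Gram matrices
    d = gram_matrix(feat_hat) - gram_matrix(feat_style)
    return np.sum(d ** 2)
```

Note that spatially rearranging a feature map leaves its Gram matrix (and so the loss) unchanged: style is captured as which features co-occur, not where.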
Content Target y_c = Input Image x
Total Variation Regularization: encourage spatial smoothness
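A sketch of a total variation term in its squared-difference form, one of the common variants (the exact form and weight are assumptions): it sums squared differences between neighboring pixels, so noisy images are penalized and smooth ones are not.

```python
import numpy as np

def tv_loss(img):
    # img: (C, H, W); sum of squared neighboring-pixel differences
    dh = img[:, 1:, :] - img[:, :-1, :]    # vertical neighbors
    dw = img[:, :, 1:] - img[:, :, :-1]    # horizontal neighbors
    return np.sum(dh ** 2) + np.sum(dw ** 2)
```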
The Gram matrix acts as a kernel over feature maps: it captures feature co-occurrence and discards spatial layout.
Many GAN-based models can also handle style transfer.