In this notebook, I recreate in PyTorch the style transfer method outlined in the paper *Image Style Transfer Using Convolutional Neural Networks* by Gatys et al.
The paper uses the VGG19 model to create a new target image that combines the desired content and style components:
- objects and their arrangement are similar to those of the content image
- style, colors, and textures are similar to those of the style image
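In the Gatys et al. formulation, the content component is measured by comparing raw VGG19 feature maps, while the style component is measured by comparing Gram matrices of those feature maps. A minimal sketch of the two loss terms (function names are illustrative, not taken from the notebook):

```python
import torch
import torch.nn.functional as F

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # features: (batch, channels, height, width) activation map from VGG19
    b, c, h, w = features.shape
    flat = features.view(b * c, h * w)
    # Channel-by-channel correlations, normalized by the tensor size
    return (flat @ flat.t()) / (b * c * h * w)

def content_loss(target_feat: torch.Tensor, content_feat: torch.Tensor) -> torch.Tensor:
    # Mean squared error between the raw feature maps
    return F.mse_loss(target_feat, content_feat)

def style_loss(target_feat: torch.Tensor, style_feat: torch.Tensor) -> torch.Tensor:
    # Mean squared error between the Gram matrices
    return F.mse_loss(gram_matrix(target_feat), gram_matrix(style_feat))
```

In the paper these terms are computed at several VGG19 layers and summed with per-layer weights; the sketch above shows a single layer for clarity.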
Here are the content and style images of the first example,
and the target produced by applying the model to the two images,
achieving a loss of 305766.5625, better than the referenced example from CV Foundation.
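"Applying the model over the two images" means optimizing the pixels of a target image (initialized from the content image) so that the combined content and style loss drops. A runnable sketch of that loop, using a small stand-in extractor so it runs without downloading VGG19 weights (in the notebook, selected VGG19 conv layers play this role; the style weight is illustrative):

```python
import torch
import torch.nn.functional as F

# Stand-in feature extractor (assumption): the real method uses frozen
# VGG19 layers here instead of this single conv layer.
extractor = torch.nn.Conv2d(3, 8, kernel_size=3, padding=1)
for p in extractor.parameters():
    p.requires_grad_(False)

content_img = torch.rand(1, 3, 32, 32)
style_img = torch.rand(1, 3, 32, 32)

def gram(f: torch.Tensor) -> torch.Tensor:
    b, c, h, w = f.shape
    flat = f.view(b * c, h * w)
    return (flat @ flat.t()) / (b * c * h * w)

# The target starts as a copy of the content image and is the only
# tensor being optimized; the network weights stay frozen.
target = content_img.clone().requires_grad_(True)
optimizer = torch.optim.Adam([target], lr=0.01)

for step in range(50):
    optimizer.zero_grad()
    t_feat = extractor(target)
    c_loss = F.mse_loss(t_feat, extractor(content_img))
    s_loss = F.mse_loss(gram(t_feat), gram(extractor(style_img)))
    loss = c_loss + 1e2 * s_loss  # style weight chosen for illustration
    loss.backward()
    optimizer.step()
```

The reported losses below are the final values of this combined objective for each example.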
Here are the content and style images of the second example,
and the target produced by applying the model to the two images,
achieving a loss of 289863.03125, better than the referenced example from GitHub.
Here are the content and style images of the third example,
and the target produced by applying the model to the two images,
achieving a loss of 13028.4619140625, better than the referenced example from PyTorch.
The code benefits from outstanding prior work and implementations, including: