himalayanZephyr/image_style_transfer

Using Neural Transfer to transfer style of an image to content of another image

Introduction

Gatys et al. (2016) introduced a novel way of using convolutional neural networks (CNNs) to transfer the style of one image onto the content of another. This process is also known as Neural Transfer.

The idea is to use the feature maps learned during the training of a CNN model to take a new image and reproduce it with a new artistic style.
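The key trick is that style is compared through feature statistics rather than raw pixels. A minimal NumPy sketch of the style comparison via Gram matrices (the repo itself works with PyTorch tensors; the function names here are illustrative, not from the repo):

```python
import numpy as np

def gram_matrix(feature_map):
    # feature_map: (channels, height, width) activations from one CNN layer
    c, h, w = feature_map.shape
    flat = feature_map.reshape(c, h * w)  # one row per channel
    return flat @ flat.T                  # (c, c) channel correlations

def style_loss(target_features, style_features):
    # Mean squared difference between Gram matrices, normalized as in Gatys et al.
    c, h, w = target_features.shape
    g_t = gram_matrix(target_features)
    g_s = gram_matrix(style_features)
    return np.sum((g_t - g_s) ** 2) / (4.0 * (c * h * w) ** 2)
```

Because the Gram matrix discards spatial positions and keeps only channel correlations, two images can match in "style" without sharing any layout.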

Implementation

The code was implemented using PyTorch 1.0 and run on Google Colab.

The following results from the paper were replicated (or attempted):

  • Given a content image and a style image, output a new image that combines the style and content of the originals (Figure 3 in the paper)
  • Show the effect of the relative weighting of content and style, i.e. the effect of the ratio alpha/beta (Figure 4 in the paper)
  • Show the effect of matching the content representation in different layers of the network (Figure 5 in the paper)
  • Show the effect of initializing the target image from the content image vs. the style image (Figure 6 in the paper)
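The alpha/beta weighting studied above enters as a weighted sum of the two losses. A small sketch (names and default values are illustrative, not taken from the repo):

```python
import numpy as np

def content_loss(target_features, content_features):
    # Squared error between feature maps of the target and content images
    return 0.5 * np.sum((target_features - content_features) ** 2)

def total_loss(c_loss, s_loss, alpha=1.0, beta=1e4):
    # alpha weights content fidelity, beta weights style fidelity;
    # only the ratio alpha/beta matters for the trade-off (Figure 4).
    return alpha * c_loss + beta * s_loss
```

A small alpha/beta ratio (e.g. 1x10^(-4)) lets style dominate; a large ratio keeps the output close to the content image.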

CREDITS

The idea for this project came from the content and an exercise in the Udacity course Intro to Deep Learning with PyTorch. Some of the code has been adapted from there, as well as from the official PyTorch tutorial.

Results

STYLE TRANSFER USING DIFFERENT STYLE IMAGES

The following figure shows the results of applying the style transfer algorithm to a sample image with different style images. This figure is analogous to Figure 3 of the paper.

  • Parameters used: an alpha/beta ratio of 1x10^(-4); the optimizer was run for 8000 steps.

Figure 3
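Each of those 8000 steps updates the target image's pixels to reduce the loss. A toy sketch of the loop, descending only a pixel-space content term (the actual repo instead descends the weighted content+style loss computed from CNN feature maps, with gradients from PyTorch's autograd; this standalone version is for illustration only):

```python
import numpy as np

def optimize_target(target, content, steps=8000, lr=0.01):
    # Toy version: plain gradient descent pulling the target's pixels toward
    # the content image, i.e. minimizing 0.5 * ||target - content||^2.
    for _ in range(steps):
        grad = target - content       # analytic gradient of the toy loss
        target = target - lr * grad   # gradient-descent update
    return target
```

The structure is the same as the real run: the image itself is the only trainable variable, and the network weights stay frozen.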

EFFECT OF INITIALIZATION OF GRADIENT DESCENT

This figure is analogous to Figure 6 of the paper. It shows results after initializing the target image from different starting points (content image, style image, or white noise).

  • Initialization from the content image: runs = 8000, alpha/beta = 1x10^(-4)
  • Initialization from the style image: runs = 15000, alpha/beta = 1
  • Initialization from white noise: runs = 25000, alpha/beta = 1
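The three starting points compared above can be sketched with a small helper (hypothetical, not from the repo):

```python
import numpy as np

def init_target(content_img, style_img, mode="content", seed=0):
    # Three initialization schemes compared in Figure 6 of the paper.
    if mode == "content":
        return content_img.copy()
    if mode == "style":
        return style_img.copy()
    # White-noise initialization: Gaussian noise, same shape as the content image
    rng = np.random.default_rng(seed)
    return rng.normal(size=content_img.shape)
```

Starting from the content image gives the optimizer the content structure for free, which is consistent with it needing the fewest runs in the results above.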

Different parameters were used here because the alpha/beta ratio of 1x10^(-3) used in the paper did not yield the same results with the same number of runs. Initialization from the style image or from white noise required more runs and a more balanced alpha/beta ratio to give reasonable results.

Figure 6

FIGURES 4 AND 5

The results shown in Figures 4 and 5 of the paper could not be reproduced: using the same set of parameters specified in the paper produced no discernible differences between the outputs.
