Aleadinglight/Pytorch-VGG-19
Pytorch-VGG-19

Using Pytorch to implement VGG-19

Instruction

Implementation and notes can be found here.

This is an implementation of this paper in Pytorch.

This was written using important ideas from the PyTorch tutorial. I did my best to explain the ideas in each section of the Python notebook in detail. The maths and visual illustrations can be found below.

Maths

We feed two pictures, X and Y, into the VGG-19 neural network and adjust X so that its feature maps closely match those of Y. This works because the feature maps capture both the style and the content of a picture (the convolutional layers expose progressively more abstract aspects of it). I explain this in more detail here.

We minimize the style loss to make picture X adopt the style of picture Y, and we also minimize the content loss between X and its original version, so that the content stays while the style changes.
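A sketch of how the two objectives combine: the content loss is a mean squared error between feature maps, and the total objective is a weighted sum. The weight names `alpha` and `beta` are assumptions (following the original paper's notation), not taken from this repository:

```python
import torch
import torch.nn.functional as F

def content_loss(gen_feat, content_feat):
    # MSE between the feature maps of the generated picture X and those
    # of the original content picture, at a chosen layer.
    return F.mse_loss(gen_feat, content_feat)

def total_loss(content_losses, style_losses, alpha=1.0, beta=1e3):
    # Weighted sum: alpha scales content preservation, beta scales style.
    return alpha * sum(content_losses) + beta * sum(style_losses)
```

In practice `beta` is set several orders of magnitude larger than `alpha`, since the style terms are much smaller in magnitude.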

The feature maps from X that we will use:

Using the Gram matrix, which measures the correlations between the feature maps (channels) of a layer across all spatial positions, we can calculate the style loss.
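A minimal sketch of this computation: flatten each channel into a row vector, multiply by the transpose to get channel-to-channel correlations, and compare Gram matrices between the generated and style pictures. The normalization constant and function names are assumptions, not code from this repository:

```python
import torch

def gram_matrix(feat):
    """Channel correlations of a feature map: G = F F^T, normalized."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)          # each channel becomes a row vector
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(gen_feat, style_feat):
    # Squared difference between the Gram matrices of the generated
    # picture X and the style picture Y, at one layer.
    return torch.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2)
```

Because the Gram matrix sums over all spatial positions, it discards where features occur and keeps only which features co-occur, which is what makes it a useful style representation.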

Gallery

(From / To: example pictures before and after style transfer; see the images in the repository.)
