The aim of this project is to replicate, at least partially, the model and the results presented in this paper, approaching the style transfer task by means of autoencoders trained for image reconstruction and the WCT (whitening and coloring) transformation presented there, which perturbs the latent representation of the input image.
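As a rough illustration of the WCT idea, the sketch below first whitens the flattened content features so their covariance is (approximately) the identity, then colors them with the style covariance and mean. This is a minimal numpy sketch, not the implementation in `model.py`; the function name `wct` and the `eps` regularizer are my own choices, and feature extraction/reshaping from the encoder is assumed to happen elsewhere.

```python
import numpy as np

def wct(fc, fs, eps=1e-5):
    """Whitening and Coloring Transform on flattened feature maps.

    fc, fs: arrays of shape (C, H*W) -- content and style features.
    Returns features carrying the second-order statistics of fs.
    """
    C = fc.shape[0]

    # Whitening: center the content features, then map them through
    # the inverse square root of their covariance so the result has
    # (approximately) identity covariance.
    mc = fc.mean(axis=1, keepdims=True)
    fc_c = fc - mc
    cov_c = fc_c @ fc_c.T / (fc_c.shape[1] - 1) + eps * np.eye(C)
    wc, Ec = np.linalg.eigh(cov_c)
    whitened = Ec @ np.diag(wc ** -0.5) @ Ec.T @ fc_c

    # Coloring: apply the square root of the style covariance and
    # re-add the style mean, so the output matches the style statistics.
    ms = fs.mean(axis=1, keepdims=True)
    fs_c = fs - ms
    cov_s = fs_c @ fs_c.T / (fs_c.shape[1] - 1) + eps * np.eye(C)
    ws, Es = np.linalg.eigh(cov_s)
    colored = Es @ np.diag(ws ** 0.5) @ Es.T @ whitened
    return colored + ms
```

In practice the stylized features are usually blended with the original content features (a weight α trading off stylization against content preservation) before being fed to the decoder.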
This repository contains the following files/directories:
- `images`: folder containing two images needed for the `visualize.ipynb` notebook.
- `train`: contains all the files used to train and test the models.
- `model.py`: file containing the implemented model, together with additional functions needed to implement the WCT transformation.
- `visualize.ipynb`: a toy example showing the models in action.
- `parameters`: a folder that should contain the parameters of the model.
Due to the large size of the files containing the trained parameters (~134 MB in total), we can't host them directly here on GitHub. However, following this link you should be able to download them. Once downloaded, just place them inside the local `parameters` folder.