This repository implements Neural Style Transfer. It takes two images as input: a content image (the main input) and a style image (the filter). It then generates a third image that combines the high-level features (content) of the content image with the lower-level features (style, i.e. textures and colors) of the style image.
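The idea above can be sketched as a loss function: the content loss compares high-level activations directly, while the style loss compares Gram matrices (channel-wise feature correlations), which capture texture rather than spatial layout. This is a minimal NumPy illustration of that objective, not the notebook's actual implementation; the function names and weights are illustrative.

```python
import numpy as np

def gram_matrix(features):
    # features: (height, width, channels) activation map from one layer.
    # Channel-wise correlations encode texture (style), discarding layout.
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def content_loss(content_feat, generated_feat):
    # High-level activations of the generated image should match the content image.
    return np.mean((generated_feat - content_feat) ** 2)

def style_loss(style_feat, generated_feat):
    # Gram matrices of the generated image should match the style image.
    return np.mean((gram_matrix(style_feat) - gram_matrix(generated_feat)) ** 2)

def total_loss(content_feat, style_feat, generated_feat,
               content_weight=1.0, style_weight=1e3):
    # Weighted sum; the generated image is optimized to minimize this.
    return (content_weight * content_loss(content_feat, generated_feat)
            + style_weight * style_loss(style_feat, generated_feat))
```

In practice the features come from a pretrained CNN (e.g. VGG), content from one deep layer and style from Gram matrices across several layers, and the output image is obtained by gradient descent on this loss.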
The detailed workflow is explained in style_transfer.ipynb. An in-depth analysis of the results and a comparison with the Prisma app is shown in the images folder.
Content | Style | Output |
---|---|---|
- [1] L. Gatys, A. Ecker, M. Bethge, "A Neural Algorithm of Artistic Style" — https://arxiv.org/abs/1508.06576
- [2] J. Johnson, A. Alahi, L. Fei-Fei, "Perceptual Losses for Real-Time Style Transfer and Super-Resolution" — https://arxiv.org/abs/1603.08155
- [3] *Deep Learning with Python* by Francois Chollet
The notebooks were created in Google Colab, and input and output images are read from and written to Google Drive; the link is included.