
ChandhiniG/Generate-Stylized-Image-from-Edges


Chandhini Grandhi, cgrandhi@ucsd.edu

Abstract Proposal

This project builds an image-to-image translation pipeline using the pix2pix model and combines it with neural style transfer to generate a stylized image from sketches. It uses a dataset of face photos and sketches from the CUHK dataset. The pipeline has two phases. The first model is a pix2pix generative adversarial network that takes in a sketch, performs the required processing, and generates a photo from it; essentially, this step translates edges to faces. The second model is neural style transfer, whose content image is the output of the pix2pix model and whose style image is chosen by the user. The final output is a stylized version of the face image generated from edges.

I first built the models and experimented with the dataset. Then I used sketches drawn by my friends (available in data/user-images), generated standalone faces from these user sketches, and performed style transfer on them.
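The pix2pix stage consumes edge maps paired with face photos. As a rough illustration of how an edge map can be derived from a photo, the sketch below thresholds the image's gradient magnitude; this is only a simplified NumPy stand-in for illustration, not the project's actual preprocessing (the CUHK dataset provides hand-drawn sketches).

```python
import numpy as np

def edge_map(image, threshold=0.2):
    """Simplified edge extraction: gradient magnitude of a grayscale
    image, thresholded to a binary, sketch-like map.

    image: 2-D float array with values in [0, 1].
    Returns a 2-D array of 0.0 (background) and 1.0 (edge).
    """
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)          # per-pixel gradient strength
    return (magnitude > threshold).astype(float)

# Example: a bright square on a dark background yields edges
# along the square's border only.
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
edges = edge_map(img)
```

A real pipeline would typically use a stronger detector (e.g. OpenCV's Canny) before feeding the result to the edges-to-faces generator.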

Project Report

The report is available here

Model/Data

Files included in this repository:

  • data: input and generated output images of the pix2pix model
  • trained models: single_test contains the trained checkpoint for the pix2pix model
  • pix2pixtensorflow: contains the cloned version of the pix2pix model

Code

Code for generating the project:

  • pix2pix model: followed the steps in the cloned pix2pix repository (pix2pixtensorflow)
  • Style transfer model: style_transfer.ipynb
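The style-transfer notebook optimizes a generated image so that its feature statistics match the style image. The core ingredient is the Gram-matrix style loss (Gatys et al.); the NumPy sketch below illustrates that loss on raw feature maps and is an assumption-laden simplification, not the notebook's TensorFlow code.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map: channel-to-channel correlations
    that capture texture ("style") while discarding spatial layout.

    features: array of shape (H, W, C).
    Returns: (C, C) matrix, normalized by the number of positions.
    """
    h, w, c = features.shape
    flat = features.reshape(h * w, c)
    return flat.T @ flat / (h * w)

def style_loss(style_features, generated_features):
    """Mean squared difference between the Gram matrices of the
    style image's features and the generated image's features."""
    g_style = gram_matrix(style_features)
    g_gen = gram_matrix(generated_features)
    return float(np.mean((g_style - g_gen) ** 2))
```

In the full method this loss is summed over several layers of a pretrained CNN (commonly VGG) and combined with a content loss on the pix2pix output, then minimized by gradient descent on the generated image's pixels.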

Results

Two sets of results are shown below:

  1. Generated stylized images from the validation dataset during testing
  • Input image
  • Edges-to-face image generated by the pix2pix model
  • Stylized image

  2. Generated stylized images from user inputs (sketches drawn by my friends)
  • Input image
  • Edges-to-face image generated by the pix2pix model
  • Stylized image

Technical Notes

  • The code runs on Google Colab.
  • The code requires the TensorFlow and OpenCV libraries, installable via pip.
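A minimal setup sketch for a local run; exact versions are not recorded in this repository, so the package names below are the usual pip distributions rather than pinned requirements.

```shell
# Install the main dependencies; opencv-python is the standard
# pip distribution of OpenCV (versions are not pinned here).
pip install tensorflow opencv-python
```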

References

About

Image-to-Image Translation using pix2pix and Neural Style Transfer
