
Sketchy

Sketch inversion using DCNNs (synthesising photo-realistic images from pencil sketches) following the work of Convolutional Sketch Inversion and Scribbler.

This project focuses mainly on sketches of human faces and architectural drawings of buildings. However, according to Scribbler and subsequent experimentation with its proposed framework, it is plausible that, given a large dataset and ample training time, the network could generalise to other categories as well.

The model was first trained on a dataset generated from a large database of face images, and the network was then fine-tuned for architectural sketches.

Results

Faces (sample sketch-to-photo results)

Buildings (sample sketch-to-photo results)

Datasets

Sketching

The datasets were simulated; that is, the sketches were generated using the following methods (with the exception of the CUHK dataset, which already contains sketches and their corresponding photographs).

Furthermore, because relatively few building images were available, various augmentations were applied to the ZuBuD dataset using the following image augmentation tool to produce more images.
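For illustration, a common way to simulate pencil sketches from photographs is a grayscale color-dodge blend (grayscale → invert → blur → dodge). The OpenCV snippet below is a minimal sketch of that idea, consistent with a "Pencil Sketchify"-style transform; it is an assumption of how such a transform can be implemented, not necessarily the exact pipeline used for these datasets.

```python
import cv2

def pencil_sketchify(image_path):
    """Simulate a pencil sketch: grayscale -> invert -> blur -> color dodge."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    inverted = 255 - gray
    # A larger Gaussian kernel gives softer, wider "strokes".
    blurred = cv2.GaussianBlur(inverted, (21, 21), 0)
    # Color dodge blend: gray / (255 - blurred), rescaled to the 8-bit range.
    return cv2.divide(gray, 255 - blurred, scale=256.0)

# Example usage (hypothetical file names):
sketch = pencil_sketchify("face.jpg")
cv2.imwrite("face_sketch.jpg", sketch)
```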

Network Architecture

The network architecture from Scribbler was used. The generator follows an encoder-decoder design: a series of down-sampling convolutions, followed by residual blocks, followed by up-sampling steps.
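A minimal PyTorch sketch of such a generator is shown below. The exact layer widths, kernel sizes, and number of residual blocks here are illustrative assumptions, not the values from the Scribbler paper.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Skip connection around two convolutions.
        return x + self.body(x)

class Generator(nn.Module):
    """Encoder-decoder: downsample -> residual blocks -> upsample."""

    def __init__(self, in_ch=1, out_ch=3, base=32, n_res=5):
        super().__init__()
        # Encoder: two stride-2 convolutions halve the resolution twice.
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.res = nn.Sequential(*[ResidualBlock(base * 4) for _ in range(n_res)])
        # Decoder: transposed convolutions restore the input resolution.
        self.up = nn.Sequential(
            nn.ConvTranspose2d(base * 4, base * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, out_ch, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.up(self.res(self.down(x)))
```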

Loss Functions

The loss was computed as a weighted sum of three terms: pixel loss, feature loss, and total-variation loss.

The pixel loss was computed as the mean squared error between the true and predicted images:

L_p = (1 / (n·m·c)) · Σ (t − p)²

where t is the true image, p is the predicted image, and n, m, c are the height, width, and number of color channels respectively.

The feature loss was computed in the same way, but in feature space:

L_f = (1 / (n·m·c)) · Σ (φ(t) − φ(p))²

where φ(x) is the output of the fourth layer (relu_2_2) of a pre-trained VGG16 model, used to feature-transform the targets and predictions.

The total-variation loss was used to encourage smoothness of the output and was computed as:

L_v = Σ_{i,j} √((p_{i+1,j} − p_{i,j})² + (p_{i,j+1} − p_{i,j})²)

The total loss is then computed as:

L_t = w_p·L_p + w_f·L_f + w_v·L_v

For the present application, w_p = 1, w_f = 0.001, and w_v = 0.00001.
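The combined loss can be written compactly in PyTorch. The following is a minimal sketch under the definitions above; the relu_2_2 slice index and the small epsilon in the TV term are implementation assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# VGG16 feature extractor up to relu_2_2 (index 8 of .features), frozen.
_vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:9].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def pixel_loss(pred, target):
    # L_p: mean squared error over all pixels and channels.
    return F.mse_loss(pred, target)

def feature_loss(pred, target):
    # L_f: mean squared error in VGG16 relu_2_2 feature space.
    return F.mse_loss(_vgg(pred), _vgg(target))

def tv_loss(pred):
    # L_v: square root of squared vertical and horizontal differences,
    # summed over pixels (epsilon keeps the gradient finite at zero).
    dh = pred[:, :, 1:, :-1] - pred[:, :, :-1, :-1]
    dw = pred[:, :, :-1, 1:] - pred[:, :, :-1, :-1]
    return torch.sqrt(dh ** 2 + dw ** 2 + 1e-8).sum()

def total_loss(pred, target, w_p=1.0, w_f=0.001, w_v=0.00001):
    # L_t = w_p*L_p + w_f*L_f + w_v*L_v, with the weights given above.
    return (w_p * pixel_loss(pred, target)
            + w_f * feature_loss(pred, target)
            + w_v * tv_loss(pred))
```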

Pretrained weights

  • Face Weights after training the network on the CelebA dataset using the Pencil Sketchify method
  • Building Weights after fine-tuning the network for the building sketches using the augmented ZuBuD dataset with the Pencil Sketchify method
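As a usage sketch, the weights could be loaded into the generator sketched above and fine-tuned roughly as follows; the checkpoint file name, data loader, and hyper-parameters are hypothetical.

```python
import torch

generator = Generator()  # the generator sketched above
# "face_weights.pth" is a hypothetical name for the downloaded weights file.
generator.load_state_dict(torch.load("face_weights.pth", map_location="cpu"))

# Fine-tune on building sketches with a modest learning rate.
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
generator.train()
for sketches, photos in building_loader:  # hypothetical DataLoader of (sketch, photo) pairs
    optimizer.zero_grad()
    loss = total_loss(generator(sketches), photos)
    loss.backward()
    optimizer.step()
```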

Todo

  • Training with a larger building dataset using a variety of sketch styles to improve the generality of the network.
  • Adding adversarial loss to the network.
  • Using sketch anti-roughing to unify the styles of the training and input sketches.
  • Passing the sketch results to a super-resolution network to improve image clarity.
  • Increasing the image size of the training data.

References

Sketchback