The model is based on "Multi-Content GAN for Few-Shot Font Style Transfer" (Samaneh Azadi, Matthew Fisher, Vladimir Kim, Zhaowen Wang, Eli Shechtman, and Trevor Darrell, arXiv, 2017).
When editing photos or artwork in Photoshop, designers often need to match the font used in a brand logo or artwork. Photoshop provides a search tool that finds the most similar fonts in its library, but this is not always sufficient. Using the GAN by Azadi et al. together with some image-processing techniques, we can copy the font of a given image.
A page from the Speakeasy magazine
The program works as follows: it segments the text from the given image and, after some processing, feeds the characters as input to the GAN models. The model is set to generate the 26 uppercase letters from 5 given characters, but it can also generate all 52 uppercase and lowercase letters if you adjust some options.
1. Remove the background and segment the text from the image (segmentation.py)
2. Classify the characters segmented from the image (classify.py)
3. Create the dataset for the GAN model (dataset_gen.py)
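To illustrate the segmentation step, here is a minimal sketch of how character regions can be extracted from a binarized page: threshold the grayscale image, find connected dark components, and return their bounding boxes in reading order. This is a simplified stand-in, not the actual logic of segmentation.py; the function name and threshold are assumptions.

```python
import numpy as np

def segment_characters(gray, thresh=128):
    """Binarize a grayscale image (dark text on a light background) and
    return left-to-right sorted bounding boxes (x0, y0, x1, y1) of the
    connected dark regions, i.e. candidate characters.
    Illustrative sketch only; the repo's segmentation.py may differ."""
    binary = gray < thresh
    labels = np.zeros(gray.shape, dtype=int)
    current = 0
    boxes = []
    h, w = gray.shape
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1
                # Iterative flood fill with 4-connectivity.
                stack = [(i, j)]
                labels[i, j] = current
                ys, xs = [i], [j]
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            stack.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    # Sort left to right so crops follow reading order.
    boxes.sort(key=lambda b: b[0])
    return boxes
```

Each box can then be cropped and passed to the classification step. A real implementation would also filter out noise blobs and merge multi-part glyphs such as "i".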
You have to check the dataset before running the GAN model: since segmentation is done with basic image processing, its accuracy is sometimes not good enough.
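A quick automated sanity check can catch the most obvious segmentation failures before training. The sketch below flags a wrong number of character crops and near-empty image files; the folder layout, file naming, and thresholds are assumptions, so adapt them to whatever dataset_gen.py actually produces.

```python
import os

def check_dataset(folder, expected=5, min_bytes=100):
    """Return a list of human-readable problems found in a dataset folder:
    a crop count different from `expected`, or files so small the crop is
    probably empty. Hypothetical helper; layout assumed, not from the repo."""
    files = sorted(f for f in os.listdir(folder) if f.endswith(".png"))
    problems = []
    if len(files) != expected:
        problems.append(f"expected {expected} crops, found {len(files)}")
    for name in files:
        size = os.path.getsize(os.path.join(folder, name))
        if size < min_bytes:
            problems.append(f"{name} is only {size} bytes; crop may be empty")
    return problems
```

An empty return value means the folder passed the check; anything else is worth inspecting visually before training.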
Train and test the GAN model with:

mc-gan-master/scripts/train_StackGAN.sh [image folder]
mc-gan-master/scripts/test_StackGAN.sh [image folder]