Create your own style-transferred images directly on your mobile device. The app lets users take any photograph and turn it into an image that looks like a painting. It is written in Swift and leverages Python code with TensorFlow and Keras to train the models. It was developed with a focus on portability and the ability to run the models without access to the cloud.
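For background, style transfer models of this kind are typically trained against a style loss built from Gram matrices of CNN feature maps (the approach behind the slow and fast style transfer resources credited at the end of this README). Here is a minimal NumPy sketch of that core computation; the shapes and normalization follow the common Gatys et al. formulation and are illustrative, not the app's exact training code:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a feature map of shape (H, W, C): the channel
    correlations, which capture the 'style' of an image."""
    h, w, c = features.shape
    flat = features.reshape(h * w, c)  # one row per spatial position
    return flat.T @ flat               # (C, C)

def style_layer_loss(style_feats: np.ndarray, generated_feats: np.ndarray) -> float:
    """Mean squared difference between the two Gram matrices,
    with the usual 1 / (4 * (H*W)^2 * C^2) normalization."""
    h, w, c = style_feats.shape
    g_style = gram_matrix(style_feats)
    g_gen = gram_matrix(generated_feats)
    return float(np.sum((g_style - g_gen) ** 2) / (4 * (h * w) ** 2 * c ** 2))
```

Minimizing this loss (summed over several VGG layers, plus a content loss) is what pushes the generated image toward the style of the reference painting.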
To transform a picture, follow these steps:
- Tap the camera icon to open image selection
- Choose either the camera or the photo library as the input
- Once the image is loaded, tap `TRANSFORM` to run the style transfer model
- Tap `SAVE` to save the resulting image to your photo library
Here are a few examples of the models I have trained, which are available in the app's Git repository.
- The app can be compiled for a device or run in the simulator
- You may need to update the bundle identifier and team in the app's general tab to allow it to run on your device
- This app was built to run on any iOS device running iOS 12 or higher
- It has only been tested on an iPhone XS
- Future revisions will be tested on a broader range of iOS versions and devices
This app uses CocoaPods and the Fritz SDK. Full references for both are listed below:
- Fritz Quickstart guide
- CocoaPods
To use the other models included, you need to replace `CustomStyleModel.mlmodel` in Xcode:
- In Xcode, delete the file `CustomStyleModel.mlmodel`
- From the folder `style_transfer/models/other models`, copy another model file
- Rename the file you just copied `CustomStyleModel.mlmodel`
- Move the file back to Xcode, setting the target
- Rebuild the app in Xcode and voilà!
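If you swap models often, the copy-and-rename steps can be scripted. A minimal sketch in Python's standard library (the destination directory is an assumption; point it at wherever your Xcode project keeps the model file):

```python
import shutil
from pathlib import Path

def swap_style_model(source_model: Path, project_dir: Path) -> Path:
    """Copy a trained .mlmodel into the app project under the
    name Xcode expects (CustomStyleModel.mlmodel)."""
    destination = project_dir / "CustomStyleModel.mlmodel"
    shutil.copyfile(source_model, destination)  # overwrites any previous model
    return destination

# Example (paths are illustrative):
# swap_style_model(
#     Path("style_transfer/models/other models/starry_night.mlmodel"),
#     Path("MyStyleApp"),  # hypothetical Xcode project folder
# )
```

Note that the last two manual steps remain: the file still needs to be added to the app target in Xcode, and the app rebuilt.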
- Guide to training new models for the Fritz SDK:
https://heartbeat.fritz.ai/20-minute-masterpiece-4b6043fdfff5
- Redesigning the buttons and giving feedback on taps
- Combining the live video recording and photo transform options
- Allowing multiple style choices
- Including an easy share button
- Allowing style sharing between apps so you can give your friends the styles you created
- Leveraging Core ML 2 flexible image sizes for model inputs and outputs
- Implementing the app on Android using TensorFlow Lite
- A special thanks to Michael Ramos for the inspiration from his original work on style transfer, and to the Fritz team for building a great and easy-to-use product
- This app is under the MIT License
- The Fritz pod included in this app is under the Apache License
- My initial work on style transfer looked at a slow style transfer implementation from Andrew Ng’s Deep Learning Specialization classes on Coursera, using a VGG19 CNN:
https://www.coursera.org/learn/convolutional-neural-networks
- I initially tried to use Michael Ramos’ implementation of style transfer, and his project inspired the Swift portion of the code:
https://hackernoon.com/diy-prisma-fast-style-transfer-app-with-coreml-and-tensorflow-817c3b90dacd
https://github.com/mdramos/fast-style-transfer-coreml
- As the models in Michael Ramos’ implementation were too large for my device, I ended up going with the lighter mobile solution offered by Fritz; another option would have been to quantize the models for better portability
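Quantization shrinks a model by storing weights in 8 bits instead of 32. A minimal NumPy sketch of the idea (affine weight quantization; a real model would be quantized with coremltools or TensorFlow Lite utilities, not by hand):

```python
import numpy as np

def quantize_weights(w: np.ndarray):
    """Affine 8-bit quantization: map float32 weights onto 0..255."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 or 1.0  # guard against a constant tensor
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize_weights(q: np.ndarray, scale: float, lo: float) -> np.ndarray:
    """Recover approximate float32 weights from the 8-bit codes."""
    return q.astype(np.float32) * scale + lo

# Illustrative weight tensor (random, stands in for a conv layer):
weights = np.random.default_rng(1).normal(size=(3, 3, 64)).astype(np.float32)
q, scale, lo = quantize_weights(weights)
restored = dequantize_weights(q, scale, lo)
compression = weights.nbytes / q.nbytes          # 4x smaller storage
max_error = float(np.abs(weights - restored).max())  # small rounding error
```

The trade-off is a small reconstruction error per weight, which in practice costs little accuracy for style transfer while cutting the model size roughly fourfold.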
- Fritz GitHub
- Other interesting fast style transfer resources:
https://github.com/lengstrom/fast-style-transfer
https://arxiv.org/pdf/1603.08155.pdf
- Another interesting implementation, by Reiichiro Nakano:
https://magenta.tensorflow.org/blog/2018/12/20/style-transfer-js/
- Pretrained model resources:
https://www.tensorflow.org/lite/guide/hosted_models
http://www.vlfeat.org/matconvnet/pretrained/
- Working with pre-trained ConvNets
- Public datasets for image training
- Working with cloud computing makes it easier to train your own models. Here is a good tutorial on how to set up your instances for deep learning on AWS:
https://www.datacamp.com/community/tutorials/deep-learning-jupyter-aws