Yicun Liu | Jimmy Ren | Jianbo Liu | Jiawei Zhang | Xiaohao Chen
This repository contains code for the paper: Learning Selfie-Friendly Abstraction from Artistic Style Images.
Contact: Yicun Liu (stanleylau@link.cuhk.edu.hk)
The code has been tested on 64-bit Linux (Ubuntu 14.04 LTS). You also need MATLAB (we tested R2015a). We tested the code on a GTX Titan X with CUDA 8.0 and cuDNN v5, but it should also run on other GPUs with at least 2 GB of VRAM. Please install all of these prerequisites before running the code.
- Clone the code.
  ```bash
  git clone https://github.com/DandilionLau/Selfie-Friendly-Abstraction.git
  cd Selfie-Friendly-Abstraction
  ```
- Build standard Caffe following the official instructions.
  ```bash
  cd caffe/
  # Modify Makefile.config according to your Caffe installation.
  # Remember to enable CUDA and cuDNN.
  make -j12
  make matcaffe
  ```
- Customize Caffe.
  Add the following messages to `caffe.proto` to configure the new NNUpsample layer:
  ```protobuf
  message LayerParameter {
    // Insert into this existing message; try not to conflict with any existing field numbers.
    optional NNUpsampleParameter nn_upsample_param = 163;
  }

  message NNUpsampleParameter {
    // Append to the end of the caffe.proto file.
    optional uint32 resize = 1 [default = 2];
  }
  ```
  Copy `include/nn_upsample_layer.hpp` to `caffe/include/caffe/layers/`. Copy `src/nn_upsample_layer.cpp` and `src/nn_upsample_layer.cu` to `caffe/src/caffe/layers/`.
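  For example, a minimal sketch of the copy step from MATLAB (run from the repository root; plain `cp` in a shell works equally well):
  ```matlab
  % Copy the custom NNUpsample layer sources into the Caffe tree.
  copyfile('include/nn_upsample_layer.hpp', 'caffe/include/caffe/layers/');
  copyfile('src/nn_upsample_layer.cpp', 'caffe/src/caffe/layers/');
  copyfile('src/nn_upsample_layer.cu', 'caffe/src/caffe/layers/');
  ```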
  Then recompile both Caffe and matcaffe:
  ```bash
  make -j12
  make matcaffe
  ```
- Download pre-trained models.
  To prepare for the testing step, you may simply download the trained caffemodels from [DropBox] [BaiduYun] and put them in the `model/style/` directory. Additionally, if you want to train your own style abstraction model, you need to download the VGG-16 model from [VGG Website] [DropBox] [BaiduYun], which is used to compute the perceptual loss, and put it in the `model/vgg_16layers/` directory.
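  As a quick sanity check (a sketch; only the two directory names come from this README, the exact caffemodel filenames depend on your download), you can verify the layout from MATLAB:
  ```matlab
  % Confirm the model directories exist before training or testing.
  assert(exist('model/style', 'dir') == 7, 'Put the trained style caffemodels in model/style/');
  assert(exist('model/vgg_16layers', 'dir') == 7, 'Put the VGG-16 model in model/vgg_16layers/');
  ```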
- Generate image patches.
  Run `data/GenPatches_train_6chs.m` and `data/GenPatches_val_6chs.m` in MATLAB to extract image patches for training and validation. We provide 40 selfie images and their corresponding output images generated by Prisma; the selfie images and stylistic reference images are in `data/training/`. You may replace them with your own dataset to train with different styles.
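  A minimal sketch of this step from the MATLAB prompt, assuming the repository root is the current folder:
  ```matlab
  % Extract training and validation patches.
  run('data/GenPatches_train_6chs.m');
  run('data/GenPatches_val_6chs.m');
  ```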
- Training the model.
  Run `train/train_6chs_reshape.m` in MATLAB to train the model. Remember to include matcaffe before training. In our experiments, the balance factor between loss_pixel and loss_feat is set to 1000.
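  A minimal sketch of a training session (not the authors' exact invocation; it assumes matcaffe was built into `caffe/matlab`, the default output of `make matcaffe`):
  ```matlab
  % Make the matcaffe interface visible, select the GPU, then launch training.
  addpath('caffe/matlab');            % default matcaffe build location (assumption)
  caffe.set_mode_gpu();               % or caffe.set_mode_cpu() without a GPU
  caffe.set_device(0);                % GPU id
  run('train/train_6chs_reshape.m');
  ```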
- Run `test/test_6chs_reshape.m` in MATLAB to test the model. Remember to include matcaffe before running the test.
  We provide 99 images from Flickr for testing, including portraits, landscapes, wildlife, and other scenes. The images are in `data/testing/`; you may replace them with your own dataset for testing.
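  As with training, a minimal sketch (same assumption about the matcaffe location):
  ```matlab
  % Include matcaffe and run the test script on data/testing/.
  addpath('caffe/matlab');
  caffe.set_mode_gpu();
  run('test/test_6chs_reshape.m');
  ```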
- For the inter-frame consistency test, please visit our online demo to check the results.
Please cite our paper if you find it helpful for your work:
```bibtex
@article{liu2018learning,
  title={Learning Selfie-Friendly Abstraction from Artistic Style Images},
  author={Liu, Yicun and Ren, Jimmy and Liu, Jianbo and Zhang, Jiawei and Chen, Xiaohao},
  journal={arXiv preprint arXiv:1805.02085},
  year={2018}
}

@InProceedings{pmlr-v95-liu18a,
  title = {Learning Selfie-Friendly Abstraction from Artistic Style Images},
  author = {Liu, Yicun and Ren, Jimmy and Liu, Jianbo and Chen, Xiaohao},
  booktitle = {Proceedings of The 10th Asian Conference on Machine Learning},
  year = {2018}
}
```