
Learning Selfie-Friendly Abstraction from Artistic Style Images

Yicun Liu | Jimmy Ren | Jianbo Liu | Jiawei Zhang | Xiaohao Chen

ACML 2018

This repository contains code for the paper: Learning Selfie-Friendly Abstraction from Artistic Style Images.

Contact: Yicun Liu (


The code has been tested on 64-bit Linux (Ubuntu 14.04 LTS). You will also need MATLAB (we tested with R2015a). We tested on a GTX Titan X, but the code can also run on other GPUs with at least 2 GB of VRAM, using CUDA 8.0 and cuDNN v5. Please install all of these prerequisites before running the code.


  1. Clone the code.

    git clone 
    cd Selfie-Friendly-Abstraction  
  2. Build standard Caffe following the official instructions.

    cd caffe/
    # Modify Makefile.config according to your Caffe installation. 
    # Remember to enable CUDA and cuDNN.
    make -j12
    make matcaffe
  3. Customize Caffe.

    Add the following to caffe.proto to configure the new NNUpsample layer:

    // Inside the existing LayerParameter message, add a new field.
    // Try not to conflict with any existing field numbers.
    optional NNUpsampleParameter nn_upsample_param = 163;

    // Append at the end of caffe.proto:
    message NNUpsampleParameter {
      optional uint32 resize = 1 [default = 2];
    }

    Copy include/nn_upsample_layer.hpp to caffe/include/caffe/layers/. Copy src/nn_upsample_layer.cpp and src/ to caffe/src/caffe/layers/. Then recompile both caffe and matcaffe.

    make -j12
    make matcaffe
  4. Download pretrained models.
    For the testing step, you may simply download the trained caffemodels from [DropBox][BaiduYun] and put them in the model/style/ directory.

    Additionally, if you want to train your own style-abstraction model, download the VGG-16 model from [VGG Website][DropBox][BaiduYun]; it is used to compute the perceptual loss. Put it in the model/vgg_16layers/ directory.
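The NNUpsample layer configured in step 3 performs nearest-neighbour upsampling by an integer factor (the `resize` field, default 2). For intuition only, here is a minimal NumPy sketch of that operation; `nn_upsample` is a hypothetical name, and the real implementation lives in src/nn_upsample_layer.cpp:

```python
import numpy as np

def nn_upsample(x, resize=2):
    """Nearest-neighbour upsampling of an NCHW tensor by an integer factor,
    mirroring what the NNUpsample layer's `resize` parameter (default 2)
    suggests. Illustrative sketch only."""
    # Repeat each row and each column `resize` times.
    return x.repeat(resize, axis=2).repeat(resize, axis=3)

x = np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2)
y = nn_upsample(x)
print(y.shape)  # (1, 1, 4, 4)
```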


  1. Generate image patches
    Run data/GenPatches_train_6chs.m and data/GenPatches_val_6chs.m in MATLAB to extract image patches for training and validation. We provide 40 selfie images and their corresponding outputs generated by Prisma. The selfie images and stylistic reference images are in data/training/. You may replace them with your own dataset to train on different styles.
  2. Training the model
    Run train/train_6chs_reshape.m in MATLAB to train the model. Remember to add matcaffe to your MATLAB path before training. In our experiments, the balance factor between loss_pixel and loss_feat is set to 1000.
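Step 2 mentions a balance factor of 1000 between loss_pixel and loss_feat. A minimal sketch of one plausible way to combine the two terms, for illustration only; `total_loss` and the direction of the weighting are assumptions, and the exact formula is in train/train_6chs_reshape.m:

```python
def total_loss(loss_pixel, loss_feat, balance=1000.0):
    """Combine pixel-wise loss and perceptual (VGG feature) loss.
    Assumed form: loss_pixel + balance * loss_feat, with balance = 1000
    as stated in the training instructions above."""
    return loss_pixel + balance * loss_feat

print(total_loss(0.5, 0.001))  # 1.5
```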


  1. Run test/test_6chs_reshape.m in MATLAB to test the model. Remember to add matcaffe to your MATLAB path before running the test. We provide 99 images from Flickr for testing, including portraits, landscapes, wildlife, and other scenes. The images are in data/testing/. You may replace them with your own dataset for testing.
  2. For the inter-frame consistency test, please visit our online demo to check the results.
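As a rough intuition for what an inter-frame consistency check measures, the sketch below computes the mean absolute difference between consecutive frames (lower means less flicker). This is purely illustrative; `interframe_consistency` is a hypothetical helper, not the metric used by the online demo:

```python
import numpy as np

def interframe_consistency(frames):
    """Mean absolute difference between consecutive frames -- a simple
    proxy for temporal flicker (lower = more temporally consistent).
    Illustrative only; the paper's actual evaluation may differ."""
    diffs = [np.abs(a.astype(np.float64) - b.astype(np.float64)).mean()
             for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(diffs))

# Three constant toy frames with small brightness jumps between them.
frames = [np.full((4, 4, 3), v, dtype=np.uint8) for v in (10, 12, 11)]
print(interframe_consistency(frames))  # 1.5
```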



Please cite our paper if you find it helpful for your work:

  @article{liu2018learning,
    title={Learning Selfie-Friendly Abstraction from Artistic Style Images},
    author={Liu, Yicun and Ren, Jimmy and Liu, Jianbo and Zhang, Jiawei and Chen, Xiaohao},
    journal={arXiv preprint arXiv:1805.02085},
    year={2018}
  }

  @inproceedings{liu2018selfie,
    title={Learning Selfie-Friendly Abstraction from Artistic Style Images},
    author={Liu, Yicun and Ren, Jimmy and Liu, Jianbo and Chen, Xiaohao},
    booktitle={Proceedings of The 10th Asian Conference on Machine Learning},
    year={2018}
  }





