
Facelet-Bank for Fast Portrait Manipulation



  • Python 2.7 or Python 3.6
  • NVIDIA GPU (a CPU is sufficient for testing only)
  • Linux or MacOS

Getting Started


Install PyTorch. The code is tested with version 0.3.1.

pip install

Clone this project to your machine.

git clone
cd Facelet_Bank


Run

pip install -r requirements.txt

to install the other required packages.

How to use

We support testing on images and videos.

To test an image:

python test_image --input_path examples/input.png --effect facehair --strength 5

If "--input_path" points to a folder, all images in that folder will be tested.
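The folder handling can be sketched as follows; `collect_inputs` and the extension filter are illustrative assumptions, not the project's actual code:

```python
import os

IMAGE_EXTS = {".png", ".jpg", ".jpeg"}  # assumed extension filter

def collect_inputs(input_path):
    """Return the list of image paths to process.

    If input_path is a folder, every image inside it is tested;
    otherwise the single file itself is returned.
    """
    if os.path.isdir(input_path):
        return sorted(
            os.path.join(input_path, f)
            for f in os.listdir(input_path)
            if os.path.splitext(f)[1].lower() in IMAGE_EXTS
        )
    return [input_path]
```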

To test a video:

python test_video --input_path examples/input.mp4 --effect facehair --strength 5

Note that all required models will be downloaded automatically the first time you run the code. Alternatively, you can manually download the facelet_bank folder from Dropbox or Baidu Netdisk and put it in the root directory.

If you do not have a GPU, please add the "-cpu" argument to your command. To speed things up, you can optionally run on a smaller image by specifying the "--size" option.

python test_image --input_path examples/input.png --effect facehair --strength 5 --size 400,300 -cpu
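The --size value appears to take the form width,height; a minimal parser sketch (the validation details are assumptions, not the project's actual code):

```python
def parse_size(value):
    """Parse a --size value such as "400,300" into (width, height)."""
    try:
        width, height = (int(v) for v in value.split(","))
    except ValueError:
        raise ValueError("--size expects 'width,height', e.g. 400,300")
    if width <= 0 or height <= 0:
        raise ValueError("--size dimensions must be positive")
    return width, height
```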

For more details, please run

python test_image --help


python test_video --help

Note: Although this framework is fairly robust, extreme cases can degrade quality. For example, an extremely high strength may cause artifacts, and an extremely large image may not work as well as one of moderate size (from 448 x 448 to 600 x 800).
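To stay within the recommended range, an over-large image can be scaled down so its longer side does not exceed roughly 800 pixels while preserving the aspect ratio. A sketch of the size computation (the 800-pixel cap is an assumption based on the note above):

```python
def fit_within(width, height, max_side=800):
    """Scale (width, height) down so the longer side is at most max_side,
    preserving aspect ratio; images already small enough are unchanged."""
    longest = max(width, height)
    if longest <= max_side:
        return width, height
    scale = max_side / longest
    return max(1, round(width * scale)), max(1, round(height * scale))
```

The resulting pair can then be passed to the "--size" option.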

More effects

The current project supports

  • facehair
  • older
  • younger
  • feminization
  • masculinization

More effects will be available in the future. Once a new effect is released, this file will be updated accordingly. We also provide instructions for training your own effect below.
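Since only the five effects listed above are currently available, a script might validate the --effect value before downloading any model. A hypothetical sketch (the function and error wording are illustrative, not the project's actual code):

```python
# The five effects currently supported, per the list above.
SUPPORTED_EFFECTS = ("facehair", "older", "younger",
                     "feminization", "masculinization")

def check_effect(name):
    """Validate an --effect value against the currently supported list."""
    if name not in SUPPORTED_EFFECTS:
        raise ValueError("unknown effect %r; choose one of: %s"
                         % (name, ", ".join(SUPPORTED_EFFECTS)))
    return name
```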




Train your own effect

Training our network requires two steps: generating the attribute vector (Eq. (6) in our paper) and training the facelet model.

Generating attribute vector

We utilize the Deep Feature Interpolation (DFI) project to generate attribute vectors as pseudo labels to supervise our facelet network. Please see that project for more details.

After setting up the DFI project, copy DFI/ to its root directory. Then cd to the DFI project folder and run

python --effect facehair --input_path images/celeba --npz_path attribute_vector

This extracts the facehair effect from the images/celeba folder and saves the extracted attribute vectors to the attribute_vector folder. For more details, please run

python --help

Note: In our implementation, we use the aligned version of the CelebA dataset for training and resize the images to 448 x 448.

In our experience, 2000-3000 samples should be enough to train a facelet model.
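As a mental model (not DFI's actual code), the attribute vector of Eq. (6) is the difference between the mean deep feature of images that have the attribute and the mean deep feature of images that lack it. A toy sketch with plain Python lists standing in for VGG features:

```python
def attribute_vector(positive_feats, negative_feats):
    """Mean feature of attribute-positive images minus mean feature of
    attribute-negative images: the direction that "adds" the attribute."""
    def mean(feats):
        n = len(feats)
        return [sum(col) / n for col in zip(*feats)]
    pos, neg = mean(positive_feats), mean(negative_feats)
    return [p - q for p, q in zip(pos, neg)]
```

In the real pipeline these vectors are computed on deep VGG features and saved as .npz files.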

Training Facelet model

After generating enough attribute vectors, we can use them to train a facelet model. Please cd to the Facelet_Bank folder and run

python --effect facehair --input_path ../deepfeatinterp/images/celeba --npz_path ../deepfeatinterp/attribute_vector

where "--input_path" is the training image folder (the one used for generating the attribute vectors), and "--npz_path" is the folder containing the generated attribute vectors.
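One plausible way the trainer could match each training image to its attribute vector is by base file name; this naming convention is an assumption, not the project's documented behavior:

```python
import os

def pair_samples(image_files, npz_files):
    """Match image file names with attribute-vector .npz file names by
    base name (as from os.listdir); returns sorted (image, npz) pairs."""
    npz_by_stem = {
        os.path.splitext(f)[0]: f for f in npz_files if f.endswith(".npz")
    }
    pairs = []
    for img in sorted(image_files):
        stem = os.path.splitext(img)[0]
        if stem in npz_by_stem:
            pairs.append((img, npz_by_stem[stem]))
    return pairs
```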

For more details, please run

python --help

Testing your own model

The trained facelet model is stored in the checkpoint folder. To test it, please include the "--local_model" argument, i.e.,

python test_image --input_path examples/input.png --effect facehair --strength 5 --local_model
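Conceptually, the strength value scales the facelet-predicted feature shift before it is added back to the image features; a minimal sketch with illustrative names (not the project's actual code):

```python
def apply_strength(features, shift, strength):
    """Add the facelet-predicted shift to the deep features,
    scaled by the user-chosen strength."""
    return [f + strength * s for f, s in zip(features, shift)]
```

A higher strength therefore pushes the features further along the effect direction, which is why extreme values can produce artifacts.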


Ying-Cong Chen, Huaijia Lin, Michelle Shu, Ruiyu Li, Xin Tao, Yangang Ye, Xiaoyong Shen, Jiaya Jia, "Facelet-Bank for Fast Portrait Manipulation", Computer Vision and Pattern Recognition (CVPR), 2018. pdf

@inproceedings{chen2018facelet,
  title={Facelet-Bank for Fast Portrait Manipulation},
  author={Chen, Ying-Cong and Lin, Huaijia and Shu, Michelle and Li, Ruiyu and Tao, Xin and Ye, Yangang and Shen, Xiaoyong and Jia, Jiaya},
  booktitle={Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}


Please contact us if you have any questions or suggestions.

