
License: CC BY-NC-SA 4.0 · Python 3.6

[IJCAI-2020] Real-World Automatic Makeup via Identity Preservation Makeup Net

Zhikun Huang · Zhedong Zheng · Chenggang Yan · Hongtao Xie · Yaoqi Sun · Jianzhong Wang · Jiyong Zhang


Abstract: This paper focuses on the real-world automatic makeup problem. Given one non-makeup target image and one reference image, automatic makeup aims to generate one face image that maintains the original identity while adopting the makeup style of the reference image. In the real-world scenario, the face makeup task demands a system that is robust to environmental variations. The two main challenges in real-world face makeup can be summarized as follows: first, the background in real-world images is complicated, and previous methods are prone to changing the style of the background as well; second, the foreground face is also easily affected. For instance, "heavy" makeup may lose the discriminative information of the original identity. To address these two challenges, we introduce a new makeup model, called Identity Preservation Makeup Net (IPM-Net), which preserves not only the background but also the critical patterns of the original identity. Specifically, we disentangle face images into two different information codes, i.e., an identity content code and a makeup style code. At inference time, we only need to change the makeup style code to generate various makeup images of the target person. In the experiments, we show that the proposed method not only achieves better realism (FID) and diversity (LPIPS) on the test set, but also works well on real-world images collected from the Internet.

Dataset Preparation

We train and test our model on the widely-used Makeup Transfer dataset.

Training

You can train your own model after downloading the dataset and preprocessing the data.

In addition, we use a ResNet50 model pre-trained on VGGFace2. Download the model file [resnet50_ft_weight.pkl](https://drive.google.com/file/d/1A94PAAnwk6L7hXdBXLFosB_s0SzEhAFU/view) and move it to `~/IPM-Net/`.
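For orientation, here is a minimal sketch of how such pickled VGGFace2 weights are commonly loaded into a torchvision ResNet50. The function name and key layout are assumptions, not the repository's actual loader; consult IPM-Net's own code for the exact usage.

```python
# A hedged sketch: load pickled VGGFace2 weights into a torchvision ResNet50.
# The key layout inside resnet50_ft_weight.pkl may not match torchvision's
# parameter names exactly, hence strict=False below.
import pickle
import torch
import torchvision.models as models

def load_vggface2_resnet50(pkl_path="resnet50_ft_weight.pkl"):
    model = models.resnet50(num_classes=8631)  # VGGFace2 has 8631 identities
    with open(pkl_path, "rb") as f:
        weights = pickle.load(f, encoding="latin1")  # dict of numpy arrays
    state_dict = {k: torch.from_numpy(v) for k, v in weights.items()}
    model.load_state_dict(state_dict, strict=False)  # skip mismatched keys
    model.eval()
    return model
```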

Image preprocessing

To train your own model, you must preprocess the dataset first. You can use MATLAB and the code we provide in the preprocessing folder. highcontract_texture.m provides the differential filter used in our model to extract the facial texture, and sobel_texture.m provides a Sobel operator as an alternative texture extractor. In addition, if you collect makeup and non-makeup images from the Internet to train or test the model, you have to parse the faces in the new images before preprocessing them.
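As a rough guide, the Python sketch below approximates what sobel_texture.m computes: a Sobel edge map used as a texture image. OpenCV is assumed and the file names are placeholders; the MATLAB scripts remain the reference implementation.

```python
# Approximate Python equivalent of sobel_texture.m (not the original script).
import cv2
import numpy as np

def sobel_texture(image_path, out_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)  # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)  # vertical gradient
    magnitude = np.sqrt(gx ** 2 + gy ** 2)            # edge strength
    texture = cv2.normalize(magnitude, None, 0, 255, cv2.NORM_MINMAX)
    cv2.imwrite(out_path, texture.astype(np.uint8))

sobel_texture("input_face.jpg", "input_face_texture.jpg")
```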

Train IPM-Net

  1. Set up the yaml file. Check out config/***.yaml and change the data_root field to the path of your prepared folder-based dataset (a hypothetical excerpt is shown after the training command below).

  2. Start training, and you can use tensorboard to visualize your loss log.

python train.py --config config/***.yaml
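For reference, a hypothetical excerpt of such a config file. Only the data_root field is named in this README; the remaining fields are common training options shown purely to illustrate the shape and may not match the repository's actual keys.

```yaml
# Hypothetical config excerpt; only data_root is documented above.
data_root: ./dataset/MT-prepared   # path to your folder-based dataset
batch_size: 1
max_iter: 100000
snapshot_save_iter: 10000
```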

Testing

To test the model, you will need a CUDA-capable GPU with PyTorch, the CUDA/cuDNN drivers, tensorboardX, and pyyaml installed. The model file resnet50_ft_weight.pkl should be downloaded first (see above).
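Assuming the standard PyPI package names, the Python dependencies can be installed with:

pip install torch tensorboardX pyyaml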

Download the trained model

We provide our trained model. You can download it from Google Drive (or Baidu Disk, password: yuxv). You should create a folder named outputs in ~/IPM-Net to store the trained model.
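For example:

mkdir -p ~/IPM-Net/outputs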

Testing the model

You may test our trained model using test.py and the few images in the dataset folder, or collect makeup and non-makeup images from the Internet to test it.

python test.py 

If you want to test your own trained model, remember to change the parameters name and num in test.py.

Image generation evaluation

You may generate a large number of images and then evaluate them with FID (realism) and LPIPS (diversity).
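As a starting point, here is a hedged evaluation sketch using the third-party pytorch-fid and lpips packages (pip install pytorch-fid lpips). The paths and file names are placeholders; this is not the authors' evaluation script.

```python
# Evaluation sketch: FID between real and generated folders, LPIPS between
# two generated images (average over many pairs to measure diversity).
import torch
import lpips
from pytorch_fid.fid_score import calculate_fid_given_paths

device = "cuda" if torch.cuda.is_available() else "cpu"

# FID: lower is better (more realistic generations).
fid = calculate_fid_given_paths(
    ["path/to/real_images", "path/to/generated_images"],
    batch_size=50, device=device, dims=2048)
print("FID:", fid)

# LPIPS: higher average pairwise distance means more diverse generations.
loss_fn = lpips.LPIPS(net="alex").to(device)
img0 = lpips.im2tensor(lpips.load_image("gen_a.jpg")).to(device)  # in [-1, 1]
img1 = lpips.im2tensor(lpips.load_image("gen_b.jpg")).to(device)
print("LPIPS:", loss_fn(img0, img1).item())
```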

Related Work

We compared with BeautyGAN, which is also GAN-based and has open-sourced its trained model. We forked the code and made some changes for evaluation; we thank the authors for their great work. We would also like to thank the great projects [CycleGAN](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix), MUNIT and DG-Net.

License

Copyright (C) 2020 Hangzhou Dianzi University. All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International). The code is released for academic research use only.
