This is the official PyTorch code for "FPMT: Fast and Precise High-Resolution Makeup Transfer via Frequency Decomposition".
The training code, testing code, and pre-trained model have all been open-sourced.
In this paper, we focus on accelerating the high-resolution makeup transfer process without compromising generative performance.
- Our paper SHMT was accepted by NeurIPS 2024. Paper link and code link.
- Our paper CSD-MT was accepted by CVPR 2024. Paper link and code link.
- Our paper SSAT++ was accepted by TNNLS 2023. Paper link and code link.
- Our paper SSAT was accepted by AAAI 2022. Paper link and code link.
If you only want to get results quickly, go to the "quick_start" folder and follow the readme.md inside to generate results.
The pre-trained model is very small and is already included in that folder.
We recommend simply using your own PyTorch environment, since the environment needed to run our model is very simple. If you do so, please skip the following environment creation steps.
A suitable conda environment named FPMT can be created and activated with:
conda env create -f environment.yaml
conda activate FPMT
- The MT dataset can be downloaded here: BeautyGAN. Extract the downloaded file and place it in the root of this repository.
- Prepare face parsing. Face parsing maps are required by this code; in our experiments they were generated with https://github.com/zllrunning/face-parsing.PyTorch.
- Put the face parsing results in ./MT-Dataset/seg1/makeup and ./MT-Dataset/seg1/non-makeup.
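As a sanity check before training, you can verify that every face image has a parsing map of the same file name. This is a sketch, not part of the repo; the "images" sub-folder name is an assumption and may differ in your copy of the MT dataset.

```python
import os

def missing_parsing_maps(image_dir, seg_dir):
    """Return image file names in image_dir that have no
    same-named parsing map in seg_dir."""
    return [name for name in sorted(os.listdir(image_dir))
            if not os.path.exists(os.path.join(seg_dir, name))]

# Hypothetical layout: adjust "images" to wherever the raw photos live.
for split in ("makeup", "non-makeup"):
    img_dir = os.path.join("MT-Dataset", "images", split)
    seg_dir = os.path.join("MT-Dataset", "seg1", split)
    if os.path.isdir(img_dir) and os.path.isdir(seg_dir):
        missing = missing_parsing_maps(img_dir, seg_dir)
        if missing:
            print(f"{split}: {len(missing)} images lack parsing maps")
```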
We have set the default hyperparameters in the options.py file; modify them if necessary.
- For FPMT with L=2: "crop_size=256, resize_size=int(256*1.12), num_high=2"
- For FPMT with L=3: "crop_size=512, resize_size=int(512*1.12), num_high=3"
- For FPMT with L=4: "crop_size=1024, resize_size=int(1024*1.12), num_high=4"
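The three configurations follow one pattern: at level L the crop size doubles (256, 512, 1024) and num_high equals L. The helper below reproduces them; it is a sketch for illustration only, and fpmt_options is a hypothetical name, not a function in options.py.

```python
def fpmt_options(L):
    """Default hyperparameters for FPMT at pyramid level L (L = 2, 3, or 4)."""
    crop_size = 256 * 2 ** (L - 2)             # 256, 512, 1024
    return {
        "crop_size": crop_size,
        "resize_size": int(crop_size * 1.12),  # resize slightly larger before random cropping
        "num_high": L,                         # number of high-frequency pyramid levels
    }

print(fpmt_options(3))
```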
To train the model, run:
python train.py
To generate results with a trained model, run:
python inference.py
Part of the code is built upon PSGAN, Face Parsing, aster.Pytorch, and LPTN.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.



