M3D-VTON: A Monocular-to-3D Virtual Try-On Network

Official code for ICCV2021 paper "M3D-VTON: A Monocular-to-3D Virtual Try-on Network"

Paper | Supplementary | MPV3D Dataset | Pretrained Models



Requirements

python >= 3.8.0, pytorch == 1.6.0, torchvision == 0.7.0
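If you need to create a matching environment from scratch, a pip install along these lines should work (treat it as a starting point; the exact wheel depends on your CUDA version):

pip install torch==1.6.0 torchvision==0.7.0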

Data Preparation

MPV3D Dataset

After downloading the MPV3D Dataset, please run the following script to preprocess the data:

python util/data_preprocessing.py --MPV3D_root path/to/MPV3D/dataset

Custom Data

If you want to process your own data, a few more steps are needed (→ indicates the folder that each image should be placed in):

  1. prepare an in-shop clothing image C (→ mpv3d_example/cloth) and a frontal person image P (→ mpv3d_example/image) with a resolution of 320*512;

  2. obtain the mask of C (→ mpv3d_example/cloth-mask) by thresholding or with an off-the-shelf background-removal tool (see the thresholding sketch after this list);

  3. obtain the human segmentation layout (→ mpv3d_example/image-parse) by applying 2D-Human-Parsing on P;

  4. obtain the human joints (→ mpv3d_example/pose) by applying OpenPose (25 keypoints) on P (a JSON-loading sketch follows this list);

  5. run the data processing script python util/data_preprocessing.py --MPV3D_root mpv3d_example to automatically obtain the remaining inputs (pre-aligned clothing, palm mask, and image gradients);

  6. now the data preparation is finished and you should be able to run inference with the steps described in the next section "Running Inference".
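For step 2, a minimal thresholding sketch using OpenCV is given below; it assumes the in-shop clothing photo has a near-uniform light background, and the file names are only examples:

import cv2

# Load the in-shop clothing image (hypothetical file name).
img = cv2.imread("mpv3d_example/cloth/cloth_front.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Otsu's method picks the threshold automatically; THRESH_BINARY_INV makes
# the garment white (255) and the light background black (0).
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite("mpv3d_example/cloth-mask/cloth_front_mask.jpg", mask)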
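For step 4, OpenPose's BODY_25 model writes one JSON file per image with the 25 joints flattened into an [x, y, confidence] list; the sketch below (with a hypothetical file name) shows how to read it back:

import json

with open("mpv3d_example/pose/person_keypoints.json") as f:
    pose = json.load(f)
# 25 keypoints * 3 values (x, y, confidence) = 75 floats for the first person.
kps = pose["people"][0]["pose_keypoints_2d"]
joints = [(kps[i], kps[i + 1], kps[i + 2]) for i in range(0, len(kps), 3)]
assert len(joints) == 25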

Running Inference

We provide demo inputs under the mpv3d_example folder, where the target clothing and the reference person look like this:

Demo inputs

With inputs from the mpv3d_example folder, the easiest way to get started is to use the pretrained models and run the four steps below in sequence:

1. Testing MTM Module

python test.py --model MTM --name MTM --dataroot mpv3d_example --datalist test_pairs --results_dir results

2. Testing DRM Module

python test.py --model DRM --name DRM --dataroot mpv3d_example --datalist test_pairs --results_dir results

3. Testing TFM Module

python test.py --model TFM --name TFM --dataroot mpv3d_example --datalist test_pairs --results_dir results

4. Getting colored point cloud and Remeshing

(Note: since the back-side person images are unavailable, rgbd2pcd.py provides a fast face inpainting function that produces a mirrored back-side image, after a fashion. You may need to manually inpaint other back-side texture areas to achieve better visual quality.)

python rgbd2pcd.py
Now you should get the point cloud file prepared for remeshing under results/aligned/pcd/test_pairs/*.ply. MeshLab can be used to remesh the predicted point cloud, with two simple steps below:

  • Normal Estimation: Open MeshLab and load the point cloud file, and then go to Filters --> Normals, Curvatures and Orientation --> Compute normals for point sets

  • Poisson Remeshing: Go to Filters --> Remeshing, Simplification and Reconstruction --> Surface Reconstruction: Screened Poisson (set reconstruction depth = 9)
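If you prefer a scriptable alternative to MeshLab, the same two steps can be approximated with Open3D (a sketch, not part of the official pipeline; requires pip install open3d, and the .ply file name below is only an example):

import open3d as o3d

# Load one of the predicted point clouds (hypothetical file name).
pcd = o3d.io.read_point_cloud("results/aligned/pcd/test_pairs/example.ply")
# Step 1: estimate per-point normals and orient them consistently
# (the search radius depends on the point cloud's scale).
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
pcd.orient_normals_consistent_tangent_plane(30)
# Step 2: Screened Poisson surface reconstruction (depth = 9, as above).
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("results/aligned/pcd/test_pairs/example_mesh.ply", mesh)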

Now the final 3D try-on result should be obtained:

Try-on Result

Training on MPV3D Dataset

With the pre-processed MPV3D dataset, you can train the model from scratch by following the three steps below:

1. Train MTM module

python train.py --model MTM --name MTM --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/for/saving/model

then run the command below to generate the warped results (saved under --results_dir), which serve as the --warproot input required by the other two modules:

python test.py --model MTM --name MTM --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/to/saved/MTMmodel --results_dir path/for/saving/MTM/results

2. Train DRM module

python train.py --model DRM --name DRM --dataroot path/to/MPV3D/data --warproot path/to/MTM/warp/cloth --datalist train_pairs --checkpoints_dir path/for/saving/model

3. Train TFM module

python train.py --model TFM --name TFM --dataroot path/to/MPV3D/data --warproot path/to/MTM/warp/cloth --datalist train_pairs --checkpoints_dir path/for/saving/model

(See options/base_options.py and options/train_options.py for more training options.)


License

The use of this code and the MPV3D dataset is RESTRICTED to non-commercial research and educational purposes.


Citation

If our code is helpful to your research, please cite:

@InProceedings{Zhao_2021_ICCV,
    author    = {Zhao, Fuwei and Xie, Zhenyu and Kampffmeyer, Michael and Dong, Haoye and Han, Songfang and Zheng, Tianxiang and Zhang, Tao and Liang, Xiaodan},
    title     = {M3D-VTON: A Monocular-to-3D Virtual Try-On Network},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {13239-13249}
}

