Bags pose a unique challenge: they introduce self-occlusion and cause unconventional body postures when clothing items are tried on. Our model currently has limited ability to handle such cases. Note that self-occlusion is a challenging problem that remains unresolved in the virtual try-on task; addressing it and improving our model's performance in such scenarios is left for future work. (Left: human; right: unsuccessful results.)
Please download the MPV3D Dataset and run the following script to preprocess the data:
python util/data_preprocessing.py --MPV3D_root path/to/MPV3D/dataset
If you want to process your own data, please refer to this to preprocess it and place it in the corresponding folder.
We provide demo inputs under the `mpv3d_example` folder.
With inputs from the `mpv3d_example` folder, the easiest way to get started is to use the pretrained models and sequentially run the four steps below:
python test.py --model SGN --name SGN --dataroot path/to/data --datalist test_pairs --results_dir results
python test.py --model GLA --name GLA --dataroot path/to/data --datalist test_pairs --results_dir results
python test.py --model P --name P --dataroot path/to/data --warproot path/to/warp --datalist test_pairs --results_dir results
cd DM
python run.py -p test -c config/inpainting_MPV.json
python test.py --model RDG --name RDG --dataroot path/to/data --warproot path/to/warp --datalist test_pairs --results_dir results
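The inference commands above can also be chained from a small driver script. The sketch below only rebuilds the command lines shown in this README; the concrete paths (`DATAROOT`, `WARPROOT`, `RESULTS`) are placeholders you must point at your own data, and the actual execution line is left commented out (note the `DM` step must be run from the `DM/` subdirectory):

```python
# Sketch: assembling the inference commands from this README in order.
# Paths below are placeholders, not values shipped with the repository.
import subprocess
from pathlib import Path

DATAROOT = Path("path/to/data")   # pre-processed MPV3D inputs
WARPROOT = Path("path/to/warp")   # warped outputs from the earlier steps
RESULTS = Path("results")

def make_commands(dataroot: Path, warproot: Path, results: Path) -> list:
    """Return the test commands in the order they must run."""
    common = ["--dataroot", str(dataroot), "--datalist", "test_pairs",
              "--results_dir", str(results)]
    return [
        ["python", "test.py", "--model", "SGN", "--name", "SGN", *common],
        ["python", "test.py", "--model", "GLA", "--name", "GLA", *common],
        ["python", "test.py", "--model", "P", "--name", "P",
         "--warproot", str(warproot), *common],
        # this step is run from inside the DM/ subdirectory:
        ["python", "run.py", "-p", "test", "-c", "config/inpainting_MPV.json"],
        ["python", "test.py", "--model", "RDG", "--name", "RDG",
         "--warproot", str(warproot), *common],
    ]

for cmd in make_commands(DATAROOT, WARPROOT, RESULTS):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually execute
```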
(Note: since back-side person images are unavailable, `rgbd2pcd.py` provides a fast face inpainting function that produces a mirrored back-side image, after a fashion. You may need to manually inpaint other back-side texture areas to achieve better visual quality.)
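The "mirrored back-side image" idea in the note above amounts to reflecting the front view to stand in for the unseen back. A minimal illustration of the mirroring step, independent of the repository's actual inpainting code:

```python
def mirror_back(front):
    """Approximate a back-side texture by horizontally flipping the
    front image (a nested list of pixel rows); a crude stand-in for
    the unobserved back view."""
    return [list(reversed(row)) for row in front]

front = [[1, 2, 3],
         [4, 5, 6]]
back = mirror_back(front)   # → [[3, 2, 1], [6, 5, 4]]
```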
python rgbd2pcd.py
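At its core, `rgbd2pcd.py` back-projects the aligned RGB-D prediction into a 3D point cloud through a pinhole camera model. The sketch below shows only that back-projection step; the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative placeholders, not the values the repository actually uses:

```python
# Minimal sketch of RGB-D -> point cloud back-projection.
# Intrinsics here are made-up placeholders for illustration.
def backproject(depth, fx, fy, cx, cy):
    """depth: 2D list of depth values. Returns a list of (x, y, z) points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:                # skip holes / missing depth
                continue
            x = (u - cx) * z / fx     # pinhole model: X = (u - cx) * Z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Example: a 2x2 depth map with one missing pixel
pts = backproject([[1.0, 0.0], [1.0, 2.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```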
Now you should have the point cloud file prepared for remeshing under `results/aligned/pcd/test_pairs/*.ply`. MeshLab can be used to remesh the predicted point cloud with the two simple steps below:
- Normal Estimation: open MeshLab, load the point cloud file, and then go to Filters --> Normals, Curvatures and Orientation --> Compute normals for point sets
- Poisson Remeshing: go to Filters --> Remeshing, Simplification and Reconstruction --> Surface Reconstruction: Screened Poisson (set reconstruction depth = 9)
With the pre-processed MPV3D dataset, you can train the model from scratch by following the three steps below:
python train.py --model SGN --name SGN --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/for/saving/model
then run the command below to obtain the `--warproot` (here it refers to the `--results_dir`), which is necessary for the other two modules:
python test.py --model SGN --name SGN --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/to/saved/MTMmodel --results_dir path/for/saving/MTM/results
python train.py --model GLA --name GLA --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/for/saving/model
then run the command below to obtain the `--warproot` (here it refers to the `--results_dir`), which is necessary for the other two modules:
python test.py --model GLA --name GLA --dataroot path/to/MPV3D/data --datalist train_pairs --checkpoints_dir path/to/saved/MTMmodel --results_dir path/for/saving/MTM/results
python train.py --model P --name P --dataroot path/to/MPV3D/data --warproot path/to/warp --datalist train_pairs --checkpoints_dir path/for/saving/model
cd DM
python run.py -p train -c config/inpainting_MPV.json
python train.py --model RDG --name RDG --dataroot path/to/MPV3D/data --warproot path/to/warp --datalist train_pairs --checkpoints_dir path/for/saving/model
(See options/base_options.py and options/train_options.py for more training options.)
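The training flags used throughout this README are defined in `options/base_options.py` and `options/train_options.py`. As a hedged sketch of how such argparse-based option files are commonly structured (the defaults below are illustrative guesses, not the repository's actual values; consult the option files for the real flags):

```python
import argparse

# Illustrative sketch of the argparse pattern behind options/*.py.
# Flag names mirror the commands in this README; defaults are placeholders.
def build_parser():
    parser = argparse.ArgumentParser()
    parser.add_argument("--model", required=True, help="SGN | GLA | P | RDG")
    parser.add_argument("--name", required=True, help="experiment name")
    parser.add_argument("--dataroot", required=True, help="dataset root")
    parser.add_argument("--warproot", default="", help="warped results dir")
    parser.add_argument("--datalist", default="train_pairs")
    parser.add_argument("--checkpoints_dir", default="./checkpoints")
    return parser

opts = build_parser().parse_args(
    ["--model", "SGN", "--name", "SGN", "--dataroot", "data/MPV3D"])
```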
The use of this code is RESTRICTED to non-commercial research and educational purposes.