[CPVTON] Bochao Wang, Huabin Zheng, Xiaodan Liang, Yimin Chen, Liang Lin, and Meng Yang. Toward characteristic-preserving image-based virtual try-on network. In European Conference on Computer Vision (ECCV), 2018.
[GFLA] Yurui Ren, Ge Li, Shan Liu, and Thomas H. Li. Deep spatial transformation for pose-guided person image generation and animation. IEEE Transactions on Image Processing (TIP), 2020.
[ACGPN] Han Yang, Ruimao Zhang, Xiaobao Guo, Wei Liu, Wangmeng Zuo, and Ping Luo. Towards photo-realistic virtual try-on by adaptively generating↔preserving image content. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[FashionOn] Chia-Wei Hsieh, Chieh-Yun Chen, Chien-Lung Chou, Hong-Han Shuai, Jiaying Liu, and Wen-Huang Cheng. FashionOn: Semantic-guided image-based virtual try-on with detailed human and clothing information. In Proceedings of the 27th ACM International Conference on Multimedia (ACM MM), 2019.
[VTNCAP] Na Zheng, Xuemeng Song, Zhaozheng Chen, Linmei Hu, Da Cao, and Liqiang Nie. Virtually trying on new clothing with arbitrary poses. In Proceedings of the 27th ACM International Conference on Multimedia (ACM MM), 2019.
[FWGAN] Haoye Dong, Xiaodan Liang, Xiaohui Shen, Bowen Wu, Bing-Cheng Chen, and Jian Yin. FW-GAN: Flow-navigated warping GAN for video virtual try-on. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[FashionMirror] Ours
We compare our try-on results with the most relevant previous work, FW-GAN. The following cases show that our proposed model, FashionMirror, synthesizes sequential try-on results that correctly change the clothing type and preserve more detailed information (e.g., human faces and clothing colors) than FW-GAN does.