Official code for the paper:
Multi-View Consistent 3D GAN Inversion via Bidirectional Encoder
Haozhan Wu, Hu Han, Shiguang Shan, and Xilin Chen
The 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2024)
The environment used in this paper is listed in env.txt.
We strongly recommend building the EG3D environment first, and then building this paper's environment on top of it.
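A minimal sketch of the setup described above. The EG3D environment file path is taken from the official EG3D repository, and env.txt is assumed to be a pip requirements-style listing; adjust both to your checkout:

```shell
# Assumption: the official EG3D repo is cloned alongside this one and
# ships its conda spec at eg3d/environment.yml.
conda env create -f eg3d/environment.yml
conda activate eg3d

# Layer this paper's extra dependencies on top of the EG3D environment
# (env.txt assumed to be pip-installable).
pip install -r env.txt
```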
Train: /Bidirectional_Encoder/scripts/train_hybrid.py
Test Single-View Reconstruction: /Bidirectional_Encoder/scripts/test_psp20_encode_loop.py
Test Multi-View Consistency: /Bidirectional_Encoder/scripts/test_psp20_3D_multi_loop.py
Sketch Synthesis Algorithm: /Bidirectional_Encoder/gen_sketch (please run the scripts in order, from step 1 to step 5)
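The five stages can be driven by a small loop. This is only a hypothetical driver; the per-step script names below are assumptions, so adjust them to the actual files in gen_sketch/:

```shell
# Hypothetical driver for the gen_sketch pipeline: run stages 1..5 in order.
# The step-script naming (gen_sketch/step${step}.py) is an assumption.
for step in 1 2 3 4 5; do
    echo "gen_sketch: running step ${step}"
    # python gen_sketch/step${step}.py   # real invocation, name assumed
done
```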
Synthesized Sketch Dataset: Sketch-BiDiE (decompression password: BiDiE-FG2024)
We have only uploaded the 512x512 EG3D-cropped version, but you can synthesize the sketches yourself using our open-source algorithm.
This code borrows from EG3D, pSp, e4e, and others.
We also thank these open-source projects: U2-Net, Facer, Deep3DFaceRecon_pytorch, EG3D-projector, and others.
If you use the Bidirectional Encoder, the BiDiE Sketch Dataset, or the BiDiE sketch synthesis algorithm in your research, please cite our paper:
@inproceedings{wu_BiDiE_FG2024,
  title={Multi-View Consistent 3D GAN Inversion via Bidirectional Encoder},
  author={Wu, Haozhan and Han, Hu and Shan, Shiguang and Chen, Xilin},
  booktitle={Proceedings of the 18th IEEE International Conference on Automatic Face and Gesture Recognition (FG)},
  pages={???--???},
  year={2024},
  organization={IEEE}
}