
📖 The Face Depth Network of "Depth-Aware Generative Adversarial Network for Talking Head Video Generation" (CVPR 2022)

🔥 If DaGAN is helpful in your photos/projects, please ⭐ this repo or recommend it to your friends. Thanks 🔥

[Paper]   [Project Page]   [Demo]   [Poster Video]

Fa-Ting Hong, Longhao Zhang, Li Shen, Dan Xu
The Hong Kong University of Science and Technology

Cartoon Sample

cartoon.mp4

Human Sample

celeb.mp4

Image Dataset

🔧 Dependencies and Installation

⚙️ Setup

  1. Clone repo

    git clone https://github.com/harlanhong/Face-Depth-Network.git
    cd Face-Depth-Network
  2. Install dependent packages

    conda install pytorch=0.4.1 torchvision=0.2.1 -c pytorch
    pip install tensorboardX==1.4
    conda install opencv=3.3.1   # just needed for evaluation

    Alternatively, you can use the conda environment of DaGAN directly. A quick sanity check of the installation is sketched below.
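
As an optional sanity check of the environment, the short Python snippet below (a minimal sketch) prints the installed package versions. The expected versions in the comments are the ones from the install commands above; they will differ if you reuse the DaGAN environment instead.

    # Optional sanity check for the environment set up above.
    import torch
    import torchvision
    import cv2
    import tensorboardX  # only checking that the import works

    print("torch:", torch.__version__)              # expected 0.4.1
    print("torchvision:", torchvision.__version__)  # expected 0.2.1
    print("opencv:", cv2.__version__)               # expected 3.3.1 (evaluation only)
    print("CUDA available:", torch.cuda.is_available())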

⚡ Quick Inference

Pre-trained checkpoint

The pre-trained checkpoints of the face depth network and our DaGAN can be found at the following link: OneDrive.

Inference! To run a demo, download a checkpoint and run the following command to predict the scaled disparity for a single image:

python test_simple.py --image_path assets/test_image.jpg --model_name tmp/Your_Model/models/weights_19
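
Once the command finishes, you can inspect the prediction in Python. The snippet below is a minimal sketch that assumes test_simple.py follows Monodepth2 and writes a NumPy disparity map next to the input image (e.g. assets/test_image_disp.npy); check the script's output for the actual path. matplotlib is used here only for visualization and is not in the dependency list above.

    # Minimal sketch for inspecting the saved prediction (filename is an assumption).
    import numpy as np
    import matplotlib.pyplot as plt

    disp = np.load("assets/test_image_disp.npy")  # assumed output of test_simple.py
    disp = np.squeeze(disp)                       # drop batch/channel dims -> (H, W)

    plt.imshow(disp, cmap="magma")
    plt.title("Predicted scaled disparity")
    plt.axis("off")
    plt.savefig("assets/test_image_disp_vis.png", bbox_inches="tight")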

⏳ Training

Datasets

  1. Splits. The train/test/validation splits are uploaded to OneDrive.

Train on VoxCeleb

To train a model on a specific dataset, run:

CUDA_VISIBLE_DEVICES=0 python train.py --batch_size 32 --height 256 --width 256 --dataset vox --sample_num 100000 --model_name taking_head_10w --data_path vox2

Training on your own dataset

You can train on a custom monocular or stereo dataset by writing a new dataloader class that inherits from MonoDataset; see the CELEBDataset class in datasets/celeb_dataset.py for an example. A minimal sketch of such a class is given below.
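
The sketch below is modeled on the MonoDataset subclasses in Monodepth2, from which this code base is borrowed. The class name MyFaceDataset, the module path datasets.mono_dataset, the file layout, and the intrinsics values are assumptions for illustration; refer to datasets/celeb_dataset.py for the exact conventions used in this repo.

    # Minimal sketch of a custom dataset class (Monodepth2-style MonoDataset subclass).
    import os
    import numpy as np
    import PIL.Image as pil

    from datasets.mono_dataset import MonoDataset  # assumed module path, as in Monodepth2


    class MyFaceDataset(MonoDataset):
        """Loads monocular frames stored as <data_path>/<folder>/<frame_index>.jpg."""

        def __init__(self, *args, **kwargs):
            super(MyFaceDataset, self).__init__(*args, **kwargs)
            # Normalized camera intrinsics (placeholder values).
            self.K = np.array([[1.0, 0.0, 0.5, 0.0],
                               [0.0, 1.0, 0.5, 0.0],
                               [0.0, 0.0, 1.0, 0.0],
                               [0.0, 0.0, 0.0, 1.0]], dtype=np.float32)
            self.full_res_shape = (256, 256)

        def check_depth(self):
            # No ground-truth depth is available for monocular face videos.
            return False

        def get_color(self, folder, frame_index, side, do_flip):
            path = os.path.join(self.data_path, folder, "{:06d}.jpg".format(frame_index))
            color = self.loader(path)  # self.loader is provided by MonoDataset
            if do_flip:
                color = color.transpose(pil.FLIP_LEFT_RIGHT)
            return color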

📜 Acknowledgement

Our Face-Depth-Network implementation is borrowed from Monodepth2. We thank the authors of Monodepth2 for making their code publicly available.

📜 BibTeX

@inproceedings{hong2022depth,
  title={Depth-Aware Generative Adversarial Network for Talking Head Video Generation},
  author={Hong, Fa-Ting and Zhang, Longhao and Shen, Li and Xu, Dan},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

📧 Contact

If you have any questions, please email fhongac@cse.ust.hk.

