Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors.

Qingzhe Gao*, Yiming Wang*, Libin Liu†, Lingjie Liu†, Christian Theobalt, Baoquan Chen†
TVCG 2023

Updates

  • [09/06/2023] Released the official test code and pretrained checkpoints!

Installation

First clone this repository and all its submodules using the following command:

git clone --recursive https://github.com/Talegqz/neural_novel_actor
cd neural_novel_actor

Then install dependencies with conda and pip:

conda create -n nna python=3.8
conda activate nna

pip install -r requirements.txt

python setup.py build_ext --inplace

pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl
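
After installation, an optional sanity check (a minimal sketch, assuming a CUDA-capable GPU and the standard knn_cuda import exposed by the KNN_CUDA wheel) is to verify that the extension can be imported:

python -c "from knn_cuda import KNN; print('KNN_CUDA import OK')"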

Dataset

We provide a script, tools/dataset_from_zju_mocap.py, that converts the ZJU-MoCap dataset to our data convention.
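
The conversion might be invoked roughly as sketched below; the flag names (--zju_path, --output_path) and the subject folder are hypothetical, so check the script's argument parser for the actual interface:

# Hypothetical flags -- see tools/dataset_from_zju_mocap.py for the real argument names.
python tools/dataset_from_zju_mocap.py \
    --zju_path /path/to/zju_mocap/CoreView_313 \
    --output_path <dataset_path>/0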

Test

First download the pretrained checkpoints from Google Drive and put them in the save folder. You can then generate pose-driven results with the following command:

bash generate.sh

Prepare your own data

To test the model on your own data, please organize your dataset in the following structure:

<dataset_path>/0          # character id
|-- intrinsic             # camera intrinsics for each camera, fixed across all frames
    |-- 0000.txt
    |-- 0001.txt
    ...
|-- extrinsic                  # camera extrinsics for each camera, fixed across all frames
    |-- 0000.txt
    |-- 0001.txt
    ...
|-- smpl_transform             # JSON files defining the target pose transformations (produced by EasyMocap)
    |-- 000000.json       
    |-- 000001.json  
    ...
|-- rgb_bg                   # ground-truth RGB image for each frame and each camera
    |-- 000000            # frame id
        |-- 0000.png
        |-- 0001.png
        ...
    |-- 000001            # frame id
        |-- 0000.png
        |-- 0001.png
        ...
    ...     
|-- mask                   # ground-truth mask image for each frame and each camera
    |-- 000000            # frame id
        |-- 0000.png
        |-- 0001.png
        ...
    |-- 000001            # frame id
        |-- 0000.png
        |-- 0001.png
        ...
    ...     
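
The following is a minimal sketch (not part of this repository) for sanity-checking a character folder against the layout above. It assumes the intrinsic/extrinsic .txt files hold plain whitespace-separated matrices readable by numpy.loadtxt and that each frame folder under rgb_bg/ and mask/ contains one PNG per camera; adjust it to your actual file formats.

# Sanity-check one character folder against the expected layout (sketch, assumptions noted above).
import os
import glob
import numpy as np

def check_character_dir(char_dir):
    # Cameras are enumerated by the intrinsic files (0000.txt, 0001.txt, ...).
    intrinsics = sorted(glob.glob(os.path.join(char_dir, "intrinsic", "*.txt")))
    extrinsics = sorted(glob.glob(os.path.join(char_dir, "extrinsic", "*.txt")))
    assert len(intrinsics) == len(extrinsics), "intrinsic/extrinsic count mismatch"

    for k_path, e_path in zip(intrinsics, extrinsics):
        K = np.loadtxt(k_path)   # assumed: 3x3 camera intrinsics matrix
        E = np.loadtxt(e_path)   # assumed: 3x4 or 4x4 camera extrinsics matrix
        print(os.path.basename(k_path), "K", K.shape, "E", E.shape)

    # Frames are enumerated by the smpl_transform JSON files (000000.json, ...).
    frames = sorted(glob.glob(os.path.join(char_dir, "smpl_transform", "*.json")))
    n_cams = len(intrinsics)
    for f_path in frames:
        frame_id = os.path.splitext(os.path.basename(f_path))[0]
        for sub in ("rgb_bg", "mask"):
            imgs = glob.glob(os.path.join(char_dir, sub, frame_id, "*.png"))
            assert len(imgs) == n_cams, f"{sub}/{frame_id}: expected {n_cams} images, found {len(imgs)}"
    print(f"checked {len(frames)} frames x {n_cams} cameras")

check_character_dir("<dataset_path>/0")  # replace with your character folder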

Citation

@article{gao2023neural,
  title={Neural novel actor: Learning a generalized animatable neural representation for human actors},
  author={Gao, Qingzhe and Wang, Yiming and Liu, Libin and Liu, Lingjie and Theobalt, Christian and Chen, Baoquan},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2023},
  publisher={IEEE}
}
