This is the official code of "EvHandPose: Event-based 3D Hand Pose Estimation with Sparse Supervision".
[📃Project Page] [Data] [Paper] [Arxiv] [Model] [🎥Video]
## Installation

Clone this repository and create the environment:

```shell
conda env create -f environment.yml
conda activate EvHand_public
```
## Dataset and MANO Models

- Our real-world dataset is from EvRealHands. Please download our dataset to your disk. We use `$data$` to represent the absolute path to our dataset.
- Please download MANO models from MANO. Put `MANO_LEFT.pkl` and `MANO_RIGHT.pkl` into `data/smplx_model/mano`.
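As a quick sanity check before training, you can verify that the MANO files are in place. This is a hypothetical helper, not part of this repository; it only uses the paths stated above.

```python
import os

def check_mano(mano_dir):
    """Return the list of required MANO model files missing from mano_dir."""
    required = ("MANO_LEFT.pkl", "MANO_RIGHT.pkl")
    return [f for f in required if not os.path.isfile(os.path.join(mano_dir, f))]

# Path from the README; adjust if you placed the models elsewhere.
missing = check_mano("data/smplx_model/mano")
if missing:
    print("Missing MANO files:", missing)
```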
## Training

Our training process consists of three steps. First, we train FlowNet under supervision with the flow loss, and fix its parameters in the following steps. Then, we train our model under full supervision to obtain a reasonably good parameter initialization. Finally, we train our model with both labeled and unlabeled data under semi-supervision.

We introduce the training process below. Please refer to our paper for more details about our training strategy.
### Step 1: FlowNet

First, set the values in `configs/train_flow.yml` as follows:

- Set `exper.output_dir` to your output flow model path. We use `$output_flow_model_path$` to represent the output path of our flow model.
- Set `data.smplx_path` to your MANO model path.
- Set `data.data_dir` to `$data$`.
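If you prefer to script these edits, the dotted keys above (e.g. `exper.output_dir`) map to nested YAML entries. Below is a minimal sketch of that mapping with a hypothetical helper; the placeholder paths are examples, and writing the result back to the config file (e.g. with PyYAML) is left to you.

```python
def set_by_dotted_key(cfg, dotted, value):
    """Set a value in a nested dict using a dotted key like 'exper.output_dir'."""
    node = cfg
    *parents, leaf = dotted.split(".")
    for key in parents:
        node = node.setdefault(key, {})
    node[leaf] = value
    return cfg

# The keys the README asks you to edit (paths are placeholders).
cfg = {}
set_by_dotted_key(cfg, "exper.output_dir", "/path/to/output_flow_model")
set_by_dotted_key(cfg, "data.smplx_path", "data/smplx_model/mano")
set_by_dotted_key(cfg, "data.data_dir", "/path/to/EvRealHands")
```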
Second, run the following script:

```shell
cd ./scripts
python train.py --config ../configs/train_flow.yml --gpus 1
```

We train for 20 epochs in our experiments.
### Step 2: Fully-supervised training

First, set the values in `configs/train_supervision.yml` as follows:

- Set `exper.output_dir` to your output path. We use `$output_fully_supervision_model_path$` to represent the output path of our fully-supervised model.
- Set `data.smplx_path` to your MANO model path.
- Set `data.data_dir` to `$data$`.
Second, run the following script:

```shell
cd ./scripts
python train.py --config ../configs/train_supervision.yml --gpus 1 --flow_model_path $output_flow_model_path$/last.ckpt
```

We train for 40 epochs in our experiments.
### Step 3: Semi-supervised training

We use `$output_semi_supervision_model_path$` to represent the output path of our semi-supervised model. Run the following script:

```shell
cd ./scripts
python train.py --config ../configs/train_supervision.yml --gpus 1 --model_path $output_fully_supervision_model_path$/last.ckpt --config_merge ../configs/train_semi.yml
```

We train for 40 epochs in our experiments.
## Evaluation

Set the values in `configs/eval.yml` as follows:

- Set `exper.data_dir` to `$data$`.
- For inference, we provide quantitative results of MPJPE and PA-MPJPE in `scripts/eval.py`.
- You can also get the predicted mesh, 3D joints, images with warped events, etc. by setting `log.save_result=True` in `configs/eval.yml`.
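For reference, MPJPE is the mean Euclidean distance between predicted and ground-truth joints, and PA-MPJPE is the same error after Procrustes alignment (optimal similarity transform). The sketch below illustrates the standard definitions with NumPy; it is not the exact code in `scripts/eval.py`.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between (J, 3) joint arrays."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after aligning pred to gt with an optimal similarity transform."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```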
Then, run the following script:

```shell
cd ./scripts
python eval.py --config_train $output_semi_supervision_model_path$/train.yml --config_test ../configs/eval.yml --gpu 1 --model_path $output_semi_supervision_model_path$/last.ckpt
```

The output results will be saved in `$output_semi_supervision_model_path$/test`.
## Pretrained Model

If you want to run inference directly without training, we also provide a pretrained model in this link. To run inference directly, set `data.data_dir`, `data.smplx_path`, and `exper.output_dir` in `configs/pretrain.yml` to your own setting. Then, run the following script:

```shell
cd ./scripts
python eval.py --config_train ../configs/pretrain.yml --config_test ../configs/eval.yml --gpu 1 --model_path $Your_Pretrained_Model_Path$
```

Then you can directly view the results.
## Citation

```
@article{jiang2024evhandpose,
  author  = {Jiang, Jianping and Li, Jiahe and Zhang, Baowen and Deng, Xiaoming and Shi, Boxin},
  title   = {EvHandPose: Event-based 3D Hand Pose Estimation with Sparse Supervision},
  journal = {TPAMI},
  year    = {2024},
}
```
```
@inproceedings{Jiang2024EvRGBHand,
  title     = {Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction},
  author    = {Jiang, Jianping and Zhou, Xinyu and Wang, Bingxuan and Deng, Xiaoming and Xu, Chao and Shi, Boxin},
  booktitle = {CVPR},
  year      = {2024}
}
```

## Acknowledgments

In our experiments, we use the official code of EventHands, MobRecon, EV-Transfer, and NGA for comparison.
