# MOHO: Learning Single-view Hand-held Object Reconstruction with Multi-view Occlusion-Aware Supervision (CVPR 2024)

Chenyangguang Zhang, Guanlong Jiao, Yan Di, Gu Wang, Ziqin Huang, Ruida Zhang, Fabian Manhardt, Bowen Fu, Federico Tombari, Xiangyang Ji
## Installation

```bash
conda activate moho
pip install -r requirements.txt
```
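Note that the `moho` conda environment must exist before activation; creating it might look like this (a minimal sketch; the Python version is an assumption, not taken from the repo):

```bash
# Hypothetical environment creation; adjust the Python version to the repo's requirements
conda create -n moho python=3.8 -y
```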
## Data Preparation

To keep the training and testing splits consistent with IHOI (https://github.com/JudyYe/ihoi), we use their cache file (https://drive.google.com/drive/folders/1v6Pw6vrOGIg6HUEHMVhAQsn-JLBWSHWu?usp=sharing). Unzip it and put it under the `cache/` folder.

The split we use for DexYCB follows the released code of gSDF (https://github.com/zerchen/gSDF); download it from (https://drive.google.com/drive/folders/1qULhMx1PrnXkihrPacIFzLOT5H2FZSj7) and put it in the `cache/` folder as `cache/dexycb_test_s0.json` and `cache/dexycb_train_s0.json`.

Moreover, the `xxx_view_test.txt` files in the `cache/` folder are for the evaluation of novel view synthesis.
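After these steps, `cache/` should look roughly like the following (a sketch; apart from the two DexYCB JSON files, the exact file names come with the downloaded archives):

```
cache/
├── dexycb_train_s0.json
├── dexycb_test_s0.json
├── xxx_view_test.txt   # per-dataset view lists for novel view synthesis evaluation
└── ...                 # IHOI split caches from the unzipped cache file
```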
SOMVideo consists of `SOMVideo_ref.zip` for the reference HO images and `SOMVideo_sup` for the occlusion-free supervision. `SOMVideo_ref.zip` can be downloaded from (https://mailstsinghuaeducn-my.sharepoint.com/:f:/g/personal/zcyg22_mails_tsinghua_edu_cn/Etb0op97f0lOjYLu58ZM7_wBSfu2v0GRo6OKqAaMwzeztg?e=KbVTR4). For `SOMVideo_sup`, `test.tar.gz` and `val.tar.gz` can be found at the link above under the `SOMVideo_sup` folder, while `train.tar.gz` is uploaded at (https://mailstsinghuaeducn-my.sharepoint.com/:f:/g/personal/jgl22_mails_tsinghua_edu_cn/EkWaMicLv05JsxNswWgSYgIBIoArG9BcA2NOkiAs1SKhsA?e=bCHnt6). Once all three files are downloaded, put them into the same `SOMVideo_sup` folder.
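One possible way to assemble SOMVideo (a sketch; the target directory names are placeholders, and whether the archives should stay compressed or be extracted in place should follow your config paths):

```bash
# Reference HO images
unzip SOMVideo_ref.zip -d SOMVideo_ref/

# Collect the three supervision archives into one folder and unpack them
mkdir -p SOMVideo_sup
mv train.tar.gz val.tar.gz test.tar.gz SOMVideo_sup/
for f in SOMVideo_sup/*.tar.gz; do tar -xzf "$f" -C SOMVideo_sup/; done
```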
HO3D is downloaded from (https://www.tugraz.at/index.php?id=40231); we use HO3D(v2). DexYCB is downloaded from (https://dex-ycb.github.io/).

`externals/mano` contains `MANO_LEFT.pkl` and `MANO_RIGHT.pkl`; get them from (https://mano.is.tue.mpg.de/).
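The MANO models should end up laid out as:

```
externals/
└── mano/
    ├── MANO_LEFT.pkl
    └── MANO_RIGHT.pkl
```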
We use PCA maps generated from DINO as generic semantic cues for MOHO. These data are also released at our link (https://mailstsinghuaeducn-my.sharepoint.com/:f:/g/personal/zcyg22_mails_tsinghua_edu_cn/Etb0op97f0lOjYLu58ZM7_wBSfu2v0GRo6OKqAaMwzeztg?e=KbVTR4) as `dino_pca.tar.gz`. Unzip this file and put its contents into the corresponding folders of HO3D and DexYCB.

2D hand coverage maps are also released at the link above (`dexycb_seg.zip` and `ho3d_seg.zip`); they provide the amodal-mask-weighted supervision during real-world finetuning.
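For example (a sketch; the destination placeholders are assumptions, and the exact target folders should match the `seg_dir` paths in your configs):

```bash
# DINO PCA maps: unpack, then place the per-dataset parts under the HO3D and DexYCB folders
tar -xzf dino_pca.tar.gz

# 2D hand coverage maps for amodal-mask-weighted supervision during finetuning
unzip ho3d_seg.zip -d PATH_TO_HO3D_SEG
unzip dexycb_seg.zip -d PATH_TO_DEXYCB_SEG
```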
In all config files in the `confs/` folder, please make sure `data_dir`, `ref_dir`, `cache_dir` and `seg_dir` are set correctly. `data_dir` is the path of the supervision images; it is the same as `ref_dir` (the path of the reference images) for real-world finetuning on HO3D and DexYCB, but differs on SOMVideo. `seg_dir` is only used during real-world finetuning, for the amodal-mask-weighted supervision.
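A sketch of the relevant entries (the enclosing block name and the placeholder paths are assumptions; the exact syntax follows the existing files in `confs/`):

```
dataset {
    data_dir  = ./data/SOMVideo_sup/   # supervision images
    ref_dir   = ./data/SOMVideo_ref/   # reference images (same as data_dir when finetuning on HO3D/DexYCB)
    cache_dir = ./cache/               # split files and view lists
    seg_dir   = ./data/ho3d_seg/       # 2D hand coverage maps (real-world finetuning only)
}
```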
## Pre-trained Checkpoints

For reproduction convenience, DexYCB and SOMVideo pre-trained checkpoints can be found at (https://mailstsinghuaeducn-my.sharepoint.com/:f:/g/personal/zcyg22_mails_tsinghua_edu_cn/Ev0OkTOqd31BlqEHZ37ybXcBFA7BtP_veuSUZ3y_h5Si2Q?e=omMQyY). Since HO3D is a relatively small dataset, users can easily finetune on it from the SOMVideo pre-trained checkpoint with low time consumption.
## Training

Pre-training on SOMVideo:

```bash
CUDA_VISIBLE_DEVICES=0,1,2 python exp_runner_ho_dp_hand.py --mode train --conf confs/moo_wmask_dp_hand.conf --case pre_training --gpu_num 3
```
For real-world finetuning, first copy the pre-trained checkpoint to the experiment directory of the finetuning run and rename it to `ckpt_000000.pth`.
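For instance (a sketch; the experiment directory layout is an assumption derived from the `--case` names, so check where your runs actually write their checkpoints):

```bash
# Hypothetical paths; adjust to your actual experiment directories
cp exp/pre_training/checkpoints/ckpt_XXXXXX.pth exp/finetuning/checkpoints/ckpt_000000.pth
```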
Finetuning on HO3D:

```bash
CUDA_VISIBLE_DEVICES=0,1,2 python exp_runner_ho_dp_hand.py --mode train --conf confs/ho3d_dino_hand.conf --case finetuning --gpu_num 3 --is_continue
```

Finetuning on DexYCB:

```bash
CUDA_VISIBLE_DEVICES=0,1,2 python exp_runner_ho_dp_hand.py --mode train --conf confs/dexycb_dino_hand.conf --case finetuning --gpu_num 3 --is_continue
```
## Inference

For mesh inference:

```bash
CUDA_VISIBLE_DEVICES=0 python exp_runner_ho_dp_hand.py --mode test_mesh --conf confs/ho3d_dino_hand_test.conf --case finetuning --gpu_num 1 --is_continue
CUDA_VISIBLE_DEVICES=0 python exp_runner_ho_dp_hand.py --mode test_mesh --conf confs/dexycb_dino_hand_test.conf --case finetuning --gpu_num 1 --is_continue
```

For novel view synthesis inference:

```bash
CUDA_VISIBLE_DEVICES=0 python exp_runner_ho_dp_hand.py --mode test_image --conf confs/ho3d_dino_hand_test.conf --case finetuning --gpu_num 1 --is_continue
CUDA_VISIBLE_DEVICES=0 python exp_runner_ho_dp_hand.py --mode test_image --conf confs/dexycb_dino_hand_test.conf --case finetuning --gpu_num 1 --is_continue
```
## Evaluation

Users need to download `YCBmodels` from https://rse-lab.cs.washington.edu/projects/posecnn/ for mesh evaluation on HO3D and DexYCB.

For mesh evaluation:

```bash
python eval_ho3d_mesh.py --conf confs/ho3d_dino_hand_test.conf --case finetuning --shape_path PATH_TO_YCBmodels
python eval_dexycb_mesh.py --conf confs/dexycb_dino_hand_test.conf --case finetuning --shape_path PATH_TO_YCBmodels
```

For novel view synthesis evaluation:

```bash
python eval_image.py -T ho3d -P PATH_TO_THE_NOVEL_VIEW_SYNTHESIS_INFERENCE
python eval_image.py -T dexycb -P PATH_TO_THE_NOVEL_VIEW_SYNTHESIS_INFERENCE
```