
TriHorn-Net

This repository contains the PyTorch implementation of TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation, published in the Expert Systems with Applications (ESWA) journal. It includes straightforward instructions for replicating the results reported in the paper.

Fig 1. TriHorn-Net overview.

Setup

Download the repository:

makeReposit=[/the/directory/as/you/wish]
mkdir -p $makeReposit/; cd $makeReposit/
git clone https://github.com/mrezaei92/infrustructure_HPE.git

Preparing the Dataset

  1. NYU dataset

    Download and extract the dataset from the link provided below

    Copy the content of the folder data/NYU to where the dataset is located

  2. ICVL dataset

    Download the file test.pickle from here

    Download and extract the training set from the link provided below

    Navigate to the folder data/ICVL. Run the following command to get a file named train.pickle:
    python prepareICVL_train.py ICVLpath/Training
    Here, ICVLpath is the path where the training set was extracted

    Place both test.pickle and train.pickle in one folder. This folder will serve as the ICVL dataset folder (see the layout sketch after this list)

  3. MSRA dataset

    Download and extract the dataset from the link provided below (hosted by the dataset's original author). Extract P1,...,P8 directly into data/MSRA/ (do not create a separate subfolder)

    Extract data/MSRA.tar.xz and copy its content to where the dataset is located. Update: use the text files from mrezaei92#2 instead: https://drive.proton.me/urls/87MJVDWANW#GhV94ErapWsh
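
For reference, a minimal sketch of the expected ICVL dataset folder from step 2 (the folder name ICVL_dataset is a placeholder; only the two pickle files are required):

    ICVL_dataset/
    ├── train.pickle
    └── test.pickle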

Training and Evaluation

Before running an experiment, first set the value "datasetpath" in the corresponding .yaml file located in the folder configs. This value should be set to the path of the corresponding dataset. Then open a terminal and run the corresponding command.
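
For example, in the .yaml file for the chosen dataset (only the datasetpath key is prescribed above; the path below is a placeholder):

    datasetpath: /path/to/the/dataset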

Also set the environment variables ICVL_PATH, NYU_PATH, and MSRA_PATH, which specify where checkpoints are saved, via export VARNAME="my value" (see the example below).
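
For example (the paths below are placeholders):

    export NYU_PATH="/path/to/checkpoints/NYU"
    export ICVL_PATH="/path/to/checkpoints/ICVL"
    export MSRA_PATH="/path/to/checkpoints/MSRA"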

After running each command, training is performed first, and the resulting models are then evaluated on the corresponding test set.
The results are saved in a file named "results.txt".

  1. NYU

    bash train_eval_NYU.bash
  2. ICVL

    bash train_eval_ICVL.bash
  3. MSRA

    bash train_eval_MSRA.bash

Supported Datasets

This repo supports the following datasets for training and testing:

  • NYU
  • ICVL
  • MSRA

Results

The table below provides download links for the predicted labels on the ICVL, NYU, and MSRA datasets. All labels are in the (u, v, d) format, where u and v are pixel coordinates and d is the depth (see the loading sketch after the table).

Dataset Predicted Labels
ICVL Download
NYU Download
MSRA Download
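
A minimal sketch for loading one of the predicted-label files, assuming the common plain-text convention of one frame per line with consecutive (u, v, d) triplets per joint (the file name below is hypothetical; adjust it to the downloaded file):

    import numpy as np

    # Hypothetical file name; replace with the downloaded prediction file.
    pred_file = "ICVL_predictions.txt"

    # Assumes one frame per line, whitespace-separated, with consecutive
    # (u, v, d) triplets, one per joint.
    preds = np.loadtxt(pred_file)                 # shape: (num_frames, num_joints * 3)
    preds = preds.reshape(preds.shape[0], -1, 3)  # shape: (num_frames, num_joints, 3)

    u, v, d = preds[0, 0]  # first joint of the first frame
    print(u, v, d)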

Reproduce results

Changing the config can lead to slightly better results than those reported in the paper: 5.68 vs. 5.73 on ICVL and 7.05 vs. 7.13 on MSRA.

Update

This work is no longer SOTA (the long journal review process delayed acceptance to 2023); the actual SOTA (as of 2024-03-14), with source code available, is this one.

Adaptive Wing loss did not prove helpful, though (possibly due to insufficient training).

Bibtex

If you use this work in your research or projects, please cite TriHorn-Net: A Model for Accurate Depth-Based 3D Hand Pose Estimation.

@article{rezaei2023trihorn,
  title={TriHorn-Net: A model for accurate depth-based 3D hand pose estimation},
  author={Rezaei, Mohammad and Rastgoo, Razieh and Athitsos, Vassilis},
  journal={Expert Systems with Applications},
  pages={119922},
  year={2023},
  publisher={Elsevier}
}
