Analysis of Hand Segmentation in the Wild

Abstract

A large number of works in egocentric vision have concentrated on action and object recognition. Detection and segmentation of hands in first-person videos, however, has been less explored. For many applications in this domain, it is necessary to accurately segment not only the hands of the camera wearer but also the hands of others with whom he or she is interacting. Here, we take an in-depth look at the hand segmentation problem. In the quest for robust hand segmentation methods, we evaluated the performance of state-of-the-art semantic segmentation methods, both off the shelf and fine-tuned, on existing datasets. We fine-tune RefineNet, a leading semantic segmentation method, for hand segmentation and find that it performs much better than the best contenders. Existing hand segmentation datasets are collected in laboratory settings. To overcome this limitation, we contribute by collecting two new datasets: a) EgoYouTubeHands, containing egocentric videos with hands in the wild, and b) HandOverFace, for analyzing the performance of our models in the presence of similar-appearance occlusions. We further explore whether conditional random fields can help refine the generated hand segmentations. To demonstrate the benefit of accurate hand maps, we train a CNN for hand-based activity recognition and achieve higher accuracy when the CNN is trained using hand maps produced by the fine-tuned RefineNet. Finally, we annotate a subset of the EgoHands dataset for fine-grained action recognition and show that an accuracy of 58.6% can be achieved by looking at a single hand pose, which is much better than chance level (12.5%).

Code

We have uploaded the additional files needed to train, test, and evaluate our models. Code for multiscale evaluation is also provided. See the refinenet_files folder.

To test the models:

  • Download the RefineNet code from its GitHub repository.
  • Copy the files provided in the refinenet_files folder to the refinenet/main folder.
  • Place the RefineNet-based hand segmentation model (see the Models section) in the refinenet/model_trained folder.
  • For instance, to test the model trained on the EgoHands dataset, copy the refinenet_res101_egohands.mat file into the refinenet/model_trained folder, set the path to your test-images folder in demo_refinenet_test_example_egohands.m, and run the script (see the sketch after this list).
  • The demo code is the same as the original RefineNet demo files, except for minor changes.
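
Putting the steps above together, a minimal MATLAB setup sketch might look as follows. The folder layout is an assumption based on the instructions (RefineNet cloned into ./refinenet, with this repository's refinenet_files/ folder and the downloaded .mat model sitting next to it):

```matlab
% Setup sketch -- paths and file locations are assumptions, not part of the repo.
copyfile('refinenet_files', 'refinenet/main');           % copy the provided demo/eval scripts
if ~exist('refinenet/model_trained', 'dir')
    mkdir('refinenet/model_trained');                    % folder for the trained models
end
copyfile('refinenet_res101_egohands.mat', ...
         'refinenet/model_trained');                     % EgoHands model from the Models section
cd('refinenet/main');
% After editing the test-image path inside demo_refinenet_test_example_egohands.m:
demo_refinenet_test_example_egohands                     % runs hand segmentation on the test images
```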

Models

You can download our RefineNet-based hand segmentation models using the links given below:

Datasets

We used four hand segmentation datasets in our work; two of them (the EgoYouTubeHands and HandOverFace datasets) were collected as part of our contribution:

Warning!

Thanks to Rafael Redondo Tejedor, who pointed out some minor mistakes in the dataset:

  • For the HandOverFace dataset, the images 216.jpg and 221.jpg are actually GIFs in the original-size folder.
  • There were minor annotation errors in the XML files for the following images: 10.jpg and 225.jpg; these were pointed out and corrected by Rafael Redondo Tejedor.
  • The current link to the dataset has updated XML files correcting the above-mentioned annotation errors.

NEW!

02/23/2021: Hand masks for GTEA (cropped at the wrist) have been uploaded under the data directory.

Links to the videos used for the EYTH dataset are given below. Each video is 3-6 minutes long. We cleaned the dataset before annotation and discarded unnecessary frames (e.g., frames containing text, or frames where hands were out of view for a long time).

vid4 vid6 vid9

NEW!

The test set for the HandOverFace dataset is uploaded here.

Example images from the EgoYouTubeHands dataset (EYTH):

Example images from the HandOverFace dataset (HOF):

  • EgoHands+ dataset: To study fine-level action recognition, we provide additional annotations for a subset of the EgoHands dataset. You can find more details here and download the dataset from this download link.

Results

Qualitative Results

Hand segmentation results for all datasets:
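
For reference, a common way to score a predicted hand mask against its ground-truth annotation is intersection-over-union (IoU). The sketch below uses hypothetical file names and assumes single-channel binary mask images; it is not part of the provided evaluation code:

```matlab
% IoU sketch -- file names are hypothetical; masks assumed single-channel binary images.
gt   = imread('frame_0001_gt.png')   > 0;    % ground-truth hand mask
pred = imread('frame_0001_pred.png') > 0;    % predicted hand mask
iou  = nnz(gt & pred) / nnz(gt | pred);      % intersection over union
fprintf('Hand IoU: %.3f\n', iou);
```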

CVPR Poster


License

We provide only the annotations for the videos in EYTH and the images in HOF. The copyright to the videos belongs to YouTube, and the images are collected from the internet. The license for our work is in the license.txt file.

Acknowledgements

We would like to thank undergraduate students Cristopher Matos and Jose-Valentin Sera-Josef, and MS student Shiven Goyal, for helping us with data annotation.

Citation

If this work and/or the datasets are useful for your research, please cite our paper.

Questions?

Please contact aishaurooj@gmail.com.
