
about hand size and data augmentation #5

Closed
yaoxinreaps opened this issue Dec 10, 2018 · 5 comments
@yaoxinreaps

Hi,
thanks for sharing your project. I have two questions:

  1. Does the cube size of the hand influence the result? For example, if it is changed from 300 to 250, how will the predicted result change? Have you done any experiments for comparison?
  2. Which is the better way to do data augmentation: augmenting every sample, or only some of them (for example, with probability 0.5)?

Thank you!
@xinghaochen
Owner

Hi,

  1. We didn't conduct extensive experiments on the cube size of the cropped hand. Changing it from 300 to 250 should, I would guess, give similar performance.
  2. We used online data augmentation during training, that is, applying random transformations to each training sample before feeding it to the network.
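To make the "online" part concrete, here is a minimal sketch of that idea: instead of augmenting the dataset once up front, each sample is re-transformed with fresh random parameters every time it is drawn. The function names, the batch generator, and the horizontal flip used as the example transformation are all illustrative, not the repo's actual augmentation set (which would typically also include random rotation, scaling, and translation applied the same way).

```python
import numpy as np

def augment(depth, joints, rng):
    """Randomly transform one cropped sample.

    `depth` is an (H, W) depth patch and `joints` a (J, 2) array of 2D
    joint coordinates in patch pixels. A horizontal flip is the only
    transformation shown here, purely for illustration.
    """
    h, w = depth.shape
    if rng.random() < 0.5:
        depth = np.fliplr(depth)
        joints = joints.copy()
        joints[:, 0] = (w - 1) - joints[:, 0]  # mirror x coordinates to match
    return depth, joints

def online_batches(samples, batch_size, rng):
    """Yield training batches forever, re-augmenting every sample with
    fresh random parameters on every pass over the data."""
    while True:
        order = rng.permutation(len(samples))
        for i in range(0, len(order) - batch_size + 1, batch_size):
            yield [augment(*samples[j], rng) for j in order[i:i + batch_size]]
```

Because every sample is transformed anew on each epoch, the network effectively never sees the exact same input twice, which answers the "every sample vs. half probability" question: augment every sample, but with random parameters each time.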

@yaoxinreaps
Author

OK, thank you~

@yaoxinreaps
Author

Sorry to bother you again.
Would you please tell me the training parameters for the hands17 dataset? More specifically, what are the batch size, initial learning rate, learning-rate policy, stepsize or stepvalue, and maximum number of training epochs?
By the way, have you trained the REN model on the hands17 dataset for comparison? I guess it can achieve comparable performance because the training set is large enough. What's your opinion?
Thanks again.

@yaoxinreaps yaoxinreaps reopened this Dec 14, 2018
@xinghaochen
Owner

Hi, here are the training parameters on hands17 dataset:

base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 200000
display: 100
max_iter: 800000
momentum: 0.9
weight_decay: 0.0005

The batch size is 128, the same as in all other experiments in our paper.
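For readers less familiar with Caffe solver settings: the `"step"` policy decays the learning rate by a factor of `gamma` every `stepsize` iterations, so with the settings above the rate drops tenfold at 200k, 400k, and 600k iterations. A one-function sketch of that schedule:

```python
def step_lr(base_lr, gamma, stepsize, it):
    # Caffe "step" policy: lr = base_lr * gamma ** floor(it / stepsize)
    return base_lr * gamma ** (it // stepsize)

# With base_lr=0.001, gamma=0.1, stepsize=200000, max_iter=800000 the
# schedule runs 1e-3 -> 1e-4 -> 1e-5 -> 1e-6 over four 200k-iteration spans.
```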

We did try to train the REN model on the hands17 dataset. I can't remember now exactly what the performance was, but as far as I can recall, Pose-REN consistently performed better than REN (by a few millimeters, maybe).

In case you are working on the HANDS17 challenge, note that the pre-trained models on the hands17 dataset released in this repo are not exactly the ones used for our final entry in the HANDS17 Challenge. As stated in the challenge document, we used REN as the Init-Net, and there were also several additional improvements/tricks. The pre-trained models on the hands17 dataset in this repo are not intended to reproduce our challenge performance, but to provide a more stable and accurate hand pose estimator for various applications.

@yaoxinreaps
Author

OK, I will try training again and hope to get performance as good as yours. Thank you!
