
Accuracy not consistent? #49

Closed
ethanhe42 opened this issue Oct 5, 2017 · 5 comments

Comments


ethanhe42 commented Oct 5, 2017

After training, I got 88.65% overall accuracy and 85.62% average class accuracy. Why is this not consistent with the 89.2% and 86.2% reported in the paper?


charlesq34 commented Oct 6, 2017

Hi Yihui,
There can be some fluctuation in training performance from run to run. Those are the numbers we got in our experiment; it may take a few trials to reach the same ones.

PS: to get the same evaluation results, it is recommended to use the evaluation script, which evaluates test shapes in all rotations rather than only a single default rotation per shape. The test set has only 2,468 shapes, so evaluating without rotations is very unstable.

@charlesq34

Closing due to no continuing conversation.


Tgaaly commented Dec 17, 2017

Hi @charlesq34, thanks for the code! Great work!
A couple of questions:

  • Are you saying that the variance of the performance is high and that you report the highest achieved accuracy in the paper (the 89.2%)?
  • How many rotations did you use to evaluate the method in the paper - to get the 89.2%?
  • Best performance is with Adam or SGD?
  • Is the best-performing model trained with exponential decay every 20 epochs for both the learning rate AND batch normalization momentum? It's confusing because the paper says the LR is reduced every 20 epochs, but in the code the default setting is 200,000 for both the BN momentum and LR decay steps. Which one achieves the 89.2%?

@charlesq34

Hi @Tgaaly,

It has been a while since I last checked the repo's issues; sorry for the delay. Firstly, thanks for your interest!

There is some variance in the accuracies, so evaluation is more stable if we evaluate on several rotated versions of the point clouds. The accuracy on the test set during training can fluctuate from around 88.6% to 89.1%, as I remember. I think I used evaluate.py with num_votes=12 to get the final accuracy number.

The best model is trained with Adam. Both BN momentum and the LR have decays; I used 20 epochs as the step size for both.

Hope it helps.
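The rotation-voting evaluation described above can be sketched as follows. This is a minimal NumPy sketch, not the repo's actual evaluate.py (which batches shapes through a TensorFlow session); `classify` here is a hypothetical stand-in for the trained network's per-class score function, and the rotation is assumed to be about the up (y) axis, as in the repo's data augmentation:

```python
import numpy as np

def rotate_point_cloud_y(points, angle):
    """Rotate an (N, 3) point cloud about the up (y) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c,   0.0, s],
                  [0.0, 1.0, 0.0],
                  [-s,  0.0, c]])
    return points @ R.T

def vote_predict(classify, points, num_votes=12):
    """Average class scores over `num_votes` evenly spaced rotations and
    return the winning class index. `classify` maps an (N, 3) cloud to a
    (num_classes,) score vector (hypothetical stand-in for the network)."""
    angles = [2.0 * np.pi * k / num_votes for k in range(num_votes)]
    scores = np.mean([classify(rotate_point_cloud_y(points, a)) for a in angles],
                     axis=0)
    return int(np.argmax(scores))
```

Averaging scores over 12 rotations smooths out the per-rotation variance the comment mentions, which matters on a test set of only 2,468 shapes.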


RyanCV commented Oct 21, 2018

@charlesq34 For train.py running the point_cls model, the default decay_step looks out of range, but you mentioned to @Tgaaly:

I used 20 epochs for the step size for both of the decays.

parser.add_argument('--decay_step', type=int, default=200000, help='Decay step for lr decay [default: 200000]')

Which one is correct? Thanks.
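The two numbers may in fact agree. A hedged back-of-the-envelope check, assuming (not verified here) that the decay counter in train.py advances by samples seen (batch index × batch size) rather than by optimizer steps, and that the ModelNet40 training split has 9,840 shapes:

```python
# Reconcile decay_step=200000 with "decay every 20 epochs".
# Assumptions (labeled, not taken from the thread itself):
#   - the decay step is counted in training *samples*, not iterations
#   - ModelNet40 training set size is 9,840 shapes
TRAIN_SHAPES = 9840      # assumed ModelNet40 train split size
DECAY_STEP = 200000      # default from train.py's --decay_step

epochs_per_decay = DECAY_STEP / TRAIN_SHAPES
print(round(epochs_per_decay, 1))  # ≈ 20.3, i.e. roughly one decay every 20 epochs
```

Under those assumptions, 200,000 samples corresponds to roughly 20 epochs, so the argparse default and the "20 epochs" answer would describe the same schedule in different units.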
