Accuracy not consistent? #49
Comments
Hi Yihui, PS: to reproduce the reported evaluation results, it's recommended to use the evaluation script, which evaluates each test shape in multiple rotations instead of only a single default rotation per shape. The test set has only 2,468 shapes, so evaluating without rotations will be very unstable.
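The rotation-voting evaluation described above can be sketched as follows. This is a minimal illustration, not the repo's actual `evaluate.py`: `predict_fn` is a hypothetical stand-in for the trained classifier, and the up axis is assumed to be y.

```python
import numpy as np

def rotate_point_cloud_y(points, angle):
    """Rotate an (N, 3) point cloud about the up (y) axis by `angle` radians."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return points @ rot.T

def vote_predict(predict_fn, points, num_votes=12):
    """Average per-class scores over `num_votes` evenly spaced rotations.

    `predict_fn` is a placeholder for the trained model: it maps an
    (N, 3) point cloud to a vector of per-class scores.
    """
    scores = None
    for k in range(num_votes):
        angle = 2.0 * np.pi * k / num_votes
        s = predict_fn(rotate_point_cloud_y(points, angle))
        scores = s if scores is None else scores + s
    return int(np.argmax(scores))
```

Averaging scores over rotations reduces the variance that a single arbitrary orientation introduces, which is why the voted accuracy is more stable on a small test set.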
Closing due to no continuing conversation.
Hi @charlesq34, thanks for the code! Great work!
Hi @Tgaaly, it has been a while since I checked the repo's issues; sorry for the delay. Firstly, thanks for your interest! There is some variance in the accuracies, so it's more stable to evaluate on several rotated versions of the point clouds. The accuracy on the test set during training can fluctuate from around 88.6 to 89.1, as I remember. I think I used evaluate.py with num_votes=12 to get the final accuracy number. The best model is trained with Adam. Both BN and LR have decays; I used 20 epochs as the step size for both decays. Hope it helps.
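The decay scheme mentioned in the reply can be sketched as a staircase exponential decay applied to both the learning rate and the BN momentum parameter, with a fixed step size (roughly every 20 epochs). The initial value and decay rate below are hypothetical placeholders for illustration, not the repo's actual hyperparameters.

```python
def staircase_decay(initial, decay_rate, global_step, decay_step):
    """Multiply `initial` by `decay_rate` once every `decay_step` steps
    (a piecewise-constant "staircase" schedule)."""
    return initial * (decay_rate ** (global_step // decay_step))

# Example with a hypothetical base LR of 0.001 decaying by 0.7 every
# 1000 steps: the value stays constant within each 1000-step window.
schedule = [staircase_decay(0.001, 0.7, step, decay_step=1000)
            for step in (0, 999, 1000, 2000)]
```

With a step size expressed in optimizer steps, "20 epochs" translates to `decay_step = 20 * steps_per_epoch`, which is a common source of confusion when reading the training script.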
@charlesq34 In train.py for the point_cls model, I found that decay_step is out of range, but you mentioned a 20-epoch step size to @Tgaaly. Which one is correct? Thanks.
After training, I got 88.65% overall accuracy and 85.62% average class accuracy. Why is this not consistent with the 89.2% and 86.2% reported in the paper?