Some Problems with the reproduced results #9
Hi @Gao-JT,

Thanks for your interest in our work. Please note that classification on ModelNet40 is an experiment with large randomness due to the simplicity of the dataset (this is not caused by the seed, since we have in fact set the seed). If you run the same code several times (not just our model, but others such as DGCNN, PointNet, etc.), you will get different results, and some SOTA methods have even larger variance on ModelNet40 than ours. Also, the voting strategy is itself random; the results without this post-processing step (i.e., voting) better reflect the performance gained purely from the model design. It is therefore quite normal that you cannot reproduce the best result exactly. What we can guarantee is that testing the pre-trained model released in our README link yields 93.6% accuracy, and 93.9% accuracy after voting if everything goes right.

Hopefully this is helpful to you.

Regards,
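For readers wondering what "setting the seed" typically involves in a PyTorch point-cloud pipeline, here is a minimal sketch. This is an illustrative, hypothetical snippet rather than the exact code from this repository; note that even with all of these seeds fixed, some CUDA kernels remain non-deterministic, which is one reason run-to-run variance on ModelNet40 persists.

```python
# Hypothetical sketch: fixing the usual sources of randomness in PyTorch.
# Even with every seed set, some GPU ops are non-deterministic, so results
# on ModelNet40 can still vary slightly from run to run.
import random

import numpy as np
import torch


def set_seed(seed: int = 42) -> None:
    random.seed(seed)                  # Python's built-in RNG
    np.random.seed(seed)               # NumPy (data sampling / augmentation)
    torch.manual_seed(seed)            # CPU tensors
    torch.cuda.manual_seed_all(seed)   # all GPU devices
    # Force deterministic cuDNN kernels (slower, but more reproducible).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```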
[Translated from Chinese] Hey, did you figure out why? I got results similar to yours: with DGCNN as the backbone, the classification accuracy is around 93.1...
Hi there,

Here is an explanation of the classification results.

a. For classification on ModelNet40, if you train our model from scratch, the variance is roughly ±0.5%, so 93.1% is quite normal.
b. Since this is normal across different SOTA methods, it is worth emphasizing that the variance is mostly caused by the simplicity of the dataset (pure CAD models with a limited number of categories and samples, which makes overfitting very easy). If you reproduce our code on the more complex datasets in the part_seg or scene_seg tasks, the results will be stable.
c. For the classification task, we recommend also verifying your models on ScanObjectNN, a real-world classification dataset, where results are more stable than on ModelNet40.
d. By the way, we can guarantee that testing the pre-trained model released in our README link yields 93.6% accuracy.

Hope this is helpful to you guys!

Thanks,
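As a side note on the voting strategy mentioned above: test-time voting commonly averages model predictions over several randomly perturbed copies of each test point cloud, which is why it adds its own randomness on top of training variance. A minimal sketch of one common variant (random isotropic rescaling) is shown below; this is a hypothetical illustration, not this repository's exact implementation.

```python
# Hypothetical sketch of test-time voting for point-cloud classification:
# average class logits over several randomly rescaled copies of the input.
import torch


@torch.no_grad()
def vote_predict(model: torch.nn.Module,
                 points: torch.Tensor,                 # shape (B, N, 3)
                 num_votes: int = 10,
                 scale_range: tuple = (0.8, 1.25)) -> torch.Tensor:
    model.eval()
    logits_sum = 0.0
    for _ in range(num_votes):
        # A fresh random scale per vote: this is where voting's randomness enters.
        scale = torch.empty(1).uniform_(*scale_range).item()
        logits_sum = logits_sum + model(points * scale)
    return logits_sum / num_votes                      # averaged logits, (B, C)
```

Because each vote draws a fresh random scale, the voted accuracy itself fluctuates slightly across evaluations, which matches the point above that un-voted results better isolate the contribution of the model design.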
[Translated from Chinese] OK, thanks.
Dear authors,

Thanks for your excellent work! I am trying to reproduce the results for 3D object classification on ModelNet40 with the provided code, but the best result I can reproduce using DGCNN as the backbone is only 93.07%. Do you know what might be wrong? Is the randomness of training causing this? If so, could you provide a random seed for training?

Looking forward to hearing from you. Thanks again for your excellent work!