
Some Problems with the reproduced results #9

Closed
Gao-JT opened this issue Jul 2, 2021 · 4 comments

Comments

@Gao-JT

Gao-JT commented Jul 2, 2021

Dear authors,

Thanks for your excellent work! I am trying to reproduce the results for 3D object classification on ModelNet40 with the provided code, but the best result I can get with DGCNN as the backbone is only 93.07. Do you know what might be going wrong? Could this be caused by randomness in training? If so, could you provide the random seed used for training?

Looking forward to hearing from you. Thanks again for your excellent work!

@mutianxu
Collaborator

mutianxu commented Jul 3, 2021

Hi @Gao-JT ,

Thanks for your interest in our work.

Please note that classification on ModelNet40 is an experiment with large randomness due to the simplicity of the dataset (this is not caused by the seed, since we do set the seed). If you run the same code several times (not only our model, but also other models such as DGCNN, PointNet, etc.), you will get different results, and some SOTA methods may have larger variance than ours on ModelNet40.
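
For reference, here is a minimal sketch of the kind of seed fixing we mean, assuming a standard PyTorch training script (the exact call sites in our code may differ). Note that even with all of these set, some CUDA kernels remain non-deterministic, which is why run-to-run variance does not fully disappear.

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 1) -> None:
    """Fix the common sources of randomness in a PyTorch training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```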

Also, the voting strategy itself is random, and the results without this post-processing step (i.e., voting) better reflect the performance gained purely from the model design. Thus it is quite normal that you cannot reproduce the best result exactly.

So far, we can confirm that testing the pre-trained model linked in our README gives 93.6% accuracy, and 93.9% accuracy with voting if everything goes right.

Hopefully this is helpful to you.

Regards,
Mino

@swzaaaaaaa

Bro, have you figured out why? My results are similar to yours: with DGCNN as the backbone, the classification accuracy is around 93.1...

@mutianxu
Collaborator

mutianxu commented Jul 26, 2021

Hi there,

Here is an explanation of the classification results.

  1. About ModelNet40:

a. For classification on ModelNet40, if you train our model from scratch, the variance is about ±0.5%, so 93.1% is quite normal.
Some SOTA methods show even larger variance than ours on ModelNet40 in our own reproductions, and we follow them in reporting the highest result. For instance, you can find larger variance across reproductions by different people in one of the issues of DGCNN.
In our experiments, when we replace PAConv with the pure MLP backbones, we get much lower results than those reported in the papers of the selected backbones, but we still report the highest results listed in their papers.

b. Since this happens across different SOTA methods, it is worth emphasizing that the variance is mostly caused by the simplicity of the dataset (pure CAD models with a limited number of categories and samples, which makes overfitting very easy). If you run our code on the more complex datasets used in the part_seg or scene_seg tasks, the results will be stable.

c. For the classification task, I would recommend verifying your models on ScanObjectNN, a real-world classification dataset, where the results are more stable than on ModelNet40.

d. By the way, we can confirm that testing the pre-trained model linked in our README gives 93.6% accuracy.

  2. About voting:
    The voting strategy at test time may also have a variance of ±0.5% (we report results both w/ and w/o voting in our paper). Our voting code is similar to RSCNN's, whose released model achieves 92.4 w/o voting, while the result reported in their paper is 93.6 w/ voting. By eliminating this post-processing factor, the results without voting better reflect the performance gained purely from the model design and show the effectiveness of our PAConv.
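
    For intuition, here is a minimal sketch of this kind of RSCNN-style test-time voting: several randomly scaled forward passes are averaged before taking the argmax. The function and loader names are placeholders, not our actual evaluation code.

```python
import torch

@torch.no_grad()
def vote_evaluate(model, test_loader, num_votes: int = 10, device: str = "cuda"):
    """Average logits over several randomly scaled passes (test-time voting)."""
    model.eval()
    correct, total = 0, 0
    for points, labels in test_loader:  # points: (B, N, 3), labels: (B,)
        points, labels = points.to(device), labels.to(device)
        summed_logits = 0.0
        for _ in range(num_votes):
            # Random anisotropic scaling is the usual source of voting randomness.
            scale = torch.empty(1, 1, 3, device=device).uniform_(0.8, 1.2)
            summed_logits = summed_logits + model(points * scale)
        preds = summed_logits.argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```

    Because the scales are drawn at random, two voting runs on the same checkpoint can differ by a few tenths of a percent, which is exactly the variance mentioned above.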

  3. Possible adjustments:
    In your experiments, I recommend adjusting the batch size, the number of GPUs, the number of training epochs, the learning rate, etc., and running the training several times, which may give you a better result (a hypothetical sweep is sketched below).
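
    As a rough, hypothetical illustration (the script name and flags below are placeholders, not the actual arguments of our training code), a small sweep over such settings could look like:

```python
import itertools
import subprocess

# Hypothetical grid; adjust to whatever your GPUs and time budget allow.
batch_sizes = [16, 32]
learning_rates = [0.05, 0.1]
epoch_counts = [250, 350]

for bs, lr, ep in itertools.product(batch_sizes, learning_rates, epoch_counts):
    # "train.py" and its flags are placeholders for the real training entry point.
    subprocess.run(
        ["python", "train.py",
         "--batch_size", str(bs),
         "--lr", str(lr),
         "--epochs", str(ep)],
        check=True,
    )
```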

Hope this is helpful to you guys!

Thanks,
Mino

@swzaaaaaaa

OK, thank you.
