Did you do a fair comparison with other methods in Table 2? #8

Closed
abc-def-g opened this issue Mar 7, 2022 · 1 comment

abc-def-g commented Mar 7, 2022

OccNet and ConvOccNet are both trained on 13 categories of ShapeNet, while according to your code, your network is trained on each category separately.
For the ConvOccNet results shown in Table 2, why are the Chamfer Distances of ConvOccNet even worse than the 13-category results reported in the ConvOccNet paper? Moreover, ConvOccNet is trained on sparse and noisy point clouds (3000 points) as input, whereas your network is trained on clean point clouds, as shown in Figure 4-(a). How can the comparison be done like this? Is it a fair comparison?

tommaoer commented Apr 2, 2022

Sorry for the late reply.
Thanks for your interest in our paper, and many thanks for helping us improve our work!
After reading your comments, I summarize them as three questions:

  1. ConvOccNet is trained on 13 categories, while our training is per-category.
  • Following common practice and previous work, we use the pre-trained models to conduct the evaluations. Although there is no evidence that OccNet and ConvOccNet trained on 13 categories perform worse than models trained on each category separately, fine-tuning the pre-trained models rather than using them directly may be better for a fair comparison. We may revise the paper on arXiv accordingly.
  2. The Chamfer Distance of OccNet is worse than in the original paper.
  • Chamfer Distance depends on the scale of the objects. To compare different methods meaningfully, the meshes are normalized to the same scale, so directly comparing numbers reported in different papers is improper. In Table 2, all reported numbers are measured under the same testing settings; the sketch after this list illustrates the scale effect.
  3. During training, ConvOccNet uses noisy input while ours uses clean input.
  • For evaluation, all methods (including ours) use clean point clouds as input. Your suggestion is a good one: we will also fine-tune ConvOccNet with clean point clouds, or re-train it, to update the score for each method.
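To illustrate the scale point in question 2, here is a minimal sketch (not the evaluation code from this repository) of a symmetric Chamfer Distance in NumPy. The helper names are hypothetical; the sketch only shows why the same pair of shapes yields different CD values at different object scales, and why normalizing all meshes to a shared scale is needed before comparing methods.

```python
# Minimal sketch, not the repository's evaluation code.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3)."""
    # Pairwise squared distances, then nearest neighbour in each direction.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return float(np.sqrt(d2.min(axis=1)).mean() + np.sqrt(d2.min(axis=0)).mean())

def normalize_to_unit_cube(points: np.ndarray) -> np.ndarray:
    """Center the points and scale the longest bounding-box edge to 1."""
    centered = points - points.mean(axis=0, keepdims=True)
    scale = (points.max(axis=0) - points.min(axis=0)).max()
    return centered / scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(-1, 1, size=(1024, 3))              # "ground-truth" surface samples
    pred = gt + rng.normal(scale=0.01, size=gt.shape)    # slightly perturbed prediction

    # Same shapes at different object scales: CD grows linearly with scale,
    # so numbers reported under different normalizations are not comparable.
    print(chamfer_distance(pred, gt))                    # original scale
    print(chamfer_distance(pred * 10.0, gt * 10.0))      # 10x larger objects, ~10x larger CD
    print(chamfer_distance(normalize_to_unit_cube(pred), # shared normalization
                           normalize_to_unit_cube(gt)))
```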

Overall, your comments are very constructive for our work. We will retrain these baseline methods under the same settings as far as possible and update the results.

If you have any further questions or problems, please feel free to contact us via GitHub or e-mail.

tommaoer closed this as completed Apr 2, 2022