OccNet and ConvOccNet are both trained on all 13 categories of ShapeNet, whereas, according to your code, your network is trained on each category separately.
For the ConvOccNet results shown in Table 2, why are the Chamfer Distances even worse than the 13-category results reported in the ConvOccNet paper? Moreover, ConvOccNet is trained using sparse and noisy point clouds (3,000 points) as input, whereas your network is trained with clean point clouds, as shown in Figure 4(a). How can you compare them like this? Is it a fair comparison?
Sorry for the late reply.
Thanks for your interest in our paper, and thank you very much for helping us improve our work!
After reading your comments, I summarize three questions:
1. OccNet and ConvOccNet are trained on all 13 categories, while our training is per category.
Following common practice and previous work, we use the pre-trained models to conduct the evaluations. Although there is no evidence that OccNet and ConvOccNet trained on 13 categories perform worse than when trained on each category separately, fine-tuning the pre-trained models rather than using them directly may be better for a fair comparison. We may revise the paper on arXiv accordingly.
2. The CD of ConvOccNet is worse than reported in the original paper.
Chamfer distance depends on the scale of the objects. To compare different methods meaningfully, the meshes are normalized to the same scale, so directly comparing numbers reported in different papers is improper. All numbers in Table 2 are computed under the same testing settings.
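To make this concrete, here is a minimal sketch (not our actual evaluation code; the function names and the unit-cube normalization are only illustrative assumptions) of a brute-force symmetric Chamfer distance, showing why a shared normalization is needed before numbers from different methods can be compared:

```python
import numpy as np

def normalize_to_unit_cube(points):
    # Center the cloud and scale its longest bounding-box edge to length 1.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    return (points - (mins + maxs) / 2.0) / (maxs - mins).max()

def chamfer_distance(p, q):
    # Symmetric Chamfer distance: mean squared nearest-neighbor distance
    # from p to q plus from q to p (brute-force pairwise distances).
    d = ((p[:, None, :] - q[None, :, :]) ** 2).sum(axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Toy example: the same pair of shapes gives a different CD at a different
# scale, so both clouds are normalized to the same scale before evaluation.
p = np.random.rand(2048, 3).astype(np.float32)
q = p + 0.005 * np.random.randn(2048, 3).astype(np.float32)

print(chamfer_distance(p, q))                       # original scale
print(chamfer_distance(2 * p, 2 * q))               # same shapes, doubled scale -> ~4x larger CD
print(chamfer_distance(normalize_to_unit_cube(p),
                       normalize_to_unit_cube(q)))  # comparable after normalization
```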
3. During training, ConvOccNet uses noisy input while ours uses clean input.
For the evaluation, all methods (including ours) use clean point clouds as input. Your point is a good suggestion: we will also fine-tune ConvOccNet with clean point clouds, or re-train it, to update the score for each method.
Overall, your comments are very constructive for our work. We will retrain these baseline methods under settings as consistent as possible and update the results.
If you have any further questions or problems, please feel free to contact us via GitHub or e-mail.