
cannot reproduce results on Manifold40 #24

Closed
lykius opened this issue Jan 12, 2022 · 4 comments

lykius commented Jan 12, 2022

Hi, I'm trying to reproduce your results on Manifold40.

I downloaded the dataset with the command sh scripts/manifold40/get_data.sh, which downloads the file Manifold40-MAPS-96-3.zip. I noticed in your instructions that it is possible to download a different version of Manifold40 "before remeshing". Which is the correct version to use for evaluation?

I also noticed that the version in Manifold40-MAPS-96-3.zip contains approximately 10 times as many shapes as the original ModelNet40. Why is that?

I ran the command sh scripts/manifold40/get_pretrained.sh to obtain the checkpoint for the model and then ran sh scripts/manifold40/test.sh to perform the evaluation. The evaluation runs fine, but I get a very low accuracy. Do you have any idea what I might have missed?

Thanks in advance,
Luca.

lzhengning (Owner) commented

Hi Luca @lykius,

  1. Use scripts/manifold40/get_data.sh to download the processed dataset; that is the correct version for evaluation.
  2. We generate multiple remeshed versions of the same shape as data augmentation, which is why the archive contains more meshes than the original ModelNet40. More details are provided in the sections about remeshing and data augmentation; a related discussion is in Dataset #11.
  3. In the previously released code, the classification labels were determined by filesystem order, which leads to inconsistent labeling across operating systems. I noticed this a few weeks ago and changed the labels to be ordered by class name (see the sketch after this list). Today I updated the code, scripts, and pretrained weights. Please update your local repository and download the newest weights before running the test script.
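
To illustrate point 3, here is a minimal sketch (assuming a ModelNet40-style layout of one directory per class; the root path and function name are hypothetical, not the repository's actual loader) of why sorting the class names makes the label mapping deterministic:

```python
import os

def class_to_label(root):
    # os.listdir() returns entries in a filesystem-dependent order, so
    # deriving labels from it directly can assign different integers to
    # the same class on different machines. Sorting the class names first
    # makes the name -> label mapping identical on every OS.
    classes = sorted(
        d for d in os.listdir(root)
        if os.path.isdir(os.path.join(root, d))
    )
    return {name: idx for idx, name in enumerate(classes)}

# e.g. class_to_label('data/Manifold40-MAPS-96-3') maps 'airplane' to 0
# on Linux, macOS, and Windows alike.
```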

Feel free to reply if you have any more questions.

lykius commented Jan 13, 2022

Hi @lzhengning,

Thanks a lot for your answer.
I pulled the repo and re-downloaded the weights; now I get 90.9% accuracy without voting and 91.5% with voting.
Does that sound right to you? The paper reports 91.2% and 91.5%, respectively.

lzhengning (Owner) commented

Yes, 90.9% (without voting) and 91.5% (with voting) are the expected performance of the provided checkpoint.

A network does not always achieve its highest accuracy both with and without voting: 91.2% and 91.5% are accuracies from two separate checkpoints in our experiments. Since 90.9% is quite close to the best, only the checkpoint with the 91.5% voted accuracy is released, for brevity.
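
For readers unfamiliar with the voting protocol: the test set contains several remeshed copies of each shape, and the voted accuracy aggregates the per-copy predictions. A hedged sketch of one common aggregation, averaging class probabilities (the function and variable names are assumptions, not the repository's API):

```python
import numpy as np

def voted_prediction(probs_per_copy):
    # probs_per_copy: one class-probability vector (e.g. softmax output)
    # per remeshed copy of a single shape. The voted prediction averages
    # the probabilities across copies before taking the argmax.
    return int(np.stack(probs_per_copy).mean(axis=0).argmax())
```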

lykius commented Jan 13, 2022

OK, thanks! That answers my questions, so I'm closing the issue.

lykius closed this as completed Jan 13, 2022