
The evaluation results on DTU evaluation set are different between paper and released checkpoints #21

YANG-SOBER opened this issue Dec 29, 2022 · 2 comments

@YANG-SOBER

Dear Rui Peng:

Thank you very much for your contribution and nice work.

I evaluated your released checkpoint "unimvsnet_dtu.ckpt" on the DTU evaluation set (I did not change any parameters).

The results are: 0.4173 for mean accuracy and 0.2966 for mean completeness.

However, in the paper, the two metrics are 0.352 and 0.278, respectively.

May I know whether this released checkpoint is the one used for the paper?

Thanks for your help.

Looking forward to your response.

@YANG-SOBER (Author)

After setting align_corners=True in F.grid_sample(), the results are 0.3685 (acc) and 0.2785 (comp). The remaining difference can be attributed to gipuma (fusibile), whose compilation depends on the compute capability of the particular CUDA version and GPU; please refer to https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/.
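For readers unfamiliar with why this flag matters: a minimal sketch (not the UniMVSNet code) of how align_corners changes the coordinate convention in F.grid_sample. With align_corners=True, grid coordinates -1 and 1 map to the centers of the corner pixels, so an identity grid reproduces the input exactly; with align_corners=False (the default since PyTorch 1.3), they map to the corner pixels' outer edges, so the same grid yields interpolated values.

```python
import torch
import torch.nn.functional as F

# A 1x1x4x4 feature map with distinct values.
src = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)

# Identity grid: sample every pixel at its own normalized coordinate.
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 4), torch.linspace(-1, 1, 4), indexing="ij"
)
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (1, 4, 4, 2), (x, y) order

aligned = F.grid_sample(src, grid, mode="bilinear", align_corners=True)
unaligned = F.grid_sample(src, grid, mode="bilinear", align_corners=False)

# align_corners=True: the identity grid reproduces the input exactly.
print(torch.allclose(aligned, src))    # True
# align_corners=False: corner samples fall half a pixel outside the
# feature map and are blended with zero padding, so values differ.
print(torch.allclose(unaligned, src))  # False
```

This is why a checkpoint trained under one convention can lose measurable accuracy when evaluated under the other: every warped feature is sampled at coordinates shifted by up to half a pixel.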

@prstrive (Owner)

The test results are indeed related to the environment; maybe you can try PyTorch 1.2.
