
Question about pre-training accuracy #16

Closed
bunnyveil opened this issue May 22, 2024 · 7 comments

Comments

@bunnyveil

I first ran the ModelNet40 train-from-scratch classification task in PointMamba on four 3080Ti GPUs, and then used your pretrain.pth to run the train-from-pretrained ModelNet40 classification. Everything followed the parameters and steps described in the paper, but the final classification accuracy is only 93.0713%, whereas the best accuracy reported in the paper is 93.6%. We ran it many times (pic 1 with voting; pics 2-4 without voting). I have also tested the other classification and segmentation tasks, and with the parameters from the paper the results are about 1-4 percentage points lower than those in the paper. Is this a known problem, or are the model parameters in the paper not updated in real time?
(screenshots 1-4: evaluation logs; run 1 with voting, runs 2-4 without voting)
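
For context on the "with voting" runs, below is a minimal sketch of the test-time voting scheme commonly used for ModelNet40 evaluation (averaging logits over several randomly augmented forward passes). The model interface, loader, and scaling augmentation here are illustrative assumptions, not the repository's exact implementation.

```python
import torch

@torch.no_grad()
def evaluate_with_voting(model, test_loader, num_votes=10, device="cuda"):
    """Average logits over several randomly augmented passes per batch.
    Illustrative sketch only; the actual PointMamba voting code may differ."""
    model.eval()
    correct, total = 0, 0
    for points, labels in test_loader:                       # points: (B, N, 3), labels: (B,)
        points, labels = points.to(device), labels.to(device)
        logits_sum = 0.0
        for _ in range(num_votes):
            # Random uniform scaling as a simple vote-time augmentation (assumption).
            scale = torch.empty(points.size(0), 1, 1, device=device).uniform_(0.8, 1.25)
            logits_sum = logits_sum + model(points * scale)   # assumed model(points) -> (B, num_classes)
        preds = logits_sum.argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total
```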

@LMD0311
Owner

LMD0311 commented May 23, 2024

Thank you for your interest in our work!

The code currently open-sourced in the repository aligns with the current version of the paper. Our models are all trained and tested on a single RTX 4090, and results may differ across hardware platforms. Please also refer to this issue and the explanation here.

Feel free to evaluate our open-sourced weights for ModelNet40.
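
As an aside on run-to-run variance, here is a minimal sketch of the usual PyTorch seeding and determinism settings; fixing these narrows the spread between runs on one machine, but it does not guarantee identical numbers across different GPUs.

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 0) -> None:
    """Fix the common sources of randomness for more repeatable runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for repeatability on the same hardware.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```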

@formerlya

May I ask whether the weights you provide can be tested directly? Testing them in PointMamba gives only 7%~8% accuracy, whereas the weight files provided for Point-MAE reach about 93.5% in Point-MAE.
(screenshot: test accuracy output)
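
An accuracy of 7%~8% is close to chance on 40 classes, which usually points to the checkpoint not actually being loaded into the model. Below is a minimal sketch for checking key overlap, assuming the .pth file stores a state dict, possibly nested under a key such as "base_model" or "model"; the repository's exact checkpoint layout may differ.

```python
import torch

def inspect_checkpoint(model, ckpt_path: str) -> None:
    """Report how many checkpoint tensors line up with the model's state_dict."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    # Unwrap a nested state dict if the file stores one under a common key (assumption).
    for key in ("model", "base_model", "state_dict"):
        if isinstance(ckpt, dict) and key in ckpt and isinstance(ckpt[key], dict):
            ckpt = ckpt[key]
            break
    model_keys = set(model.state_dict().keys())
    ckpt_keys = set(ckpt.keys())
    print("matched keys:      ", len(model_keys & ckpt_keys))
    print("missing from ckpt: ", sorted(model_keys - ckpt_keys)[:10])
    print("unexpected in ckpt:", sorted(ckpt_keys - model_keys)[:10])
```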

@formerlya

Also, when I train (screenshot: training log), the loss becomes NaN. The same data trains normally in Point-MAE. Can the hyperparameters in the code be used directly, or do they need targeted adjustment?
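
If the loss turns to NaN, a quick way to localize the problem is to check the loss each step and enable autograd anomaly detection. This is generic PyTorch debugging, not something taken from the PointMamba code; the clipping value below is an arbitrary illustrative choice.

```python
import torch

def training_step(model, batch, criterion, optimizer, device="cuda"):
    """One training step with basic NaN checks; generic PyTorch, illustrative only."""
    torch.autograd.set_detect_anomaly(True)  # slower, but reports the op that produced NaN/Inf
    points, labels = (t.to(device) for t in batch)
    logits = model(points)
    loss = criterion(logits, labels)
    if not torch.isfinite(loss):
        raise RuntimeError(f"Non-finite loss encountered: {loss.item()}")
    optimizer.zero_grad()
    loss.backward()
    # Gradient clipping often tames occasional exploding steps (a common remedy, not a repo default).
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)
    optimizer.step()
    return loss.item()
```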

@LMD0311
Owner

LMD0311 commented May 28, 2024

May I ask whether the weights you provide can be tested directly? Testing them in PointMamba gives only 7%~8% accuracy, whereas the weight files provided for Point-MAE reach about 93.5% in Point-MAE. (screenshot: test accuracy output)

I just re-cloned the repository, downloaded the open-sourced checkpoint, and achieved the exact same results as reported. The test command should be CUDA_VISIBLE_DEVICES=0 python main.py --test --config cfgs/finetune_modelnet.yaml --ckpts modelnet_scratch.pth.

@LMD0311
Owner

LMD0311 commented May 28, 2024

Also, when I train (screenshot: training log), the loss becomes NaN. The same data trains normally in Point-MAE. Can the hyperparameters in the code be used directly, or do they need targeted adjustment?

I'm sorry, we never encountered an issue with NaN.

@formerlya

Also, when I train (screenshot: training log), the loss becomes NaN. The same data trains normally in Point-MAE. Can the hyperparameters in the code be used directly, or do they need targeted adjustment?

I'm sorry, we never encountered an issue with NaN.

Thank you very much for your answer~

@LMD0311
Owner

LMD0311 commented Jun 13, 2024

I am closing this issue. Please feel free to reopen it if necessary.

@LMD0311 LMD0311 closed this as completed Jun 13, 2024