
Best results in ModelNet40 #12

Closed
zeal-up opened this issue Oct 12, 2018 · 6 comments


zeal-up commented Oct 12, 2018

Thanks for your work! PyTorch is more elegant to me.
What is your best classification result when training on ModelNet40 with the default hyper-parameters? And what is the best accuracy once the hyper-parameters are tuned appropriately?
I'm training the model with your code, and I would appreciate it if you could post your best results.

erikwijmans (Owner) commented

This repo matches the performance from the paper. If I recall correctly, the only hyper-parameter you need to change for that is the number of points (to 10k).
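
For what it's worth, a minimal sketch of what that change looks like at the data-loading level; the `ModelNet40Cls` name and its `num_points` argument are assumed from this repo's data package, so verify them against the source:

    # Sketch only: the loader name and `num_points` argument are assumptions
    # based on this repo's data package -- check pointnet2/data for the real API.
    from pointnet2.data import ModelNet40Cls

    train_set = ModelNet40Cls(num_points=10000, train=True)  # 10k points, per the paper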

zeal-up (Author) commented Oct 16, 2018

@erikwijmans Thanks for your reply! My training results come close to the paper, but there is still a 0.x% accuracy gap. I also noticed that, in your implementation, the architecture seems to be pruned:

        self.SA_modules.append(
            PointnetSAModuleMSG(
                npoint=512,
                radii=[0.1, 0.2, 0.4],
                nsamples=[32, 64, 128],
                # In the paper, each per-scale sub-PointNet seems to be a
                # three-layer MLP, not the single layer used here.
                mlps=[[input_channels, 64], [input_channels, 128],
                      [input_channels, 128]],
                use_xyz=use_xyz
            )
        )
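
For comparison, a sketch of the paper-style module with three-layer per-scale MLPs; the widths below follow the reference TF implementation (pointnet2_cls_msg) as best I recall, and leading each spec with input_channels simply mirrors the snippet above, so treat the exact numbers as assumptions:

        self.SA_modules.append(
            PointnetSAModuleMSG(
                npoint=512,
                radii=[0.1, 0.2, 0.4],
                nsamples=[32, 64, 128],
                # Three-layer MLP per scale; widths assumed from the reference
                # TF implementation, not taken from this repo.
                mlps=[[input_channels, 32, 32, 64],
                      [input_channels, 64, 64, 128],
                      [input_channels, 64, 96, 128]],
                use_xyz=use_xyz
            )
        )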

I want to know whether you did this on purpose.
Thanks!

erikwijmans (Owner) commented

I have played around with the architectures a fair amount. It makes sense to change them back to the ones given in Charles' repo; I will make that change.

mingminzhen commented

@erikwijmans How do I evaluate the model on ModelNet40? Could you give some tips for reproducing the experimental results in the paper?

erikwijmans (Owner) commented

The default parameters should do that; train/train_cls.py will train an MSG model on ModelNet40.
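
A minimal example of such a run; the flags below are assumptions rather than the script's confirmed arguments, so check its help output:

    # Hypothetical invocation -- flag names are assumptions; run
    # `python train/train_cls.py --help` to see the actual options.
    python train/train_cls.py -num_points 10000 -batch_size 16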


LiuNull commented Apr 22, 2019

> @erikwijmans Thanks for your reply! My training results come close to the paper, but there is still a 0.x% accuracy gap. I also noticed that, in your implementation, the architecture seems to be pruned. [...] I want to know whether you did this on purpose.

Hi, sorry to bother you. Did you match the paper's accuracy with the newest architecture? I ran the code and only got 0.9023 with the default hyper-parameters.
