
How to adjust SuperPoint number to avoid CUDA memory full? #96

glennliu opened this issue Feb 22, 2024 · 1 comment

@glennliu

Hi

I'm running GeoTransformer on fused indoor point clouds (from ScanNet) using demo.py. My GPU is an Nvidia RTX 3090 with 24 GB of memory, but the program frequently fails with "not enough CUDA memory". I already downsample a point cloud with a 2.5 cm voxel size if it has more than 30,000 points.
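For reference, my pre-processing looks roughly like this (a minimal sketch using Open3D; the 30,000-point threshold and the voxel size are my own choices, not something from the repo):

```python
import numpy as np
import open3d as o3d

def maybe_downsample(points: np.ndarray,
                     max_points: int = 30000,
                     voxel_size: float = 0.025) -> np.ndarray:
    """Voxel-downsample a cloud only when it exceeds max_points."""
    if points.shape[0] <= max_points:
        return points
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd = pcd.voxel_down_sample(voxel_size=voxel_size)  # 2.5 cm grid
    return np.asarray(pcd.points)
```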

I understand the key is to reduce the number of superpoints, so I followed #16 and adjusted these parameters:

```
backbone.num_stages = 5
neighbor_limits = [38, 36, 36, 38, 38]
```

But it now stops in RPEMultiHeadAttention with:

```
Exception has occurred: RuntimeError
einsum(): subscript n has size 390 for operand 1 which does not broadcast with previously seen size 2418
```

Are there any suggestions on how to adjust these parameters properly?
Thanks

@glennliu (Author)

I believe the error comes from the KPConvFPN backbone: it is hard-coded to 4 stages and does not adapt to the changed parameters, so the 5-stage neighbor_limits no longer line up with the 4-stage feature pyramid, which triggers the einsum broadcast error.

I could modify the backbone to use more stages, but that would require re-training the network.
So I found an easier way to test the pre-trained weights: I set backbone.voxel_size = 0.05 so that fewer superpoints are generated. It now runs without the out-of-memory issue.
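In config terms the whole workaround is a one-line change (a sketch only; the key names follow this thread and GeoTransformer's EasyDict-style config, and may differ slightly from the actual config file in the repo):

```python
from easydict import EasyDict as edict

# Hypothetical config fragment mirroring the change described above.
cfg = edict()
cfg.backbone = edict()
cfg.backbone.num_stages = 4     # unchanged: the pre-trained KPConvFPN has 4 stages
cfg.backbone.voxel_size = 0.05  # was 0.025 in my setup; coarser voxels -> fewer superpoints
```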

But it may be less accurate than the original setting.
