
ChamferDistancePytorch functions require too much memory #3

Closed

marcusabate opened this issue May 17, 2021 · 1 comment

@marcusabate
When trying to run the trainer in completion mode, CUDA runs out of memory very quickly. I'm running this on an 8 GB GPU, but CUDA is asking for over 15 GB. This happens whenever distChamfer or distChamfer_raw is called.

Is there a recommended setting for running shape-inversion on smaller machines before moving to a larger compute cluster? It would be great if I could train remotely and then complete shapes locally, even if the full evaluation isn't done in the loop; I can always evaluate afterwards.

Thank you.
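(The memory blow-up described above typically comes from materializing the full N x M pairwise distance matrix between the two point sets at once. A common workaround, independent of this repository's distChamfer implementation, is to process one point set in chunks so peak memory is bounded by the chunk size. Below is a minimal NumPy sketch of that idea; the function names and the chunk size are illustrative, not part of ChamferDistancePytorch.)

```python
import numpy as np

def nn_sq_dists_chunked(a, b, chunk=128):
    """Squared distance from each point in a (N, 3) to its nearest
    neighbor in b (M, 3), computed in row chunks so the full N x M
    distance matrix never materializes at once."""
    mins = np.empty(len(a))
    b_sq = (b ** 2).sum(axis=1)  # (M,) precomputed squared norms of b
    for i in range(0, len(a), chunk):
        blk = a[i:i + chunk]  # (c, 3) current chunk of a
        # squared distances via |x|^2 - 2 x.y + |y|^2, shape (c, M)
        d = (blk ** 2).sum(axis=1, keepdims=True) - 2.0 * (blk @ b.T) + b_sq
        mins[i:i + chunk] = d.min(axis=1)
    return mins

def chamfer_chunked(a, b, chunk=128):
    """Symmetric Chamfer distance: mean nearest-neighbor squared
    distance in both directions, each computed chunk by chunk."""
    return (nn_sq_dists_chunked(a, b, chunk).mean()
            + nn_sq_dists_chunked(b, a, chunk).mean())
```

The same chunked loop translates directly to PyTorch tensors on the GPU, trading a small amount of speed for a peak memory footprint of roughly chunk x M instead of N x M.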

@Pikoyooo

Hi! I think I'm facing the same problem. May I know how you solved it?
