When trying to run `trainer` in completion mode, CUDA runs out of memory very quickly. I'm running this on an 8GB GPU, but CUDA is asking for over 15GB. This happens whenever `distChamfer` and `distChamfer_raw` are called.
Is there a recommended setting for running shape-inversion on smaller machines before moving to a larger computer cluster? It would be great if I could train remotely and then complete shapes locally, even if the full evaluation isn't done in the loop, since I can always evaluate afterwards.
Thank you.