Run process always killed #40

Closed

x0s opened this issue Oct 6, 2022 · 2 comments


x0s commented Oct 6, 2022

Hi,

Thanks for sharing your work. I am having trouble running your training script on the nerf_synthetic dataset as described in the README: the process seems to be killed because there is not enough memory, though I am not sure whether it is VRAM or RAM.
I get similar output with the evaluation and video-rendering commands.
Here is the output. Do you have any idea how to reduce memory consumption?
Thanks

Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/adam_upd_cuda/build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module adam_upd_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/render_utils_cuda/build.ninja...
Building extension module render_utils_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module render_utils_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/total_variation_cuda/build.ninja...
Building extension module total_variation_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module total_variation_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
No modifications detected for re-loaded extension module render_utils_cuda, skipping build step...
Loading extension module render_utils_cuda...
Using /home/x0s/.cache/torch_extensions/py310_cu102 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/x0s/.cache/torch_extensions/py310_cu102/ub360_utils_cuda/build.ninja...
Building extension module ub360_utils_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module ub360_utils_cuda...
Loaded blender (400, 800, 800, 4) torch.Size([160, 4, 4]) [800, 800, 1111.1110311937682] ./data/nerf_synthetic/lego
Killed
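
For reference, a rough estimate of the host-memory footprint of the image stack reported in the last log line, assuming it is held as a single float32 array (an assumption; the actual dtype depends on the data loader):

```python
import numpy as np

# Shape reported by the loader: (num_images, H, W, channels)
shape = (400, 800, 800, 4)

# Assumption: the whole stack lives in host RAM as one float32 array.
nbytes = np.prod(shape) * np.dtype(np.float32).itemsize
print(f"{nbytes / 1024**3:.1f} GiB")  # ~3.8 GiB for the images alone
```

That is before poses, sampled rays, and the voxel grids are allocated, so a machine with only a few GiB of free RAM could plausibly trigger the OOM killer at this point.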

x0s commented Oct 10, 2022

Solved by extending RAM, but it would be nice to know how to reduce the memory requirements of the scripts.
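
One generic way to trim that footprint (a sketch only, not this repository's actual loader) is to keep the image stack as uint8 on the host and convert just the sampled views to float32 per batch:

```python
import numpy as np
import torch

# Hypothetical stand-in for the loaded dataset (shape taken from the log above).
# Assumption: storing uint8 instead of float32 is not necessarily what the repo does today.
images_u8 = np.zeros((400, 800, 800, 4), dtype=np.uint8)  # ~1 GiB in RAM

def sample_views(indices):
    # Convert only the sampled views to float32 in [0, 1], one batch at a time.
    batch = torch.from_numpy(images_u8[indices]).float().div_(255.0)
    return batch.cuda(non_blocking=True) if torch.cuda.is_available() else batch

batch = sample_views([0, 1])
print(batch.shape, batch.dtype)  # torch.Size([2, 800, 800, 4]) torch.float32
```

For the shape above, the uint8 stack takes roughly 1 GiB instead of ~3.8 GiB; how easily this could be wired into the existing scripts depends on how the loader is structured.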

x0s closed this as completed Oct 10, 2022
saurabhmishra608 commented

> Solved by extending RAM, but it would be nice to know how to reduce the memory requirements of the scripts.

How much RAM is actually needed to run the scripts for inference and training?
