PyTorch GPU memory allocation #34323
Labels
module: cuda — Related to torch.cuda, and CUDA support in general
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
How can I prevent shared libraries from allocating memory on the GPU? I see that even before any shared library function is called, GPU memory usage increases significantly with PyTorch as soon as the process starts. Any workaround?
With this simple example code, nvidia-smi shows usage of 781 MB of memory!
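A minimal sketch of the kind of code that triggers this (the reporter's exact snippet is assumed to do something similar): merely touching the GPU from PyTorch initializes the CUDA context, and nvidia-smi reports that context overhead, typically several hundred MB depending on the CUDA/driver version and GPU, even though the tensor itself is only a few bytes.

```python
import torch

# A 4-byte tensor on the GPU; nvidia-smi will still report hundreds of MB,
# because creating the CUDA context (driver, kernels, cuDNN handles) dominates.
x = torch.zeros(1, device="cuda")

# PyTorch's caching allocator only tracks tensor allocations, not the
# context overhead that nvidia-smi includes in its per-process figure.
print(torch.cuda.memory_allocated())  # bytes held by tensors (tiny here)
print(torch.cuda.memory_reserved())   # bytes reserved by the caching allocator
```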
cc @ngimel