
PyTorch GPU memory allocation #34323

Open
riyadshairi979 opened this issue Mar 5, 2020 · 2 comments
Labels
module: cuda (Related to torch.cuda, and CUDA support in general)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

riyadshairi979 commented Mar 5, 2020

How can I prevent shared libraries from allocating memory on the GPU? Even before any shared library function is called, GPU memory usage increases significantly as soon as a PyTorch process starts. Is there any workaround?

    torch::Device cpuDevice(torch::kCPU);
    torch::Device gpuDevice(torch::kCUDA);

    // Allocate and add two tensors on the GPU.
    torch::Tensor t1 = torch::ones({5, 5}, gpuDevice);
    torch::Tensor t2 = torch::ones({5, 5}, gpuDevice);
    torch::Tensor t = t1 + t2;

    // Copy the result back to host memory.
    torch::Tensor tcpu = t.to(cpuDevice);

With this simple example code, nvidia-smi shows 781MB of memory in use!

| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0       467      C   ./example-gpu                                633MiB |
|    0      2283      G   /usr/lib/xorg/Xorg                           104MiB |
|    0      2320      G   /usr/bin/gnome-shell                          55MiB |
|    0      3184      G   /usr/lib/xorg/Xorg                           390MiB |
|    0      3321      G   /usr/bin/gnome-shell                         297MiB |
|    0      4239      G   ...quest-channel-token=8978750699062060003    41MiB |
|    0     30984      G   ...uest-channel-token=11524677140754815412    44MiB |
+-----------------------------------------------------------------------------+

cc @ngimel

ptrblck (Collaborator) commented Mar 5, 2020

The first CUDA operation creates the CUDA context on your device, and the context itself uses some memory.
I'm not sure I understand the part of the question about shared libraries, though.
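The same behavior can be observed from the Python API. A minimal sketch (assuming PyTorch is installed; the GPU-dependent part is guarded so it also runs on CPU-only machines): CPU tensors never initialize the CUDA context, while the first operation that touches the device does, and nvidia-smi then reports a fixed several-hundred-MiB cost for the process regardless of tensor sizes.

```python
import torch

# CPU tensors never touch the GPU: no CUDA context is created,
# and nvidia-smi shows no memory for this process yet.
t1 = torch.ones(5, 5)
t2 = torch.ones(5, 5)
t = t1 + t2
print(t.sum().item())  # 50.0

if torch.cuda.is_available():
    # The first CUDA operation initializes the context here.
    g = t.to("cuda")
    # Bytes tracked by PyTorch's caching allocator; this excludes the
    # context's own overhead, which only nvidia-smi accounts for.
    print(torch.cuda.memory_allocated())
```

As a side note, if a process should never touch the GPU, the standard CUDA environment variable `CUDA_VISIBLE_DEVICES=""` hides all devices from the CUDA runtime for that process, so no context is ever created.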

riyadshairi979 (Author)

I see increased GPU memory usage even before the first CUDA operation. I'm trying to see where it gets allocated.

@yf225 added the module: cuda and triaged labels on Mar 7, 2020
3 participants