
amount of GPU memory required #10

Closed
pfps opened this issue Aug 4, 2021 · 1 comment

Comments

pfps commented Aug 4, 2021

How much GPU memory is required to run the decompiler? When I run on a GPU with 11 GB of memory I get out-of-memory errors like:

File "/data/pfps/nbref/baseline_model/modules/encoder_decoder_layers.py", line 108, in forward
    energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale.cuda()
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 10.92 GiB total capacity; 10.15 GiB already allocated; 10.69 MiB free; 10.32 GiB reserved in total by PyTorch)
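For context, the line in the traceback computes scaled dot-product attention scores: the energy tensor has shape [batch, heads, len, len], so its memory use grows quadratically with input length. A minimal sketch of that computation, with illustrative sizes (not nbref's actual dimensions):

import torch

# Illustrative sizes only; nbref's real batch/head/length values differ.
batch, heads, seq_len, head_dim = 8, 8, 1024, 64
Q = torch.randn(batch, heads, seq_len, head_dim, device="cuda")
K = torch.randn(batch, heads, seq_len, head_dim, device="cuda")
scale = torch.sqrt(torch.tensor(float(head_dim), device="cuda"))

# energy is [batch, heads, seq_len, seq_len]; doubling seq_len quadruples
# this tensor's memory, which is why long inputs trigger the OOM here.
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / scale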
ChengFu0118 commented

Hi pfps,

I was using 4/8 16GB GPUs (V100) to test the code. If you have already set the batch size to a small value, you will have to reduce the maximum input length to fit the model on your GPU. An example of forcing a maximum input length is given here.
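As a minimal sketch of that idea (truncating sequences before batching; MAX_LEN and truncate_batch are hypothetical names, not functions from the nbref codebase):

MAX_LEN = 512  # assumed cap; lower it further if OOM persists

def truncate_batch(sequences, max_len=MAX_LEN):
    # Dropping tokens past max_len bounds every attention matrix at
    # [max_len, max_len], capping peak GPU memory per layer.
    return [seq[:max_len] for seq in sequences]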
