How much GPU memory is required to run the decompiler? When I run on a GPU with 11GB of memory I get out-of-memory errors like:
File "/data/pfps/nbref/baseline_model/modules/encoder_decoder_layers.py", line 108, in forward
energy = torch.matmul(Q, K.permute(0, 1, 3, 2)) / self.scale.cuda()
RuntimeError: CUDA out of memory. Tried to allocate 60.00 MiB (GPU 0; 10.92 GiB total capacity; 10.15 GiB already allocated; 10.69 MiB free; 10.32 GiB reserved in total by PyTorch)
I was using 4/8 16GB GPUs (V100) to test the code. If you have already set the batch size to a small value, you also have to reduce the maximum input length to fit the model on your GPU. An example of forcing a maximum input length is given here.
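As a rough sketch of what capping the input length looks like (this is not the repository's actual code; the function and the 512-token limit below are illustrative assumptions), you can truncate or drop over-long token sequences before batching, since the attention energy tensor grows quadratically with sequence length:

```python
# Minimal sketch, assuming tokenized inputs are lists of token ids.
# MAX_LEN and the helper name are illustrative, not from the nbref repo.

MAX_LEN = 512  # reduce further (e.g. 256) if the GPU still runs out of memory


def cap_input_length(examples, max_len=MAX_LEN, skip_long=False):
    """Truncate each token sequence to max_len, or drop over-long examples."""
    kept = []
    for tokens in examples:
        if len(tokens) > max_len:
            if skip_long:
                continue               # drop the example entirely
            tokens = tokens[:max_len]  # otherwise truncate it
        kept.append(tokens)
    return kept


# Usage: apply before building batches, together with a small batch size.
# batches = make_batches(cap_input_length(tokenized_inputs), batch_size=4)
```

Combined with a small batch size, this keeps the per-batch attention matrices small enough to fit on a single ~11GB card.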