What is the GPU memory requirement for running prediction? Is it system dependent? If so, is there a simple way to estimate the memory required?

I was running inference for a complex with N_asym 6, N_token 2372, N_atom 18500, and N_msa 4940 on a GPU with 24 GB of memory, and the job was killed by an OOM:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.37 GiB. GPU