OOM error #65

@cyxu2017

Description

What is the GPU memory requirement for running prediction? Is it system dependent? If so, is there a simple way to estimate the memory required?

I was running inference for a complex with N_asym = 6, N_token = 2372, N_atom = 18500, and N_msa = 4940 on a GPU with 24 GB of memory, and the job was killed by an OOM error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 5.37 GiB. GPU
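For a rough estimate (this is a back-of-the-envelope sketch, not the model's actual memory profile): in AlphaFold-style architectures, the pair representation and triangle/attention activations typically dominate, scaling as O(N_token²). The channel count and dtype below are assumptions for illustration, not values taken from this repository.

```python
def pair_memory_gib(n_token: int, c_pair: int = 128, bytes_per_el: int = 2) -> float:
    """Memory (GiB) of one pair tensor of shape (n_token, n_token, c_pair).

    c_pair=128 channels and bytes_per_el=2 (bf16/fp16) are assumed defaults.
    """
    return n_token ** 2 * c_pair * bytes_per_el / 2 ** 30

# N_token = 2372 from the failing run above:
print(f"{pair_memory_gib(2372):.2f} GiB per pair tensor")
```

A single pair tensor at this size is already over a gigabyte, and inference keeps many such activations (plus attention logits and workspace buffers) alive at once, so a 2372-token complex can plausibly exceed 24 GB. Actual usage depends on the implementation (chunking, activation checkpointing, precision), so treat this only as a lower-bound sanity check.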
