
Training on GPUs with less than 24GB VRAM? from xty #33


Description

@1111307

Hi authors,

Congratulations on your paper. I am trying to re-train TransCS using the command provided:
python train.py --rate 0.1 --device 0

The README mentions: "please ensure 24G memory or more". I currently only have access to GPUs with 12GB or 16GB of VRAM, and I am encountering out-of-memory (OOM) errors.

Is there a configuration argument to reduce the batch size or modify the patch size to fit into smaller GPU memory? Or could you provide guidance on which parameters in config.py (or loader.py) I should adjust to lower the memory consumption?
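For context, one workaround I am considering is gradient accumulation: splitting each batch into smaller micro-batches and averaging their gradients, which lowers peak memory per step while keeping the effective batch size. A minimal sketch of the idea (the function names below are illustrative, not from the TransCS code; here the "model" is just y = w * x so the arithmetic is checkable by hand):

```python
# Illustrative sketch: averaging micro-batch gradients reproduces the
# full-batch gradient, so a smaller per-step batch can stand in for the
# original large one at lower peak memory. Names are hypothetical.

def grad_mse(w, xs, ys):
    """Mean-squared-error gradient dL/dw for the toy model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def accumulated_grad(w, xs, ys, micro_batch):
    """Average the gradients of equal-sized micro-batches."""
    grads = [
        grad_mse(w, xs[i:i + micro_batch], ys[i:i + micro_batch])
        for i in range(0, len(xs), micro_batch)
    ]
    return sum(grads) / len(grads)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]
w = 0.5

full = grad_mse(w, xs, ys)                        # one big batch of 4
accum = accumulated_grad(w, xs, ys, micro_batch=2)  # two micro-batches of 2
print(abs(full - accum) < 1e-9)  # → True: the two gradients agree
```

If TransCS exposes a batch-size setting in config.py, the same effect would come from halving (or quartering) it and stepping the optimizer only every 2 (or 4) iterations — but I would need your confirmation on which parameter controls this.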

Best regards.
