Hi authors,
Congratulations on your paper. I am trying to retrain TransCS using the command provided:
python train.py --rate 0.1 --device 0
The README mentions: "please ensure 24G memory or more". However, I currently only have access to GPUs with 12 GB or 16 GB of VRAM, and I am encountering out-of-memory (OOM) errors during training.
Is there a command-line argument to reduce the batch size, or to change the patch size, so that training fits into a smaller GPU memory budget? If not, could you point me to the specific parameters in config.py (or loader.py) that I should adjust to lower memory consumption?
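In case it helps, here is the generic PyTorch workaround I am experimenting with in the meantime: mixed precision plus gradient accumulation, which trades a larger effective batch for smaller per-step memory. This is only a sketch with a placeholder model, not TransCS-specific code, since I am not sure which fields in config.py actually control the batch size:

```python
# Generic memory-saving sketch (NOT TransCS-specific): run several small
# micro-batches under autocast and accumulate gradients before one
# optimizer step, so the effective batch size stays the same.
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
model = nn.Linear(64, 64)                      # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

accum_steps = 4                                # 4 micro-batches of 4
micro_batch = 4                                # -> effective batch of 16

optimizer.zero_grad()
for _ in range(accum_steps):
    x = torch.randn(micro_batch, 64)
    with torch.autocast(device_type="cuda" if use_cuda else "cpu",
                        enabled=use_cuda):
        # divide by accum_steps so accumulated grads match a full batch
        loss = model(x).pow(2).mean() / accum_steps
    scaler.scale(loss).backward()              # accumulate gradients
scaler.step(optimizer)                         # single optimizer step
scaler.update()
```

If the training loop in train.py can be adapted along these lines, that might be enough to fit under 12 GB, but I would still prefer an officially supported batch/patch size setting.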
Best regards.