GPU memory #13
Hi, 2 × P40 are probably not enough for semantic segmentation; for example, 2 × P40 cannot even run DeepLab v3+ or DANet with ResNet-101. The following are the minimum resources needed to run SETR (bs=8) on Cityscapes; you can see it is on par with most existing segmentation models:

SETR-Naive-DeiT, 8 × 11.5G
SETR-PUP-DeiT, 8 × 12.8G
SETR-MLA-DeiT, 8 × 12.1G
I tried to train SETR-Naive-DeiT and SETR-MLA-DeiT on 4 × TITAN RTX 24G GPUs. I set samples_per_gpu=1 in config/SETR/.py, so my batch size is 4, but I cannot start training because of OOM. You said SETR-Naive-DeiT needs 8 × 11.5G, SETR-PUP-DeiT 8 × 12.8G, and SETR-MLA-DeiT 8 × 12.1G, but that is different from my experimental result. What can I do to reduce the memory used?
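For reference, a minimal sketch of how the per-GPU batch size is usually set in an mmsegmentation-style config like the ones in this repo (the exact file contents here are an assumption, not copied from the repo):

```python
# Sketch of the data section of an mmsegmentation-style config.
# Field names are the standard mmseg ones; values are illustrative.
data = dict(
    samples_per_gpu=1,  # images per GPU; effective batch = this * num GPUs
    workers_per_gpu=2,  # dataloader worker processes per GPU
)
# With samples_per_gpu=1 on 4 GPUs, the total batch size is 4,
# half of the bs=8 that the reported memory figures assume.
```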
We don't have such a problem on our side.
When I tried to train your model SETR-PUP with ./tools/dist_train.sh configs/SETR/SETR_PUP_768x768_40k_cityscapes_bs_8.py 8, I got the error below, even though my machine has 8 × 3090 with 24G. Can you help me solve it? Thank you.

RuntimeError: CUDA out of memory. Tried to allocate 44.00 MiB (GPU 6; 23.70 GiB total capacity; 21.91 GiB already allocated; 36.81 MiB free; 22.28 GiB reserved in total by PyTorch)
Try the following three variants with DeiT: SETR-Naive-DeiT, SETR-PUP-DeiT, or SETR-MLA-DeiT.
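If memory is still tight, one option worth trying is mixed-precision training, which roughly halves activation memory. This is a hedged sketch assuming the codebase follows the mmsegmentation convention; it is not verified against this repo:

```python
# Enable fp16 training in an mmsegmentation-style config.
# The fixed loss scale guards against fp16 gradient underflow.
fp16 = dict(loss_scale=512.)
```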
I have a similar issue when training on my own dataset; it is always CUDA out of memory. I am using 6 GPUs with 12GB each (4 × GTX 1080 Ti, 2 × RTX 2080 Ti). Is there any way to train without getting this error?
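For a training loop outside this repo's scripts, a generic PyTorch automatic-mixed-precision sketch can cut memory on 12GB cards. Here `model`, `optimizer`, `criterion`, and `loader` are placeholders for your own setup, not names from this repo:

```python
import torch

# model, optimizer, criterion, and loader are assumed to already exist.
scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

for images, labels in loader:
    images, labels = images.cuda(), labels.cuda()
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():   # forward pass runs in fp16 where safe
        loss = criterion(model(images), labels)
    scaler.scale(loss).backward()     # backward on the scaled loss
    scaler.step(optimizer)            # unscales gradients, then steps the optimizer
    scaler.update()                   # adjusts the loss-scale factor
```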
Hello, thanks for your code.
How much GPU memory is needed to train SETR?
I have 2 P40 GPUs but I can't start training because of OOM.
Looking forward to your reply.