In the middle of the training process, at iteration 2100, it shows this error #5
Comments
Is this from the VGG-16 backbone? And did you change the batch size?
Yes, it throws a CUDA OUT OF MEMORY error, so I changed the batch size to 1.
The backbone is R-50-FPN-RETINANET.
Since the cross-image graph-based message propagation (within a batch) is necessary, the batch size should be set to at least 2. We tested batch sizes 2 and 4. Did you change the learning rate for bs=1?
I didn't change the learning rate, but it still throws a CUDA out of memory error with a batch size of 2.
We used a 2080 Ti (12 GB) for bs=2 and a V100 (36 GB) for bs=4, and never tried bs=1. It is common practice to halve the learning rate if you halve the batch size, so for now you can try halving the learning rate. We will further test bs=1 if you still face this problem, but we still don't recommend training with only bs=1.
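For reference, the linear-scaling adjustment being suggested would look roughly like the YAML sketch below. BASE_LR and the 0.0025 value for bs=2 are confirmed later in this thread; IMS_PER_BATCH is an assumed key name in the usual maskrcnn-benchmark style, not quoted from the repo.

```yaml
# Hedged sketch of the suggested change, not an exact excerpt from the repo's configs.
SOLVER:
  IMS_PER_BATCH: 2    # reference batch size tested by the authors (assumed key name)
  BASE_LR: 0.0025     # reference learning rate for bs=2 (quoted later in the thread)
  # For bs=1, halve the learning rate accordingly:
  # IMS_PER_BATCH: 1
  # BASE_LR: 0.00125
```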
At the beginning it starts well, but at some point during the iterations it shows that error. I will try reducing the learning rate.
'CUDA out of memory' still appears, even with a learning rate of 0.0005.
It seems that your GPU memory is too small. Try to further reduce the number of sampled nodes by changing this in the YAML config file (the number of sampled nodes can increase during training): NUM_NODES_PER_LVL_SR: 50. Reduce the node number as much as possible until the CUDA out of memory error no longer appears, although this may have some negative impact on performance.
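As a rough illustration, the change might look like this in the YAML config. Only the NUM_NODES_PER_LVL_SR key and its value of 50 are quoted above; the nesting under MODEL.MIDDLE_HEAD.GM mirrors the MATCHING_CFG key mentioned later in the thread and is an assumption.

```yaml
# Hedged sketch: reduce the number of sampled source nodes per level to cut memory.
MODEL:
  MIDDLE_HEAD:
    GM:
      NUM_NODES_PER_LVL_SR: 30   # value quoted above is 50; lower it until OOM disappears
```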
Actually, an 8 GB GPU is a little small for detection tasks.
Okay. And isn't there a checkpoint? It starts from scratch every time I restart it, even though it had completed many iterations before.
We only start to save checkpoints automatically once the validation results are larger than SOLVER.INITIAL_AP50, to save disk space. You can change SOLVER.INITIAL_AP50 to 0 to save more checkpoints.
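The checkpointing behaviour being described amounts to a threshold gate; a minimal sketch follows, with illustrative names rather than the repo's actual code.

```python
# Minimal sketch of AP50-gated checkpointing; names are illustrative, not the repo's code.
def maybe_save_checkpoint(checkpointer, ap50, iteration, cfg):
    # Checkpoints are only written once validation AP50 exceeds SOLVER.INITIAL_AP50,
    # so setting SOLVER.INITIAL_AP50 to 0 in the YAML config keeps every evaluated model.
    if ap50 > cfg.SOLVER.INITIAL_AP50:
        checkpointer.save("model_{:07d}".format(iteration))
```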
Let me try applying your suggestions; this issue will stay open until the process finishes.
Thank you. I will re-open this if an issue is encountered.
Hi, I have reproduced your issue, and it should be addressed in the latest commit. Since your batch size is very small (bs=1), there is an extreme case in which there are only two nodes in the source domain and no nodes in the target domain. SIGMA then splits the source nodes into two parts to train the matching branch, leading to target nodes of the wrong size, [256] instead of [num_node, 256]. We fixed this bug by adding a few lines that directly skip the middle head if there are not enough source nodes. Add these lines. Then you can try keeping the original learning rate to train faster; otherwise it will take too long to train the model with only bs=1.
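The fix described above is essentially an early-exit guard in graph_matching_head.py. The sketch below only illustrates the idea, under assumed names and a guessed threshold; it is not the exact patch from the commit.

```python
from typing import Optional

import torch


def should_skip_middle_head(
    nodes_src: Optional[torch.Tensor],
    nodes_tgt: Optional[torch.Tensor],
    min_src_nodes: int = 3,
) -> bool:
    """Return True when the graph-matching middle head should be bypassed.

    Illustrative guard, not the repo's exact patch: with bs=1 a batch can yield
    only two source nodes and no target nodes, and splitting those two source
    nodes to train the matching branch produces a [256] tensor instead of the
    expected [num_node, 256], so such degenerate batches are skipped instead.
    """
    no_tgt = nodes_tgt is None or nodes_tgt.numel() == 0
    too_few_src = nodes_src is None or nodes_src.size(0) < min_src_nodes
    return no_tgt and too_few_src
```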
We have updated the README about small batch-size training for your convenience. The ResNet-50 backbone always gives better results than VGG-16.
Hi, that's okay, since the framework will automatically load the latest checkpoint. You can ignore the INFO message, since it comes from EPM, which isn't used in our project. You can continue training the model and set the warm-up iterations to 0. It seems to work properly now; if you face the previous issue again, you only need to add the lines mentioned above. I recommend changing the learning rate back to 0.0025 to train faster, as I find your model converges too slowly with only bs=1. As noted in the updated README, you need to train for double the iterations if you halve the batch size. Usually, for bs=2 with ResNet-50 (0.0025 lr), it can achieve 40+ mAP in only 10000 iterations.
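Putting those suggestions together, a bs=1 schedule might look roughly like the YAML below. BASE_LR and the 0.0025 value are confirmed in this thread; WARMUP_ITERS and MAX_ITER are assumed key names in the usual maskrcnn-benchmark style.

```yaml
# Hedged sketch of a bs=1 schedule based on the advice above, not an exact config excerpt.
SOLVER:
  BASE_LR: 0.0025     # keep the bs=2 reference learning rate so convergence isn't too slow
  WARMUP_ITERS: 0     # skip warm-up when resuming from a checkpoint (assumed key name)
  MAX_ITER: 20000     # roughly double the ~10000 iterations quoted for bs=2 (assumed key name)
```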
Noted with thanks. So I think I don't have to re-download the repo: the update is in the file 'graph_matching_head.py', along with changing the learning rate to 0.0025 and the batch size to 2. So I can just replace graph_matching_head.py, right?
Yes, you only need to replace graph_matching_head.py and change BASE_LR in the YAML config file.
Dear sir, the 'CUDA out of memory' problem still persists after ten thousand iterations, even though I applied the recommendations provided, so I reverted it to the original. Is there any other recommendation, please?
Hi, maybe you can disable the one-to-one (o2o) matching by setting MODEL.MIDDLE_HEAD.GM.MATCHING_CFG to 'none', which will save a lot of CUDA memory. Please try this setting first, thanks! Besides, we have added some solutions for limited GPU memory in the latest README. Kindly have a try.
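Written out as YAML, that config change would look like the following; the full key path MODEL.MIDDLE_HEAD.GM.MATCHING_CFG is quoted in the comment above, and only the nesting shown here is an assumption about how it is laid out in the file.

```yaml
# Disable one-to-one (o2o) matching to save GPU memory, as suggested above.
MODEL:
  MIDDLE_HEAD:
    GM:
      MATCHING_CFG: 'none'
```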
OK, thanks, I will try it.