trained custom dataset uses coco_eval.py but got ap = -1.0 #542
Comments
What's your training and eval command?
This is my training command: And this is my eval command: (I can use weights_smoke.pth to test and the results are not bad.) Is there any code I should change when I use coco_eval.py?
try
Thanks, I will try.
After setting lr=1e-3, did mAP go up?
Sorry, I have not tried the new lr yet, but I found out why my coco_eval results are all equal to -1. I will run more experiments later.
@fightingaaa why is the gt id always -1? |
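For context, pycocotools' COCOeval fills precision entries with -1 wherever no ground truth was matched, and the summarize step averages only the valid entries. The sketch below (an assumption based on the library's `_summarize` helper, not this repo's code) shows why every reported AP becomes -1 when the detections never match any ground-truth annotation, e.g. because of mismatched ids:

```python
import numpy as np

# Shape mirrors COCOeval.eval["precision"]:
# (IoU thresholds, recall thresholds, categories, areas, maxDets).
# -1 marks slots with no matched ground truth.
precision = np.full((10, 101, 1, 4, 3), -1.0)

# summarize averages only entries > -1; if none exist, it reports -1.
valid = precision[precision > -1]
ap = float(np.mean(valid)) if valid.size else -1.0
print(ap)  # -1.0: no detection ever matched a ground-truth box
```

So AP = -1.0 across the board usually means the evaluator found zero valid matches, not that the model scored zero.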
Thanks! I found the problem. Now the mAP is correct and higher, but the detection scores are lower than before.
@fightingaaa keep training until it overfits |
Hi, I ran into a problem when using coco_eval.py after training on my own dataset: all the AP values are equal to -1.0.
I am sure the path to the val set is correct. Training seems successful, and I can use efficientdet_test.py with the trained weights to detect. This is my training log.
Step: 59793. Epoch: 199/200. Iteration: 293/299. Cls loss: 0.02686. Reg loss: 0.00648. Total loss: 0.03334: 98%|█████████▊| 293/299 [03:50<00:04, 1.43it/s]
Step: 59794. Epoch: 199/200. Iteration: 294/299. Cls loss: 0.06801. Reg loss: 0.01800. Total loss: 0.08601: 98%|█████████▊| 294/299 [03:51<00:03, 1.43it/s]
Step: 59795. Epoch: 199/200. Iteration: 295/299. Cls loss: 0.03731. Reg loss: 0.01866. Total loss: 0.05597: 99%|█████████▊| 295/299 [03:51<00:02, 1.45it/s]
Step: 59796. Epoch: 199/200. Iteration: 296/299. Cls loss: 0.03701. Reg loss: 0.01164. Total loss: 0.04865: 99%|█████████▉| 296/299 [03:52<00:02, 1.47it/s]
Step: 59797. Epoch: 199/200. Iteration: 297/299. Cls loss: 0.03641. Reg loss: 0.01481. Total loss: 0.05122: 99%|█████████▉| 297/299 [03:52<00:01, 1.50it/s]
Step: 59798. Epoch: 199/200. Iteration: 298/299. Cls loss: 0.05714. Reg loss: 0.02491. Total loss: 0.08206: 100%|█████████▉| 298/299 [03:53<00:00, 1.47it/s]
Step: 59799. Epoch: 199/200. Iteration: 299/299. Cls loss: 0.02466. Reg loss: 0.01264. Total loss: 0.03729: 100%|██████████| 299/299 [03:54<00:00, 1.51it/s]
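Since the loss curve looks fine, a common cause of all-(-1) AP is that the category_id values in the val annotation JSON do not match the ids listed under "categories". A quick sanity check you could run on the annotation file (a hypothetical helper, not part of this repo; the sample dict only illustrates the COCO format):

```python
# Return the ids of annotations whose category_id is not declared
# in the COCO file's "categories" list; any hit will make COCOeval
# report -1 for every AP.
def check_category_ids(coco):
    cat_ids = {c["id"] for c in coco["categories"]}
    return [a["id"] for a in coco["annotations"]
            if a["category_id"] not in cat_ids]

# Minimal COCO-style dict for illustration; load your real
# instances_val.json with json.load() instead.
sample = {
    "categories": [{"id": 1, "name": "smoke"}],
    "annotations": [{"id": 10, "category_id": 1},
                    {"id": 11, "category_id": 2}],  # 2 is undeclared
}
print(check_category_ids(sample))  # [11]
```

If this returns a non-empty list for your val file, fixing the category ids (or the obj_list order in the project YAML) should bring the AP values back.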