It doesn't reproduce your reported accuracy #43
After fine-tuning, the log should be like this: (this is my experimental results from seed01 1-shot on Pascal VOC split 1) {"train_lr": 1.5000000000000005e-06, "train_class_error": 0.0, "train_grad_norm": 58.47532653808594, "train_loss": 1.4763914942741394, "train_loss_bbox": 0.07749657705426216, "train_loss_bbox_0": 0.14321566373109818, "train_loss_bbox_1": 0.09320082515478134, "train_loss_bbox_2": 0.0857505314052105, "train_loss_bbox_3": 0.08199029043316841, "train_loss_bbox_4": 0.07725301757454872, "train_loss_category_codes_cls": 0.11686665192246437, "train_loss_ce": 0.008141814265400171, "train_loss_ce_0": 0.10580113902688026, "train_loss_ce_1": 0.047131314873695374, "train_loss_ce_2": 0.017384656239300966, "train_loss_ce_3": 0.01357174920849502, "train_loss_ce_4": 0.009504480753093958, "train_loss_giou": 0.08753593638539314, "train_loss_giou_0": 0.1434987187385559, "train_loss_giou_1": 0.10159203037619591, "train_loss_giou_2": 0.0932667925953865, "train_loss_giou_3": 0.08680127188563347, "train_loss_giou_4": 0.08638809621334076, "train_cardinality_error_unscaled": 287.76251220703125, "train_cardinality_error_0_unscaled": 287.9250030517578, "train_cardinality_error_1_unscaled": 289.0375061035156, "train_cardinality_error_2_unscaled": 289.54376220703125, "train_cardinality_error_3_unscaled": 289.48126220703125, "train_cardinality_error_4_unscaled": 289.1625061035156, "train_class_error_unscaled": 0.0, "train_loss_bbox_unscaled": 0.015499315224587917, "train_loss_bbox_0_unscaled": 0.02864313218742609, "train_loss_bbox_1_unscaled": 0.018640165217220783, "train_loss_bbox_2_unscaled": 0.017150106839835644, "train_loss_bbox_3_unscaled": 0.016398058272898197, "train_loss_bbox_4_unscaled": 0.01545060332864523, "train_loss_category_codes_cls_unscaled": 0.02337333094328642, "train_loss_ce_unscaled": 0.004070907132700086, "train_loss_ce_0_unscaled": 0.05290056951344013, "train_loss_ce_1_unscaled": 0.023565657436847687, "train_loss_ce_2_unscaled": 0.008692328119650483, "train_loss_ce_3_unscaled": 0.00678587460424751, "train_loss_ce_4_unscaled": 0.004752240376546979, "train_loss_giou_unscaled": 0.04376796819269657, "train_loss_giou_0_unscaled": 0.07174935936927795, "train_loss_giou_1_unscaled": 0.050796015188097954, "train_loss_giou_2_unscaled": 0.04663339629769325, "train_loss_giou_3_unscaled": 0.043400635942816734, "train_loss_giou_4_unscaled": 0.04319404810667038, "test_class_error": 5.472572386649347, "test_loss": 11.130939615926435, "test_loss_bbox": 0.5066791164298211, "test_loss_bbox_0": 0.5212143290427423, "test_loss_bbox_1": 0.5106292650584252, "test_loss_bbox_2": 0.5027764009852563, "test_loss_bbox_3": 0.49650043556767126, "test_loss_bbox_4": 0.4928392595821811, "test_loss_category_codes_cls": 0.06833156943321228, "test_loss_ce": 0.799778366617618, "test_loss_ce_0": 0.7964376487078205, "test_loss_ce_1": 0.7633872688297303, "test_loss_ce_2": 0.7538318927249601, "test_loss_ce_3": 0.7609124298537931, "test_loss_ce_4": 0.7619222026678824, "test_loss_giou": 0.5621161045566682, "test_loss_giou_0": 0.5845255267235541, "test_loss_giou_1": 0.5759262894430468, "test_loss_giou_2": 0.5661270769373064, "test_loss_giou_3": 0.5565041990049424, "test_loss_giou_4": 0.5505002444790256, "test_cardinality_error_unscaled": 286.87172379032256, "test_cardinality_error_0_unscaled": 293.8604334677419, "test_cardinality_error_1_unscaled": 293.77363911290325, "test_cardinality_error_2_unscaled": 293.6239415322581, "test_cardinality_error_3_unscaled": 292.1960181451613, "test_cardinality_error_4_unscaled": 
290.13714717741937, "test_class_error_unscaled": 5.472572386649347, "test_loss_bbox_unscaled": 0.10133582312733896, "test_loss_bbox_0_unscaled": 0.10424286586142355, "test_loss_bbox_1_unscaled": 0.10212585292756557, "test_loss_bbox_2_unscaled": 0.10055528025954, "test_loss_bbox_3_unscaled": 0.0993000871952503, "test_loss_bbox_4_unscaled": 0.09856785183712359, "test_loss_category_codes_cls_unscaled": 0.013666314072906971, "test_loss_ce_unscaled": 0.399889183308809, "test_loss_ce_0_unscaled": 0.39821882435391026, "test_loss_ce_1_unscaled": 0.38169363441486515, "test_loss_ce_2_unscaled": 0.37691594636248005, "test_loss_ce_3_unscaled": 0.3804562149268966, "test_loss_ce_4_unscaled": 0.3809611013339412, "test_loss_giou_unscaled": 0.2810580522783341, "test_loss_giou_0_unscaled": 0.29226276336177703, "test_loss_giou_1_unscaled": 0.2879631447215234, "test_loss_giou_2_unscaled": 0.2830635384686532, "test_loss_giou_3_unscaled": 0.2782520995024712, "test_loss_giou_4_unscaled": 0.2752501222395128, "test_coco_eval_bbox": [0.2237539923834384, 0.36212298616476496, 0.23457736703757642, 0.02539442618386005, 0.13298507772985532, 0.28761370973175865, 0.264984217870063, 0.4056139382126699, 0.4253965281576116, 0.10106334841628958, 0.23122574383771002, 0.5370109415978369], "epoch": 699, "n_parameters": 51664028, "evaltype": "novel"}

Your loss after fine-tuning does not seem right. Please check your settings. If you still encounter issues, I would be glad to help. However, please note that I am currently looking for a job, so I might not be able to check and respond very quickly.
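For reference, test_coco_eval_bbox follows the standard COCO evaluation order: the first entry is AP@[0.50:0.95] and the second is AP@0.50, so the run above reaches 22.4 / 36.2 on the novel classes. A quick way to pull these two numbers out of your own fine-tuning log, assuming the per-epoch JSON lines are appended to exps/voc1/log.txt (the path is an assumption; substitute your own EXP_DIR):

# Read the last logged epoch and print the two headline COCO metrics.
# The log path is an assumption; adjust it to your own output directory.
tail -n 1 exps/voc1/log.txt | python -c '
import json, sys
stats = json.loads(sys.stdin.read())["test_coco_eval_bbox"]
print("AP  @[0.50:0.95]:", stats[0])
print("AP50@ 0.50      :", stats[1])
'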
Thank you very much for your prompt reply. I followed the commands after line 28 of the run_experiments_voc1_50epoch.sh you provided and changed nothing else. If you know the cause of the problem, I would be grateful if you could let me know!
EXP_DIR=exps/voc1
fewshot_seed=01
if [ $num_shot -eq 1 ]
python -u main.py
These commands seem correct. Did you use 8 GPUs for fine-tuning? DETR is known for its convergence issues, so large batch sizes are needed. You should run the commands like this:
GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 ./fsfinetune.sh
(fsfinetune.sh contains the above commands)
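For anyone following along, here is a minimal sketch of what fsfinetune.sh could look like, built only from the fragments quoted in this thread (EXP_DIR, fewshot_seed, num_shot, and the python -u main.py entry point); the actual fine-tuning flags should be copied from line 28 onward of run_experiments_voc1_50epoch.sh and are not repeated here.

#!/usr/bin/env bash
# fsfinetune.sh -- hypothetical wrapper around the fine-tuning command.
# The real arguments live in run_experiments_voc1_50epoch.sh (line 28 onward).
set -e

EXP_DIR=exps/voc1
fewshot_seed=01
num_shot=01

if [ $num_shot -eq 1 ]; then
    # "$@" forwards any extra arguments that ./tools/run_dist_launch.sh passes in.
    python -u main.py "$@"   # plus the 1-shot fine-tuning flags from the original script
fi

Launching it as GPUS_PER_NODE=8 ./tools/run_dist_launch.sh 8 ./fsfinetune.sh then runs this script once per process across the 8 GPUs.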
Unfortunately, my compute resources are limited to a single 3090 GPU, and batch_size was left at the default value of 2 from your code. Is single-GPU training the reason my results are currently about 5 points lower on average? If I want to get closer to the accuracy reported in your paper, do you have any suggestions?
Yes, I think that is the reason for the accuracy drop. In my experience with DETR-style models, single-GPU fine-tuning also runs, but to reach the accuracy reported in the paper it is best to fine-tune with a large batch size, keeping the same batch size as base training (which uses 8 GPUs). Single-GPU fine-tuning can still produce good results, but they tend to be lower than with 8-GPU fine-tuning.
Your loss is significantly larger than mine, which itself indicates that the fine-tuning hasn't fully converged. To improve performance with a single GPU, you may try the following methods:

Thank you :)
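As a rough illustration of the batch-size advice above (not an official recipe from this repository), a single-GPU launch can at least partially close the gap by raising --batch_size toward the base-training effective batch size. This assumes main.py exposes the --batch_size argument mentioned in this thread (default 2) and that the 3090's memory allows a larger value.

# Hypothetical single-GPU fine-tuning launch; every flag other than --batch_size
# must still come from run_experiments_voc1_50epoch.sh.
EXP_DIR=exps/voc1
fewshot_seed=01
num_shot=01

# If base training used the default batch_size of 2 per GPU on 8 GPUs, the
# effective batch size was 16; pick the largest value that fits in 24 GB
# (the value below is illustrative, not a verified setting).
python -u main.py --batch_size 8   # plus the fine-tuning flags from the original script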
Sorry to bother you here. |
I'd appreciate it if you could tell me why this happens; I hope you can reply as soon as possible. My settings and results are as follows:
--dataset_file voc_base1
fewshot_seed=01
num_shot=01
gpu 3090
torch 1.7.1
gcc 7.5.0
Averaged stats: class_error: 16.67 loss: 13.4678 (13.6217) loss_ce: 1.0068 (1.0683) loss_bbox: 0.7345 (0.6811) loss_giou: 0.5437 (0.6277) loss_category_codes_cls: 0.0205 (0.0205) loss_ce_0: 1.0740 (1.0708) loss_bbox_0: 0.7470 (0.6255) loss_giou_0: 0.5046 (0.5959) loss_ce_1: 1.1113 (1.0220) loss_bbox_1: 0.6579 (0.6045) loss_giou_1: 0.4740 (0.5838) loss_ce_2: 1.0879 (1.0172) loss_bbox_2: 0.6594 (0.6091) loss_giou_2: 0.5281 (0.5844) loss_ce_3: 1.0211 (1.0271) loss_bbox_3: 0.6782 (0.6267) loss_giou_3: 0.4912 (0.5870) loss_ce_4: 1.0229 (1.0482) loss_bbox_4: 0.7843 (0.6298) loss_giou_4: 0.5092 (0.5920) loss_ce_unscaled: 0.5034 (0.5341) class_error_unscaled: 0.0000 (9.1140) loss_bbox_unscaled: 0.1469 (0.1362) loss_giou_unscaled: 0.2718 (0.3138) cardinality_error_unscaled: 279.0000 (271.8738) loss_category_codes_cls_unscaled: 0.0041 (0.0041) loss_ce_0_unscaled: 0.5370 (0.5354) loss_bbox_0_unscaled: 0.1494 (0.1251) loss_giou_0_unscaled: 0.2523 (0.2979) cardinality_error_0_unscaled: 294.7500 (293.1469) loss_ce_1_unscaled: 0.5556 (0.5110) loss_bbox_1_unscaled: 0.1316 (0.1209) loss_giou_1_unscaled: 0.2370 (0.2919) cardinality_error_1_unscaled: 297.8750 (295.3336) loss_ce_2_unscaled: 0.5439 (0.5086) loss_bbox_2_unscaled: 0.1319 (0.1218) loss_giou_2_unscaled: 0.2641 (0.2922) cardinality_error_2_unscaled: 295.2500 (292.1838) loss_ce_3_unscaled: 0.5105 (0.5136) loss_bbox_3_unscaled: 0.1356 (0.1253) loss_giou_3_unscaled: 0.2456 (0.2935) cardinality_error_3_unscaled: 295.3750 (292.3367) loss_ce_4_unscaled: 0.5114 (0.5241) loss_bbox_4_unscaled: 0.1569 (0.1260) loss_giou_4_unscaled: 0.2546 (0.2960) cardinality_error_4_unscaled: 290.6250 (288.4062)
Accumulating evaluation results...
DONE (t=2.62s).
IoU metric: bbox
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.177
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.304
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.177