
pretrain_model performance bad #12

Closed
tyjiang1997 opened this issue Jul 30, 2021 · 6 comments

Comments

@tyjiang1997

Hello Jihan, thanks for your work. When I use the pretrained model to evaluate on KITTI, I get bad results, as follows:
[2021-07-30 10:39:38,642 eval_utils.py 91 INFO] *************** Performance of EPOCH 80 *****************
[2021-07-30 10:39:38,643 eval_utils.py 93 INFO] Generate label finished(sec_per_example: 0.0255 second).
[2021-07-30 10:39:38,643 eval_utils.py 109 INFO] recall_roi_0.3: 0.977685
[2021-07-30 10:39:38,643 eval_utils.py 110 INFO] recall_rcnn_0.3: 0.977685
[2021-07-30 10:39:38,643 eval_utils.py 109 INFO] recall_roi_0.5: 0.798957
[2021-07-30 10:39:38,643 eval_utils.py 110 INFO] recall_rcnn_0.5: 0.798957
[2021-07-30 10:39:38,643 eval_utils.py 109 INFO] recall_roi_0.7: 0.070421
[2021-07-30 10:39:38,643 eval_utils.py 110 INFO] recall_rcnn_0.7: 0.070421
[2021-07-30 10:39:38,647 eval_utils.py 118 INFO] Average predicted number of objects(3769 samples): 11.060
[2021-07-30 10:39:57,848 eval_utils.py 129 INFO] Car AP@0.70, 0.70, 0.70:
bbox AP:17.6470, 23.7820, 33.4059
bev AP:11.3823, 14.4799, 21.0351
3d AP:0.6494, 0.5051, 9.0909
aos AP:17.53, 23.37, 32.83
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:18.4800, 23.7817, 28.3788
bev AP:11.2770, 13.3130, 14.3258
3d AP:0.0528, 0.1354, 0.1768
aos AP:18.36, 23.37, 27.70
Car AP@0.70, 0.50, 0.50:
bbox AP:17.6470, 23.7820, 33.4059
bev AP:45.1927, 54.1058, 63.2144
3d AP:32.6627, 41.2533, 48.3045
aos AP:17.53, 23.37, 32.83
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:18.4800, 23.7817, 28.3788
bev AP:47.9240, 55.8623, 62.0041
3d AP:33.6433, 41.2521, 45.2606
aos AP:18.36, 23.37, 27.70

I just used the pretrained SECOND-IoU model with the kitti_models/secondiou_oracle config.
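
For reference, here is roughly how I sanity-check what a downloaded checkpoint contains before pairing it with an eval config (a minimal sketch; the file name and dict keys below are my assumptions, not taken from the repo):

```python
# Sketch only: peek into the downloaded checkpoint to see what it was
# trained with, so the eval config can be matched to it.
# The file name and the 'model_state' key are assumptions.
import torch

ckpt = torch.load("secondiou_pretrained.pth", map_location="cpu")
print(list(ckpt.keys()))  # typically something like 'model_state', 'optimizer_state', 'epoch', ...

model_state = ckpt.get("model_state", ckpt)
for name, tensor in list(model_state.items())[:10]:
    # Head/anchor weight shapes reflect the config the checkpoint was
    # trained with, which should match the config used for evaluation.
    print(name, tuple(tensor.shape))
```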

@tyjiang1997
Author

When I use the config 'da-waymo-kitti_models/secondiou/secondiou_old_anchor_sn', I get the following result:
[2021-07-30 11:03:22,788 eval_utils.py 109 INFO] recall_roi_0.3: 0.977338
[2021-07-30 11:03:22,789 eval_utils.py 110 INFO] recall_rcnn_0.3: 0.977338
[2021-07-30 11:03:22,789 eval_utils.py 109 INFO] recall_roi_0.5: 0.949600
[2021-07-30 11:03:22,789 eval_utils.py 110 INFO] recall_rcnn_0.5: 0.949600
[2021-07-30 11:03:22,789 eval_utils.py 109 INFO] recall_roi_0.7: 0.746611
[2021-07-30 11:03:22,789 eval_utils.py 110 INFO] recall_rcnn_0.7: 0.746611
[2021-07-30 11:03:22,793 eval_utils.py 118 INFO] Average predicted number of objects(3769 samples): 9.362
[2021-07-30 11:03:43,656 eval_utils.py 129 INFO] Car AP@0.70, 0.70, 0.70:
bbox AP:81.0327, 77.9448, 79.4064
bev AP:77.7435, 71.9792, 70.3093
3d AP:70.1846, 61.3860, 59.4520
aos AP:80.87, 77.45, 78.74
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:84.6954, 78.0836, 79.7570
bev AP:79.4047, 72.5905, 72.6319
3d AP:70.1550, 61.2689, 60.8881
aos AP:84.50, 77.56, 79.06
Car AP@0.70, 0.50, 0.50:
bbox AP:81.0327, 77.9448, 79.4064
bev AP:86.2862, 79.7260, 80.9945
3d AP:82.0367, 79.1649, 80.4611
aos AP:80.87, 77.45, 78.74
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:84.6954, 78.0836, 79.7570
bev AP:87.2480, 82.6880, 84.4618
3d AP:85.8646, 80.6758, 82.2929
aos AP:84.50, 77.56, 79.06

@jihanyang
Member

May I know which model you used for evaluation?

@tyjiang1997
Author

> May I know which model you used for evaluation?

I used the pretrained SECOND-IoU model provided by you.

@jihanyang
Member

I will come back to test it in 2-3 days due to some urgent work. However, since other researchers have already tried to reproduce the results and verify the pretrained model (you can refer to the other issues), I guess this might be caused by the environment or data preparation.

@tyjiang1997
Author

> I will come back to test it in 2-3 days due to some urgent work. However, since other researchers have already tried to reproduce the results and verify the pretrained model (you can refer to the other issues), I guess this might be caused by the environment or data preparation.

Thank you for your reply.

@jihanyang
Member

I haven't found any problem with the provided model. Have you used OpenPCDet before? If so, you need to re-install it within our repo, since the versions of spconv and pcdet might not be well matched.
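
As a quick check of which pcdet/spconv build Python actually picks up (just a sketch of what I would run, not something from this repo):

```python
# Sketch only: confirm that the imported pcdet is the one compiled inside
# this repo (a stale OpenPCDet install can shadow it), and see which
# spconv build sits next to it.
import pcdet
import spconv

print("pcdet version :", pcdet.__version__)
print("pcdet location:", pcdet.__file__)   # should point into the ST3D checkout
print("spconv location:", spconv.__file__)
```

In any case, the provided checkpoint evaluates as expected on my side: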

[2021-08-02 17:37:01,102  eval_utils.py 52  INFO]  *************** EPOCH 7362 EVALUATION *****************
eval: 100%|██████████| 236/236 [01:04<00:00,  3.68it/s, recall_0.3=(3452, 3452) / 3532]
[2021-08-02 17:38:06,358  eval_utils.py 91  INFO]  *************** Performance of EPOCH 7362 *****************
[2021-08-02 17:38:06,378  eval_utils.py 93  INFO]  Generate label finished(sec_per_example: 0.0171 second).
[2021-08-02 17:38:06,378  eval_utils.py 109  INFO]  recall_roi_0.3: 0.977344
[2021-08-02 17:38:06,378  eval_utils.py 110  INFO]  recall_rcnn_0.3: 0.977344
[2021-08-02 17:38:06,378  eval_utils.py 109  INFO]  recall_roi_0.5: 0.949614
[2021-08-02 17:38:06,378  eval_utils.py 110  INFO]  recall_rcnn_0.5: 0.949614
[2021-08-02 17:38:06,378  eval_utils.py 109  INFO]  recall_roi_0.7: 0.746612
[2021-08-02 17:38:06,378  eval_utils.py 110  INFO]  recall_rcnn_0.7: 0.746612
[2021-08-02 17:38:06,380  eval_utils.py 118  INFO]  Average predicted number of objects(3769 samples): 9.534
/mnt/lustre/yangjihan/.local/lib/python3.7/site-packages/numba/core/typed_passes.py:314: NumbaPerformanceWarning: 
The keyword argument 'parallel=True' was specified but no transformation for parallel execution was possible.

To find out why, try turning on parallel diagnostics, see http://numba.pydata.org/numba-doc/latest/user/parallel.html#diagnostics for help.

File "../pcdet/datasets/kitti/kitti_object_eval_python/eval.py", line 121:
@numba.jit(nopython=True, parallel=True)
def d3_box_overlap_kernel(boxes, qboxes, rinc, criterion=-1):
^

  state.func_ir.loc))
[2021-08-02 17:39:07,021  eval_utils.py 129  INFO]  Car AP@0.70, 0.70, 0.70:
bbox AP:90.2642, 87.2909, 86.5538
bev  AP:89.3554, 83.9180, 82.0950
3d   AP:84.1257, 74.0156, 70.7837
aos  AP:90.21, 86.98, 86.14
Car AP_R40@0.70, 0.70, 0.70:
bbox AP:95.2374, 89.2881, 88.5335
bev  AP:91.9372, 85.7694, 83.7419
3d   AP:85.9557, 74.0871, 71.2213
aos  AP:95.17, 88.96, 88.07
Car AP@0.70, 0.50, 0.50:
bbox AP:90.2642, 87.2909, 86.5538
bev  AP:90.4402, 88.3341, 87.7081
3d   AP:90.4336, 88.2360, 87.5608
aos  AP:90.21, 86.98, 86.14
Car AP_R40@0.70, 0.50, 0.50:
bbox AP:95.2374, 89.2881, 88.5335
bev  AP:95.4826, 92.3827, 91.9459
3d   AP:95.4700, 91.9819, 91.5355
aos  AP:95.17, 88.96, 88.07

[2021-08-02 17:39:07,026  eval_utils.py 135  INFO]  Result is save to /mnt/lustre/yangjihan/ST3D/output/da-waymo-kitti_models/secondiou/secondiou_old_anchor_sn/default/eval/epoch_7362/val/default
[2021-08-02 17:39:07,026  eval_utils.py 136  INFO]  ****************Evaluation done.*****************
