
about the base model result #57

Open
robotzheng opened this issue May 13, 2024 · 7 comments

@robotzheng commented May 13, 2024

plan_L2_1s:0.3348474009348846
plan_L2_2s:0.6005098198601633
plan_L2_3s:0.9474086193724374
plan_obj_col_1s:0.0
plan_obj_col_2s:0.0
plan_obj_col_3s:3.255844337443258e-05
plan_obj_box_col_1s:0.0019535065442469234
plan_obj_box_col_2s:0.002979097479976558
plan_obj_box_col_3s:0.006088428835334152

projects/configs/VAD/VAD_base_e2e.py

L2 is better, but collision is worse than the paper's. Why?

@robotzheng (Author)

loss_plan_col=dict(type='PlanCollisionLoss', loss_weight=1.0),

Can I make the above "loss_weight" bigger?
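For what it's worth, a minimal sketch of how one could try a larger weight without editing the file, assuming the loss sits under model.pts_bbox_head as in other mmdetection-style configs (the key path is my assumption, not confirmed from the repo):

from mmcv import Config

# Load the config and raise the planning collision loss weight.
# NOTE: the key path model.pts_bbox_head.loss_plan_col is an assumption;
# verify where loss_plan_col actually lives in VAD_base_e2e.py.
cfg = Config.fromfile('projects/configs/VAD/VAD_base_e2e.py')
cfg.model.pts_bbox_head.loss_plan_col.loss_weight = 2.0  # was 1.0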

@Halo5082

Did you train the model yourself, or use the pretrained model? If you used the pretrained one, I think VAD_base_stage_2.py should be used instead of VAD_base_e2e.py.
Additionally, I think they use a different image normalization config in the paper and the pretrained model. See #18 (comment) and #9 (comment).
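For reference, these are the two img_norm_cfg variants commonly seen in mmdetection-based repos; which one the released VAD checkpoint expects is exactly what the linked comments discuss. The values below are the standard Caffe/torchvision ImageNet statistics, not copied from this repo:

# Caffe-style ResNet weights: BGR input, per-channel mean subtraction only.
img_norm_cfg_caffe = dict(
    mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False)

# torchvision-style ResNet weights: RGB input, ImageNet mean and std.
img_norm_cfg_torchvision = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)

Evaluating a checkpoint with the wrong one of these typically degrades metrics quietly rather than crashing, so it is worth ruling out first.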

@robotzheng (Author)

I trained the model from resnet50-19c8e357.pth with the VAD_base_e2e.py config. Is the "loss_weight" not important?

@StevenJ308

I was in a similar situation:

-------------- Motion Prediction --------------
EPA_car: 0.6096603478939834
EPA_pedestrian: 0.3235923685435086
ADE_car: 0.8396922945976257
ADE_pedestrian: 0.8586953282356262
FDE_car: 1.1747483015060425
FDE_pedestrian: 1.189048409461975
MR_car: 0.12827822120866592
MR_pedestrian: 0.16290983606557377

-------------- Planning --------------
gt_car:4.503418636452432
gt_pedestrian:2.099042781793319
cnt_ade_car:3.6200429771439735
cnt_ade_pedestrian:1.077358859152178
cnt_fde_car:3.4264504786091035
cnt_fde_pedestrian:0.9533111935924985
hit_car:2.9869115061535454
hit_pedestrian:0.7980074233248682
fp_car:0.40378980269583903
fp_pedestrian:0.20296932994725533
ADE_car:3.1434736251831055
ADE_pedestrian:0.9509602189064026
FDE_car:4.025217056274414
FDE_pedestrian:1.1335331201553345
MR_car:0.43953897245555773
MR_pedestrian:0.1553037702676304
plan_L2_1s:0.30277927277533684
plan_L2_2s:0.5726086170295752
plan_L2_3s:0.949615497207263
plan_obj_col_1s:0.0
plan_obj_col_2s:0.0
plan_obj_col_3s:3.255844337443258e-05
plan_obj_box_col_1s:0.0034186364524321157
plan_obj_box_col_2s:0.005176792342254347
plan_obj_box_col_3s:0.011851273242745881
fut_valid_flag:1.0

projects/configs/VAD/VAD_tiny_e2e.py

@robotzheng (Author)

@StevenJ308, I have checked my log file and did not find that the resnet50 model was re-downloaded. Maybe some hyperparameters are not the same as the paper's.

@yuyuyuyuyuty

I would like to ask why I cannot use my own trained .pth file for testing. When I use my own .pth, I still get this error:

result_dict['ADE_'+cls] = all_metric_dict['ADE_'+cls] / all_metric_dict['cnt_ade_'+cls]
ZeroDivisionError: float division by zero
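That traceback means cnt_ade_<cls> is 0, i.e. no predictions were matched for that class, which usually points at a checkpoint or config mismatch rather than the metric code itself. A minimal defensive sketch (not the repo's actual code) of that division:

# Guard the per-class metric division; cnt_ade_<cls> == 0 means no matched
# predictions for that class, so report NaN instead of crashing.
cnt = all_metric_dict['cnt_ade_' + cls]
result_dict['ADE_' + cls] = (
    all_metric_dict['ADE_' + cls] / cnt if cnt > 0 else float('nan'))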

@yuyuyuyuyuty

> Did you train the model yourself, or use the pretrained model? If you used the pretrained model, I think VAD_base_stage_2.py should be used instead of VAD_base_e2e.py. Additionally, I think they use a different image normalization config in the paper and the pretrained model. See #18 (comment) and #9 (comment).
Hello, I would like to ask: what is the difference between the .pth I trained myself and the .pth provided with the paper?
