Confusion about the results of 3DSSD between the official and MMDet3D implementations #612
Comments
I trained 3DSSD following the config in configs/3dssd/3dssd_kitti-3d-car.py with train+val data.
There exists a large margin compared with the official 3DSSD (76.48 vs. 79.55). I am confused about this: did I set something wrong? Or what can I do to close this performance gap?
The reason for the performance difference has been explained in the README page. Among the differences, the two most important ones are the different evaluation code and the different train/val split. The first one can yield about a 2 mAP difference, as stated in the README, while the second one will at least remove the influence of false-positive predictions in those samples without ground truths. In addition, we also cross-checked the benchmark by evaluating our results with their evaluation code and their results with our evaluation code; the results are almost the same. (Actually, we only reproduced 79.26 mAP with the official code, according to the record of @encore-zhou.) As for the difference on the test set, there is some uncertainty and there may be tricks involved. Have you ever tried to train a model with the official code and submit the result to the benchmark?
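(As an aside, one well-known source of such gaps between KITTI-style evaluation scripts is the number of recall positions over which precision is averaged; whether that is the exact difference mentioned in the README is an assumption on my part. A minimal sketch:

import numpy as np

def interpolated_ap(recalls, precisions, num_points=40):
    # recalls/precisions: per-confidence-threshold recall and precision arrays.
    # KITTI-style AP averages interpolated precision over a fixed set of
    # recall positions (e.g. 11 vs. 40); the exact sampling positions differ
    # slightly between protocols, which alone can shift AP by a few points.
    thresholds = np.linspace(0.0, 1.0, num_points)
    ap = 0.0
    for t in thresholds:
        mask = recalls >= t
        # Interpolation: take the best precision achieved at recall >= t.
        ap += (precisions[mask].max() if mask.any() else 0.0) / num_points
    return ap

Comparing interpolated_ap(r, p, 11) against interpolated_ap(r, p, 40) on the same precision-recall curve shows how much the protocol choice alone matters.)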
Thanks for your feedback! The official code is implemented in TensorFlow; I will try training a model, submitting the result to the test server, and evaluating the performance. New results will be posted here as soon as I get them.
By the way, was 79.26 evaluated on val data or test data? If it was evaluated on test data, the margin between 79.26 and 79.55 (official, on test data) is acceptable. My result on test data had a 3 mAP margin, which is unacceptable.
It's evaluated on their val dataset and with their evaluation code (compared with the reported 83.3). So I guess there is a large range of fluctuation in performance on the validation set. You can have a try first, and let's take a closer look into whether there is a gap between our implementation and the official one.
Got it. I will try to reproduce the result by following the official code.
It's a little strange, because when we reproduced 3DSSD, @encore-zhou only got the following performance with the official code (screenshot of the results omitted here). Maybe there is some fluctuation in performance?
By the way, this result is trained with more epochs; you can see that the performance further improved (reaching 82.9%).
Yes, it is really strange, because we reproduced the above results in Aug. 2020 (as shown in the screenshot) and there have been no updates to the official repo since April 2020. We will look into this issue soon. In the meantime, if you make any progress, please feel free to share it here.
Thanks for reopening this issue! New findings will be posted here.
Using PyTorch 1.5, I used the official configs in
I find it hard to reproduce the results on the KITTI test set, even though a good result on val could already be obtained.
If we set the confidence threshold to greater than 0.0 (the default, which outputs all plausible predictions), e.g. 0.2, to filter the final predictions in predictions_in_test.txt, we will get:
Though there is some improvement, it is far from the 79.57 moderate AP of 3DSSD on the leaderboard. I guess good post-processing is needed, but which other tricks can improve performance is still an open question.
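For reference, a minimal sketch of that kind of score filtering on KITTI-format result files (assuming predictions_in_test.txt follows the KITTI label format, where the detection score is the last field of each line):

def filter_predictions(in_path, out_path, score_thr=0.2):
    # Keep only the result lines whose trailing score field passes
    # the confidence threshold.
    with open(in_path) as f_in, open(out_path, 'w') as f_out:
        for line in f_in:
            fields = line.split()
            if fields and float(fields[-1]) >= score_thr:
                f_out.write(line)

filter_predictions('predictions_in_test.txt', 'filtered_predictions.txt')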
@Physu Have you ever tried generating a submission using the official code and submitting it to the test server to see the test set result? Also, it seems to me that changing mmdet3d's training batch size and number of GPUs from your single-GPU setup might make a difference. Please kindly provide more observations and I will try to look into this issue.
@Wuziyi616 Thanks for your attention! Does official code mean dvlab-research/3DSSD or something else?
I will reproduce it with the 4*4 setting, and then we can look further into the difference.
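(For reference, multi-GPU training in mmdet3d is typically launched with the repo's distributed training script, e.g. bash tools/dist_train.sh configs/3dssd/3dssd_kitti-3d-car.py 4 for 4 GPUs, with the per-GPU batch size taken from samples_per_gpu in the config's data dict; the exact batch setting for 3DSSD here is an assumption.)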
Exactly, the official code I meant is dvlab's code. I think that's the official code release for 3DSSD, isn't it? As you mentioned in this reply, you said you would like to submit test results using that code; have you done that?
Thanks for your attention! My chances to submit are running out; the results will be updated soon.
@Physu
@Physu
Thanks for the developers' extraordinary work!
I have a question about the 3DSSD evaluation results of the author's implementation versus the MMDet3D implementation.
The author's released result:
In MMDet3D, the result:
I noticed the "Experiment details on KITTI datasets" section, which shows the differences from the official implementation.
1. The official implementation is based on TensorFlow 1.4, but I guess PyTorch is not the reason for the poorer performance, or is there a performance gap between TensorFlow and PyTorch?
2. There is about a two-percent margin (81.0 vs. 83.3) between the two implementations; can we come up with some methods to close it?
I also used a single 2080Ti to train a train+val model with configs/3dssd/3dssd_kitti-3d-car.py. I modified
ann_file=data_root + 'kitti_infos_train.pkl',
to
ann_file=data_root + 'kitti_infos_trainval.pkl',
and kept the rest of the code as-is. When training finishes, I will evaluate on test and post the result here for discussion.
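(For reference, a sketch of generating a KITTI test-set submission with mmdet3d's tools/test.py; the checkpoint path and output prefixes below are placeholders, and the option flag name (--eval-options vs. --options) varies across mmdet3d versions:

python tools/test.py configs/3dssd/3dssd_kitti-3d-car.py \
    work_dirs/3dssd/latest.pth --format-only \
    --eval-options 'pklfile_prefix=results/kitti_3dssd' \
    'submission_prefix=results/kitti_3dssd'

The .txt files written under submission_prefix are what gets uploaded to the KITTI test server.)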
Thanks again!