
PointPillar's performance is not so good compared with published results #35

Closed
Son-Goku-gpu opened this issue Jan 7, 2020 · 15 comments

@Son-Goku-gpu

Hi @poodarchu Thanks for your great code! I trained PointPillars with the default config, and the performance is as follows, similar to the results posted by @s-ryosky in #18.

[screenshot: evaluation results with the default config]

For the car category, the published moderate-level mAP on the KITTI 3D test set is 74.99, while my trained model only reaches 75.66 on the KITTI val set. It does not seem able to exceed the published result.
As far as I know, other researchers can reach about 77 on val with PointPillars, so I wonder if there is any problem in the configs. Could you publish your results? Thanks a lot!

@s-ryosky
Contributor

s-ryosky commented Jan 7, 2020

The PointPillars config was updated recently.
Which one did you use for training?

In the old config, voxel_size was set to [0.2, 0.2, 4.0], but the published results seem to use voxel_size = [0.16, 0.16, 4.0].
Please use the smaller voxel size.
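As a rough sketch of why this matters (the point-cloud range below is the common KITTI car setting and is my assumption, not taken from this repo's config), the finer voxel size produces a denser pillar grid:

```python
# Hypothetical sketch: effect of voxel_size on the pillar grid.
# The point-cloud range is an assumption (common KITTI car setting);
# check the actual config under examples/point_pillars/configs.
point_cloud_range = [0.0, -39.68, -3.0, 69.12, 39.68, 1.0]
voxel_size = [0.16, 0.16, 4.0]  # finer grid matching the published results

def grid_dims(pc_range, vsize):
    """Number of pillars along x and y for a given range and voxel size."""
    nx = round((pc_range[3] - pc_range[0]) / vsize[0])
    ny = round((pc_range[4] - pc_range[1]) / vsize[1])
    return nx, ny

print(grid_dims(point_cloud_range, voxel_size))  # (432, 496)
```

A smaller voxel_size means a denser grid and more compute per frame, which is consistent with the mAP gap discussed above.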

@poodarchu
Collaborator

Currently, all versions of PyTorch's sync BN implementation have bugs. In my experiments, APEX's implementation seems able to match single-GPU performance on some models. So training PointPillars on a single GPU might get you a higher mAP.

@muzi2045

muzi2045 commented Jan 7, 2020

When training CBGS, are the loss values normal?
It looks like the x, y velocity losses take the major role in the loss computation.

2020-01-07 11:27:01,209 - INFO - Epoch [5/20][5850/21350]	lr: 0.00060, eta: 4 days, 14:32:00, time: 1.150, data_time: 0.035, transfer_time: 0.048, forward_time: 0.367, loss_parse_time: 0.000 memory: 4948, 
2020-01-07 11:27:01,209 - INFO - task : ['car'], loss: 3.2919, cls_pos_loss: 0.3375, cls_neg_loss: 0.0345, dir_loss_reduced: 0.3267, cls_loss_reduced: 0.4064, loc_loss_reduced: 2.8202, loc_loss_elem: ['0.0188', '0.0287', '0.2736', '0.0352', '0.0346', '0.0624', '0.7662', '1.3994', '0.2013'], num_pos: 19.6800, num_neg: 31705.3800
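For what it's worth, the nine loc_loss_elem values in the log line sum to loc_loss_reduced, and under my reading that elements 6 and 7 are the vx, vy terms (an assumption about CBGS's regression-target ordering, not stated in the log), the velocity terms do dominate:

```python
# loc_loss_elem values copied from the log line above.
loc_loss_elem = [0.0188, 0.0287, 0.2736, 0.0352, 0.0346,
                 0.0624, 0.7662, 1.3994, 0.2013]

total = sum(loc_loss_elem)                       # equals loc_loss_reduced
velocity = loc_loss_elem[6] + loc_loss_elem[7]   # assumed vx, vy terms

print(round(total, 4))             # 2.8202, matching loc_loss_reduced
print(round(velocity / total, 2))  # 0.77 -> velocity is ~77% of the loc loss
```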

@poodarchu
Collaborator

> training CBGS, the loss value are normal?
> it looks like the x,y velocity loss take a major role in loss compute.

It seems to be correct.

@Son-Goku-gpu
Author

Thanks! @poodarchu @s-ryosky I did train the model with a single GPU, but I didn't update the voxel_size and range in the config as @s-ryosky mentioned. I am not sure how much improvement this change will bring, but I'll try it and post the results later.

@Son-Goku-gpu
Author

@poodarchu BTW, will you implement PointRCNN and release the code later?

@poodarchu
Collaborator

> @poodarchu BTW, will you implement PointRCNN and release the code later?

Yes. It will be released soon.

@Son-Goku-gpu
Author

@poodarchu Thanks! Looking forward...

@poodarchu
Collaborator

> @poodarchu Thanks! Looking forward...

Are you interested in reproducing other models, such as VoteNet, based on Det3D?

@Son-Goku-gpu
Author

@poodarchu I am doing research on 3D detection and want to implement some ideas based on Det3D. Maybe I will build some models on VoteNet later, but I am not sure yet. If I need it, I will implement it based on Det3D and open a pull request.

@poodarchu
Collaborator

> @poodarchu I am doing research on 3d detection and want to implement some ideas based on Det3D, maybe I will create some models based on VoteNet later, but I am not sure now. If I need it, I will implement it based on Det3D and pull a request.

Thanks.

@abhigoku10

@Son-Goku-gpu @poodarchu I found this paper a few months back, but there is no implementation; it has all-round functionality: https://arxiv.org/abs/1904.07537 . Please share your views on it.
@Son-Goku-gpu can you share your email address? I have a few queries I would like to ask by mail, if you don't mind.

@Son-Goku-gpu
Author

@poodarchu As mentioned by @s-ryosky, after changing the range and voxel_size in the config file, I can achieve the following mAP with PointPillars:
[screenshot: evaluation results after updating range and voxel_size]
The results seem more reasonable. Thank you!

@Son-Goku-gpu
Author

@abhigoku10 I remember it is a workshop paper; the results were so poor that I didn't spend much time on it. You can contact me by email: 1143883958@qq.com.

@GYE19970220

GYE19970220 commented Nov 30, 2020

I followed the newest instructions to train PointPillars, then ran the following to test:

```shell
python test.py ../examples/point_pillars/configs/kitti_point_pillars_mghead_syncbn.py epoch_100.pth --show
```

The results are as follows; there is a big difference from the 3D results above, and the result is exactly the same as the val result after the 100th epoch. Is there a problem with my command? Any help would be appreciated. @poodarchu
[screenshot: evaluation results]
