Preprocessing time #14
BEV generation normally takes 12-18 ms, so this seems like an issue with your CPU. For faster anchor filtering with integral images, the voxel size can also be increased, but only for the voxel grid used for filtering. Ideally, these preprocessing operations could be moved onto the GPU to run faster; pull requests are welcome. We have also noticed that TensorFlow is much slower for the first 3-5 samples inferenced, so timing should only start after several samples have already been run. GPU inference should not take that long if you are using one of the latest GPU models.
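A minimal sketch of timing with warmup along these lines (the `sess`, `prediction_op`, and `feed_dicts` names are placeholders, not this repo's actual identifiers):

```python
import time

# Discard the first few inference timings, since TensorFlow is much
# slower on the first 3-5 samples (graph setup, cuDNN autotuning,
# memory allocation). All names below are hypothetical placeholders.
WARMUP_SAMPLES = 5
timings = []

for i, feed_dict in enumerate(feed_dicts):
    start = time.time()
    sess.run(prediction_op, feed_dict=feed_dict)
    if i >= WARMUP_SAMPLES:
        timings.append(time.time() - start)

print('Mean inference time after warmup: {:.5f} s'.format(
    sum(timings) / len(timings)))
```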
It might be related to my NumPy configuration (`numpy.show_config()`). Could you print this info on your machine? It would help a lot.
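For reference, `numpy.show_config()` is part of NumPy's public API and can be run directly:

```python
import numpy

# Prints how NumPy was built, including which BLAS/LAPACK libraries
# (e.g. OpenBLAS or MKL) it is linked against; a reference-BLAS build
# can make array operations noticeably slower.
numpy.show_config()
```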
It is rather clear now: the preprocessing takes roughly 0.1 s, and the np.stack calls could be optimized. When you report 0.02 s, do you mean the time to fill empty anchors is thrown away?
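As an illustration of the kind of np.stack optimization meant here (a hedged sketch with made-up shapes, not the repo's actual code), preallocating the output buffer avoids building a list and copying everything a second time:

```python
import numpy as np

def make_sample(i):
    # Stand-in for whatever produces one per-sample array
    return np.full((100, 7), i, dtype=np.float32)

n = 1000

# Pattern that shows up in a profile as np.stack: collect a list,
# then copy all of it once more into a new array.
samples = [make_sample(i) for i in range(n)]
batch = np.stack(samples)

# Alternative: allocate once, write each sample exactly once.
batch2 = np.empty((n, 100, 7), dtype=np.float32)
for i in range(n):
    batch2[i] = make_sample(i)

assert np.array_equal(batch, batch2)
```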
All times in seconds:

| Stage | Min | Max | Mean | Median |
| --- | --- | --- | --- | --- |
| Read disk (image, calib, ground, pc) | 0.01494 | 0.03513 | 0.02009 | 0.02044 |
| Create BEV | 0.01963 | 0.09319 | 0.0378 | 0.034 |
| Load sample | 0.05957 | 0.18219 | 0.08599 | 0.07644 |
| Fill anchor | 0.0688 | 0.16987 | 0.0908 | 0.07938 |
| Feed dict | 0.12845 | 0.29517 | 0.17686 | 0.15515 |
| Inference | 0.08431 | 2.92182 | 0.16493 | 0.09333 |
The preprocessing time is profiled above; it is much larger than 0.02 s, and I don't think my CPU is particularly weak. Do you have any suggestions? For example, why is the fill anchor time so expensive?
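For reproducibility, per-stage statistics like those above can be gathered with a pattern like the following sketch (the loop body is a stand-in, not the actual pipeline code):

```python
import time
import numpy as np

timings = []

for sample in range(20):      # placeholder loop over samples
    start = time.time()
    time.sleep(0.01)          # stand-in for the stage under test,
                              # e.g. BEV creation or anchor filling
    timings.append(time.time() - start)

t = np.array(timings)
print('Min: {:.5f}  Max: {:.5f}  Mean: {:.5f}  Median: {:.5f}'.format(
    t.min(), t.max(), t.mean(), np.median(t)))
```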