How to inference #8
inference.py is currently only used for the KITTI viewer; you can check the inference steps in viewer.py.
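Those inference steps boil down to: voxelize the point cloud, build an example dict, and call the network. As a rough, self-contained illustration of the voxelization step only (this toy function and its names are my own simplification, not the repo's numba implementation):

```python
import numpy as np

def points_to_voxels_toy(points, voxel_size, range_min, max_points=35):
    """Group points into voxels by integer grid coordinate (simplified sketch)."""
    coords = ((points[:, :3] - range_min) / voxel_size).astype(np.int32)
    voxels = {}
    for point, coord in zip(points, map(tuple, coords)):
        buf = voxels.setdefault(coord, [])
        if len(buf) < max_points:  # cap points per voxel, as VoxelNet does
            buf.append(point)
    return voxels

pts = np.array([[0.1, 0.1, 0.1, 1.0],   # (x, y, z, intensity)
                [0.2, 0.1, 0.1, 0.5],
                [1.6, 0.3, 0.2, 0.9]])
voxels = points_to_voxels_toy(pts,
                              voxel_size=np.array([0.5, 0.5, 0.5]),
                              range_min=np.array([0.0, 0.0, 0.0]))
print(len(voxels))  # 2 voxels: the first two points share a grid cell
```

The real pipeline does this grouping in a numba-compiled function and then feeds the voxel features and coordinates into the network.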
Thanks a lot.
Yes, but you still need to create the dict that the net call needs. You need to add some code in VoxelNet.predict to return LiDAR boxes:

```python
final_box_preds_camera = box_torch_ops.box_lidar_to_camera(
    final_box_preds, rect, Trv2c)
if self.lidar_only:
    predictions_dict = {
        "box3d_lidar": final_box_preds,
        "scores": final_scores,
        "label_preds": label_preds,
        "image_idx": img_idx,
    }
else:
    # camera code
```
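As a runnable toy of what that lidar_only branch produces (the arrays below are dummy stand-ins for the network's real output, not actual predictions):

```python
import numpy as np

# Dummy stand-ins for what VoxelNet.predict computes internally.
final_box_preds = np.zeros((2, 7), dtype=np.float32)  # (x, y, z, w, l, h, ry) per box
final_scores = np.array([0.9, 0.7], dtype=np.float32)
label_preds = np.array([0, 1], dtype=np.int64)
img_idx = 107
lidar_only = True

if lidar_only:
    # No rect / Trv2c calibration needed: skip box_lidar_to_camera entirely.
    predictions_dict = {
        "box3d_lidar": final_box_preds,
        "scores": final_scores,
        "label_preds": label_preds,
        "image_idx": img_idx,
    }
else:
    # The camera branch would convert boxes with box_lidar_to_camera(...)
    # and add camera-frame fields; it needs calibration matrices.
    predictions_dict = None

print(sorted(predictions_dict))  # ['box3d_lidar', 'image_idx', 'label_preds', 'scores']
```

The point is that the LiDAR-only path never touches rect or Trv2c, which is why no image or calib input is required.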
Thank you. Closing the issue.
Hi! @bigsheep2012
Hello @kxhit, I am not using a pretrained model: as @traveller59 said, there might be some issues using a pretrained model with SparseConvNet from Facebook Research. I trained the model from scratch for about one day on a 1080 Ti. I am afraid I may not be able to share the code, because it was done in a company. My approach was to first extract the voxel generator, target_assigner, and voxelnet parts to build a small version without any data augmentation, and then add the other pieces. Make modifications in box_np_ops.* carefully, as there are many small changes needed if you are using your own data.
@bigsheep2012 @bigsheep2018 @traveller59 Thanks for your replies!
I want to know which stage the 0.05 s specifically refers to, and why it takes me much more time. Am I doing something wrong?
@kxhit Some code needs JIT compilation at run time, so the first run may cost extra time. You can see that the input preparation time is very long because point_to_voxel is a numba.jit function; the following runs should cost much less time.
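A common way to account for that one-time JIT cost when benchmarking is to discard warm-up calls before timing. A generic sketch in pure Python (numba itself is not required here; the helper name is my own):

```python
import time

def benchmark(fn, *args, warmup=1, runs=5):
    """Return the best wall-clock time of fn(*args) after warm-up calls.

    The warm-up calls absorb one-time costs such as numba JIT compilation,
    so the reported time reflects steady-state speed.
    """
    for _ in range(warmup):
        fn(*args)
    best = float("inf")
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

t = benchmark(sum, list(range(10000)))
print(f"steady-state time: {t * 1e6:.1f} us")
```

Timing only after warm-up is why the 0.05 s figure is not what you see on the very first frame.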
Hi @kxhit. Firstly, I do not think the KITTI submission by @traveller59 uses the shared GitHub version, as sparse convolution on GPU is not implemented in the GitHub code; thus the time cost might differ somewhat. Secondly, if you are using the author's original code (and the reduced point cloud mentioned in the README) without any modification, you should get a faster result. In my case, on my desktop with a 1060 6G, the detection time is 100-120 ms.
As you said, the following runs cost much less time. Thanks! Testing on a TITAN XP 12G.
@kxhit
@kxhit @bigsheep2012 Currently I can't reproduce the speed on Ubuntu 18.04, PyTorch 1.0, and the newest SparseConvNet. The forward time (not including input preparation time) for point cloud 107 is 0.069 s in the current environment, but I could get 0.049 s on the previous 16.04 setup with a 1080 Ti; you can check the deprecated KittiViewer picture in README.md.
@bigsheep2012 I am also trying to predict on LiDAR data only (from a custom LiDAR); I do not have image or calib input. Could you please tell me which files I need to modify to remove the image and calib parameters?
I have the same problem. Have you solved it?
Hello,
May I know how to run inference on an example from the KITTI test dataset, or on a point cloud file in KITTI format but without an info file? I have checked second/pytorch/inference.py; it seems a little different from train.py (e.g., should I create a reduced-point-cloud version of the original point cloud?).
Just want to make sure I am using the inference code in the correct way.
Thanks in advance.
Lin