
How to detect all points? #12

Open
Oofs opened this issue Oct 17, 2018 · 15 comments

@Oofs

Oofs commented Oct 17, 2018

Hi,
I ran detection on VLP-16 data; the result is shown in RViz and it looks pretty good.
But I have noticed that only points with x > 0 are detected. Do you know how to make the net detect all points?

[screenshot: car_dect]

@johleh

johleh commented Oct 17, 2018

An approach that works with VoxelNet and should work with SECOND:

As far as I understand, the model is trained on the 80×70×4 m area in front of the car, and inference produces labels only for objects visible in image space.

To get inference results for all points, you could use your 360° data four times: rotate the data by 0°/90°/180°/270°.
Then use the pre-processing step that reduces these point clouds to the part that is visible in the image
(or just cut everything that is not within -45° to 45° of the front-view center).
Run inference on all four reduced point clouds and rotate the predictions back accordingly.
Together these predictions should cover all objects in the 360° point cloud.

Maybe two parts (original and 180°-rotated) are enough.
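The four-rotation scheme above can be sketched roughly like this. NumPy only; `run_inference` is a placeholder for whatever wrapper you use around the SECOND model, and the box row format `(x, y, z, w, l, h, yaw)` is an assumption on my part:

```python
import numpy as np

def rotate_z(xy, angle_rad):
    """Rotate (N, 2) x/y coordinates counter-clockwise around the z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    rot = np.array([[c, -s], [s, c]])
    return xy @ rot.T

def front_crop(points, half_fov_deg=45.0):
    """Keep only points within +/- half_fov_deg of the +x (front) axis."""
    ang = np.degrees(np.arctan2(points[:, 1], points[:, 0]))
    return points[np.abs(ang) <= half_fov_deg]

def detect_360(points, run_inference):
    """Cover 360 deg with a front-view-only detector via four rotated passes."""
    all_boxes = []
    for deg in (0.0, 90.0, 180.0, 270.0):
        rad = np.radians(deg)
        rotated = points.copy()
        rotated[:, :2] = rotate_z(points[:, :2], rad)   # bring one quadrant to the front
        boxes = run_inference(front_crop(rotated)).copy()  # assumed (M, 7): x,y,z,w,l,h,yaw
        boxes[:, :2] = rotate_z(boxes[:, :2], -rad)     # rotate the centers back
        boxes[:, 6] -= rad                              # and the headings
        all_boxes.append(boxes)
    return np.concatenate(all_boxes, axis=0)
```

Overlapping detections near the sector borders would still need some de-duplication (e.g. NMS) on top of this.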

@Oofs
Author

Oofs commented Oct 18, 2018

@johleh thanks for your advice. That seems a reasonable approach and I will try it.
I found that simply changing the following ranges in the .config file to the desired values makes the net detect a bigger area: point_cloud_range, anchor_ranges, post_center_limit_range. However, the result is not quite stable.
[screenshot: car_detect]
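For reference, these are the fields in question in the protobuf text config (e.g. car.config). The nesting below is paraphrased from memory of the repo's format and may differ between versions, so treat it as a sketch rather than tested settings:

```
voxel_generator {
  point_cloud_range : [0, -40, -3, 70.4, 40, 1]   # xmin, ymin, zmin, xmax, ymax, zmax (m)
}

anchor_generator_range: {
  anchor_ranges: [0, -40.0, -1.78, 70.4, 40.0, -1.78]  # anchors placed at a fixed z
}

post_center_limit_range: [0, -40, -5, 70.4, 40, 5]  # predicted centers outside this are dropped
```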

@traveller59
Owner

@Oofs you are right. Note that if you use anchor_generator_stride (not recommended), you need to change the offset when you change the detection range.
In addition, the pretrained model encodes the absolute locations of voxels, so if you want to detect objects outside the camera range in KITTI, you may need to train a new model.

@bigsheep2012

@traveller59 May I know why the range is [-3, 1] for the z axis ([0, -40, -3, 70.4, 40, 1] in car.config)? According to the official KITTI documentation, the z axis of the lidar coordinate frame points upwards to the sky. Then why is -3 needed, which seems to include points under the ground?

@traveller59
Owner

@bigsheep2012 The distribution of car bottom-center locations lies in [-3, 1]. You can plot the distributions to check.
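As a rough sanity check of why that range dips below zero (assuming a flat road and the roughly 1.73 m Velodyne mount height of the KITTI setup; both numbers are my assumptions here, not taken from this thread):

```python
# In the KITTI lidar frame, z = 0 is at the sensor, which sits about
# 1.73 m above the road, so the ground plane is near z = -1.73.
sensor_height = 1.73        # approximate Velodyne mount height (m)
ground_z = -sensor_height   # flat ground in the lidar frame

z_range = (-3.0, 1.0)       # the z bounds used in car.config
# Car bottom centers sit near the ground, so they fall inside the range,
# with margin on both sides for slopes and non-flat scenes.
assert z_range[0] < ground_z < z_range[1]
print(ground_z)  # -1.73
```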

@kwea123

kwea123 commented Nov 7, 2018

The Velodyne sensor is located 1.73 m above the ground (on top of the car).
For detection of points behind the car, maybe you can try multiplying the x coordinates by -1, running the detection, then transforming the boxes back.

@cedricxie

cedricxie commented Nov 15, 2018

Hi @Oofs, it is great to see that you also seem to have tried some detection in ROS.

I ran SECOND as a ROS node on one of the KITTI datasets and the result is at youtube link. The code is at repository link. I feel the performance is not as expected and that I might have done something wrong. Could you please take a look and see if you have any suggestions for improvement? Thank you very much.

Yuesong

@kwea123

kwea123 commented Nov 16, 2018

@Oofs Hi, how does the network perform on 16-line data? Thank you.

@chowkamlee81

I have changed point_cloud_range to [-80, -69.12, -3, 80, 69.12, 1], but I am still not able to detect objects for x < 0. Kindly suggest a solution if anybody has found one.

@chowkamlee81

@kwea123 @traveller59 @cedricxie kindly help us with this issue.

@chowkamlee81

@Oofs, how did you get detection results for x < 0? Kindly suggest what parameters I need to change.

So far I have used point_cloud_range: [-80, -69.12, -3, 80, 69.12, 1] and post_center_limit_range: [-80, -69.12, -5, 80, 69.12, 5].

Kindly help.

@kwea123

kwea123 commented Mar 6, 2019

Don't modify the config; keep the range at [0, -40, -3, 70.4, 40, 1] or whatever it is.
Basically, just do two detections, one for x > 0 as usual. To do detection on x < 0, "flip" those points to the front by multiplying x by -1 (or do x *= -1 and y *= -1 to rotate them to the front), run detection on these points, and finally rotate the boxes back (you may have to write this code yourself).
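A minimal sketch of that two-pass approach with the 180° rotation (x *= -1, y *= -1), where `run_inference` is a placeholder for your SECOND inference wrapper and boxes are assumed to come back as `(x, y, z, w, l, h, yaw)` rows:

```python
import numpy as np

def detect_front_and_back(points, run_inference):
    """Cover x > 0 and x < 0 with a front-only detector via two passes."""
    front = run_inference(points)        # assumed (M, 7): x, y, z, w, l, h, yaw
    flipped = points.copy()
    flipped[:, 0] *= -1                  # x *= -1 and y *= -1 together
    flipped[:, 1] *= -1                  # rotate the cloud 180 deg about z
    back = run_inference(flipped).copy()
    back[:, 0] *= -1                     # rotate the boxes back...
    back[:, 1] *= -1
    back[:, 6] += np.pi                  # ...including their headings
    return np.concatenate([front, back], axis=0)
```

With only x *= -1 (a mirror, not a rotation) the yaw correction would differ, which is why rotating both axes is the simpler variant to undo.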


@chowkamlee81

Ok, thanks, let me try the suggested steps.

@dingfuzhou

> Ok, thanks, let me try the suggested steps.

Hi, have you managed to detect the objects behind the camera correctly? Could you provide some suggestions, please?
