
Bad result for VLP-16 input #1

Closed
poornimajd opened this issue May 5, 2020 · 4 comments

@poornimajd

Hey @anirudhtopiwala, great work!
I am trying to convert my VLP-16 point cloud to a 2D image using this repo. I changed the following parameters in the code for the VLP-16:
Fov_Up = 15 degrees
Fov_Down = -15 degrees
Num of Lasers = 16
Length of the image = 1024
But I got a very bad result. Is there anything else I am missing?

[Screenshot: bad_range]
Thank you

@anirudhtopiwala
Owner

Hi, I was not able to replicate a similar image. I have attached the output I get with the parameters you mentioned. It will be difficult to visualize, though, as the image has only 16 rows (16 pixels tall). Are you loading the test.pcd file in assets?
[Screenshot: Intensity Image, 05 05 2020]

@poornimajd
Author

poornimajd commented May 5, 2020

Thank you for the response.
The image you showed, is it from a VLP-16 .pcd file?
Yes, I am loading it in .pcd format. I actually converted a .npy file to .pcd.
Here are the .npy file, the corresponding .pcd file, and the image:
https://github.com/poornimajd/show/tree/master
These are samples from the IDD dataset.

@anirudhtopiwala
Owner

So I took a look at the .pcd file you provided. There are a couple of things to take into account.

  1. The point cloud you provided does not cover 360 degrees: its yaw values range from -90 to 90 degrees. Therefore, to shift the origin to the left side as I mentioned in the blog, you need to add π/2 to each yaw value, so the yaw values range over [0, π], which can then be normalized by dividing by π (the length of the range).
    tl;dr
    Change line 74 of Spherical_View_Projection.cpp to:
    double v = (yaw + M_PI/2) / M_PI;

  2. The intensity values in the provided point cloud range over [0, 255]. You need to normalize them by dividing by 255: since the image values are floats, OpenCV expects them in the range [0, 1].
    Note: after normalizing, it will be difficult to visually understand what's happening in the image, so you can skip this step just for visualization. However, if you are training a deep learning network, don't forget to normalize.
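The two fixes above can be sketched together as follows. This is a hedged illustration with made-up helper names (`yawToColumnFraction`, `normalizeIntensity`), not the actual code in Spherical_View_Projection.cpp, and it assumes yaw lies in [-π/2, π/2] and intensity in [0, 255] as described for this cloud:

```cpp
#include <cmath>

const double kPi = 3.14159265358979323846;

// Fix 1: shift the yaw origin for a front-facing (non-360) cloud,
// then normalize to [0, 1] for the horizontal image coordinate.
double yawToColumnFraction(double x, double y) {
    double yaw = std::atan2(y, x);      // in [-pi/2, pi/2] for this cloud
    return (yaw + kPi / 2.0) / kPi;     // shift to [0, pi], normalize by pi
}

// Fix 2: scale intensity from [0, 255] into the [0, 1] range that
// OpenCV expects when displaying floating-point images.
double normalizeIntensity(double intensity) {
    return intensity / 255.0;
}
```

For example, a point straight ahead (yaw = 0) maps to the middle column fraction 0.5, and an intensity of 255 maps to 1.0.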

The final output looks like this:
[Screenshot: Intensity Image, 06 05 2020]

@poornimajd
Author

Thanks a lot for the detailed answer!
