
added organize_cloud support #65

Closed

Conversation

@spuetz (Contributor) commented Nov 5, 2015

Added organize_cloud support. The published clouds are organized when the organize_cloud option is enabled. This enables very fast estimation of point normals and very fast triangulation to a mesh.
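
For illustration only (the parameter name organize_cloud comes from this PR; the function and node handle below are hypothetical), the option could be read with roscpp like this:

```cpp
// Hypothetical sketch only: how the organize_cloud option could be read with
// roscpp. The parameter name comes from this PR; everything else is made up.
#include <ros/ros.h>

bool readOrganizeCloudOption(ros::NodeHandle& private_nh)
{
  bool organize_cloud = false;
  private_nh.param("organize_cloud", organize_cloud, false);
  return organize_cloud;
}
```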

@jack-oquin (Member)

I like the fact that you added a new option controlling this feature. Not all devices can support it.

It's hard to tell exactly what changed from the diffs because big blocks of code were re-indented. Is that necessary?

What device models have you tested it with?

@spuetz (Contributor, Author) commented Nov 18, 2015

Hi, I tested it with an HDL-32E and also with recorded data, and it works fine. First of all, I moved the if statement out of the loop, because it does not depend on the loop variable j and it filters the full set of 2D scan lines from one firing of the lasers. This filtered data therefore does not have to be part of the scan, even when the cloud should be organized. Hoisting the if statement changes the indentation of the big code block; it makes the code more logical and readable, but does not change the behaviour.
The part of the code which creates an organized cloud simply sorts the scan lines into the right order and sets values which are not in range to NaN, to maintain the organized structure. Neighbouring scan lines have to result in neighbouring points in the organized cloud. The feature is a great one: it is simple, but it enables fast estimation of good point normals and also fast surface reconstruction. This helps, e.g., with the robustness of point cloud registration and therefore with more robust localization.
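
A minimal sketch of the out-of-range handling described above (not the actual PR code; the function and parameter names are made up, and a PCL cloud is assumed as in the driver at the time): each return keeps its cell in the grid, but invalid returns are stored as NaN.

```cpp
// Minimal sketch only; the NaN idea comes from the PR description,
// the function and its signature are hypothetical.
#include <limits>

#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void addReturn(pcl::PointCloud<pcl::PointXYZI>& cloud,
               float x, float y, float z, float intensity,
               float distance, float min_range, float max_range)
{
  pcl::PointXYZI point;
  if (distance < min_range || distance > max_range)
  {
    // Out of range: keep the cell so the cloud stays organized,
    // but mark the point as invalid.
    point.x = point.y = point.z = std::numeric_limits<float>::quiet_NaN();
    point.intensity = 0.0f;
  }
  else
  {
    point.x = x;
    point.y = y;
    point.z = z;
    point.intensity = intensity;
  }
  cloud.points.push_back(point);  // cell position in the grid is preserved
}
```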

@jack-oquin (Member)

This is certainly a desirable feature for devices that can support it, which are probably most of the current models.

I would be very reluctant to introduce such a large change for Indigo, which has been released and stable for over a year.

We can consider releasing it to Jade, which still has no release of this driver. For that, it would need exhaustive testing on nearly all the device models, which will require a lot of effort. I have the ability to test it on a VLP-16 and on the original HDL-64E (which probably cannot support it, but still needs to do something sensible). Someone needs to test with HDL-64E S1 and S2, also probably the new S3.

I am not happy with the re-indentation, which makes it quite difficult to review the changes. We may want to separate the substantive changes into a separate commit, then re-indent if necessary once that is approved.

@spuetz (Contributor, Author) commented Oct 12, 2016

Hi @jack-oquin
I'm thinking about rewriting the point cloud organization feature. I can test it on a VLP-16 and on recorded data from the VLP-32. How can we manage the testing? What is necessary to get it into the next release?
Thank you in advance!

@spuetz (Contributor, Author) commented Jan 26, 2018

I'll resolve the conflicts here today.

@spuetz force-pushed the organized_point_cloud_feature branch from 0ba83bf to 882fcaa on January 26, 2018 16:01
@spuetz (Contributor, Author) commented Jan 26, 2018

I've resolved the conflicts and made a clean commit. I suggest doing some testing first.

@JWhitleyWork (Contributor)

We will. It may take a few days but we will test on VLP-16, HDL-32E, VLP-32C, and HDL-64E2.1. Thanks!

@spuetz force-pushed the organized_point_cloud_feature branch from 882fcaa to 5fd5273 on January 26, 2018 16:22
@dhirajdhule commented Mar 1, 2018

I have tested it with a VLP-16, and it works great. Thanks a lot @spuetz for the efforts.

After using pc.read_points I get 30336 points (skip_nans=False):

height: 1896
width: 16
is_bigendian: False
point_step: 32
row_step: 512
is_dense: True
fields:
  - {name: x,         offset: 0,  datatype: 7, count: 1}
  - {name: y,         offset: 4,  datatype: 7, count: 1}
  - {name: z,         offset: 8,  datatype: 7, count: 1}
  - {name: intensity, offset: 16, datatype: 7, count: 1}
  - {name: ring,      offset: 20, datatype: 4, count: 1}

Let me know if you want me to post any specific results here, happy to help!

@willdzeng

Shouldn't is_dense be false in this case?
Dense means there are no invalid points, but if we have height and width, invalid points should be filled with NaN, and then the cloud is not dense anymore.
Also, since it's organized, the ring information is implied by the indices (assuming it also organizes the rows), so it's not necessary anymore.
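
For reference, a hypothetical helper under the layout reported above, where width equals the number of lasers and each row holds one firing sorted by laser (none of this comes from the PR itself):

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical: with width == number of lasers and rows sorted by laser,
// the ring of the point at flat index i is simply its column in the grid.
uint16_t ringFromIndex(std::size_t i, uint32_t width)
{
  return static_cast<uint16_t>(i % width);
}
```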

@aabyshk commented May 12, 2018

Hi @spuetz

Thanks for taking the effort to introduce this feature. I am trying to use the organized_point_cloud_feature PR with a Velodyne VLP16 and get the following error whenever I try to subscribe to any topic from the nodes when organize_cloud = true:

terminate called after throwing an instance of 'ros::serialization::StreamOverrunException'
  what():  Buffer Overrun
[velodyne_nodelet_manager-2] process has died [pid 19426, exit code -6, cmd /opt/ros/kinetic/lib/nodelet/nodelet manager __name:=velodyne_nodelet_manager __log:=/home/abhishek/.ros/log/37782164-563a-11e8-9954-e4b3181ec1d9/velodyne_nodelet_manager-2.log].
log file: /home/abhishek/.ros/log/37782164-563a-11e8-9954-e4b3181ec1d9/velodyne_nodelet_manager-2*.log
[velodyne_nodelet_manager_cloud-4] process has finished cleanly
log file: /home/abhishek/.ros/log/37782164-563a-11e8-9954-e4b3181ec1d9/velodyne_nodelet_manager_cloud-4*.log
[velodyne_nodelet_manager_laserscan-5] process has finished cleanly
log file: /home/abhishek/.ros/log/37782164-563a-11e8-9954-e4b3181ec1d9/velodyne_nodelet_manager_laserscan-5*.log
[velodyne_nodelet_manager_driver-3] process has finished cleanly
log file: /home/abhishek/.ros/log/37782164-563a-11e8-9954-e4b3181ec1d9/velodyne_nodelet_manager_driver-3*.log

There is no such error when organize_cloud = false. Any insights?

@spuetz (Contributor, Author) commented May 13, 2018

Hey guys, I'll test the driver again next week, hopefully. @aabyshk, which distro are you using? And @willdzeng, you are right: is_dense should be false. I'll fix that. Thanks, guys, for your input and help! :)

@aabyshk commented May 13, 2018

Thanks for looking into it @spuetz. I am using ROS Kinetic on an Ubuntu 16.04 laptop.

@JWhitleyWork (Contributor)

@spuetz and @aabyshk - Any movement on this? It looks like @spuetz's branch is behind master by a good bit. Do you have time to merge and update?

@spuetz (Contributor, Author) commented Nov 15, 2018

Yep, I will. Good that you reminded me about this. I'll put it on the list for next week.

@spuetz (Contributor, Author) commented Dec 10, 2018

There are two possibilities for doing it:

  1. Using the DataContainerBase: we could create a container implementation which handles the sorting and sets out-of-range values to NaN. But then we also need to do the range filtering there, which is okay in my opinion (a rough sketch of this follows below).
  2. Directly hacking it into rawdata.cc like before, which is a bit ugly and not split up in a nice modular fashion, but maybe a tiny bit faster.

I prefer 1), but let me know what you prefer, guys @JWhitleyAStuff.
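
A rough, self-contained sketch of option 1 (the real DataContainerBase interface in velodyne_pointcloud looks different; every class, method, and field name below is made up): the container owns the organized grid, pre-filled with NaN, and does the range filtering itself.

```cpp
// Rough sketch of the idea only, not the real velodyne_pointcloud interface.
#include <limits>
#include <vector>

struct OrganizedContainerSketch
{
  unsigned rings;          // e.g. 16 on a VLP-16
  unsigned columns;        // azimuth steps per revolution
  std::vector<float> xyz;  // rings * columns cells, 3 floats each

  OrganizedContainerSketch(unsigned rings_, unsigned columns_)
    : rings(rings_), columns(columns_),
      xyz(rings_ * columns_ * 3, std::numeric_limits<float>::quiet_NaN())
  {
  }

  // Range filtering happens inside the container, as suggested in option 1.
  void addPoint(unsigned ring, unsigned column,
                float x, float y, float z,
                float distance, float min_range, float max_range)
  {
    if (distance < min_range || distance > max_range)
      return;  // the cell keeps its NaN initialisation
    float* cell = &xyz[(column * rings + ring) * 3];
    cell[0] = x;
    cell[1] = y;
    cell[2] = z;
  }
};
```

Keeping the sorting and NaN handling in a container like this leaves rawdata.cc free of the organization logic, which is the modularity argument for option 1.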

Also, I would like to clean up rawdata.cc in a second PR; there are some useless checks.

@JWhitleyWork (Contributor)

@spuetz - I agree with you. The container-based solution is more readable, more C++-friendly, and will translate more easily to ROS2. The compiler will optimize most of it away, and the container-based solution will likely end up even more efficient. Let me know if you need any help.

@spuetz (Contributor, Author) commented Jan 9, 2019

Hi @JWhitleyAStuff, I built such a container-based version for organized clouds: master...spuetz:feature/organized_pc
I had no time to test it. If you have time to test it, let me know. After some basic tests I can update the PR with that version. But this is not a small change, so you should have a look anyway. Also, I think the two files transform and convert could be combined, as well as the two config files, which would make the package and the code much cleaner. What do you think, @JWhitleyAStuff? But that should maybe be done in a second PR.

@spuetz (Contributor, Author) commented Jan 9, 2019

Btw., we could also remove PCL as a dependency, since I use the sensor_msgs/PointCloud2Iterator. This also saves memory and time, since I directly transform every point as it is read from the network packets, without copying / transforming it into a PCL PointCloud.
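
A minimal sketch of that approach, not the PR code: the message is sized once, the organized layout is set, and each cell is written in place through sensor_msgs::PointCloud2Iterator. The "xyz"-only field layout is chosen for brevity; the driver publishes more fields.

```cpp
// Minimal sketch: building an organized PointCloud2 without any PCL types.
#include <cstddef>
#include <cstdint>
#include <limits>

#include <sensor_msgs/PointCloud2.h>
#include <sensor_msgs/point_cloud2_iterator.h>

sensor_msgs::PointCloud2 makeOrganizedCloud(uint32_t rings, uint32_t columns)
{
  sensor_msgs::PointCloud2 cloud;
  sensor_msgs::PointCloud2Modifier modifier(cloud);
  modifier.setPointCloud2FieldsByString(1, "xyz");
  modifier.resize(static_cast<std::size_t>(rings) * columns);

  // Organized layout: one row per azimuth step, one column per laser.
  cloud.height = columns;
  cloud.width = rings;
  cloud.row_step = cloud.width * cloud.point_step;

  // Pre-fill with NaN; real code would overwrite each cell as the
  // corresponding return is decoded from the network packet.
  sensor_msgs::PointCloud2Iterator<float> iter_x(cloud, "x");
  sensor_msgs::PointCloud2Iterator<float> iter_y(cloud, "y");
  sensor_msgs::PointCloud2Iterator<float> iter_z(cloud, "z");
  const float nan = std::numeric_limits<float>::quiet_NaN();
  for (; iter_x != iter_x.end(); ++iter_x, ++iter_y, ++iter_z)
  {
    *iter_x = nan;
    *iter_y = nan;
    *iter_z = nan;
  }
  return cloud;
}
```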

@JWhitleyWork (Contributor)

@spuetz - I'm going to close this PR. Would you mind creating a new one with that branch so I can get an easy diff? I'll reference this one from that one for tracking.
