For other Lidar sensor #11
Hi @xiesc, thanks for your interest. Regarding your questions:

A. For the KITTI dataset I assume that the remission of the laser sensor is between 0 and 1.0. It might be that all points are simply rendered white due to values > 1.0.

B. You should also adjust the corresponding model parameters, like model_width, model_height, and fov (field of view). Note also that the other parameters are somewhat biased towards the KITTI dataset; therefore I would suggest tweaking the ICP max distance and angle. Also consider that the current version of SuMa assumes deskewed or motion-compensated scans, so you might notice larger errors when turning.

Hope that helps.
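A sketch of what such a tweak might look like in the configuration file. The parameter names `icp-max-distance` and `icp-max-angle` and the values shown here are assumptions for illustration; check the actual names in your version of default.xml:

```xml
<!-- hypothetical ICP tuning parameters; names and values are assumed -->
<param name="icp-max-distance" type="float">2.0</param>
<param name="icp-max-angle" type="float">30.0</param>
```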
Thank you for your reply. I just uploaded some files at https://github.com/xiesc/CarlaData.git.
I forgot to mention that the fov is 30 to -10 degrees, and the number of scan lines is 64. It may be inconvenient that the Lidar is mounted upside down.
I think I found the reason for problem (a): there is a normalization in KITTIReader.cpp:
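The code snippet referenced here did not survive extraction. As an illustration of the idea, a minimal Python sketch of normalizing per-scan remission values into [0, 1] so the visualizer does not render everything white; the function name and the max-based rescaling scheme are assumptions, not the exact code from KITTIReader.cpp:

```python
import numpy as np

def normalize_remission(remission):
    """Scale remission values into [0, 1].

    Illustrative only: rescales by the per-scan maximum when values
    exceed the expected [0, 1] range, then clamps. SuMa's actual
    KITTIReader.cpp may use a different scheme.
    """
    remission = np.asarray(remission, dtype=np.float32)
    max_val = remission.max()
    if max_val > 1.0:  # only rescale if values exceed the expected range
        remission = remission / max_val
    return np.clip(remission, 0.0, 1.0)
```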
@xiesc Did you also try to use the corresponding parameters for the rendered model image? Something like this:
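The configuration snippet in this comment did not survive extraction. A sketch of what matching model parameters might look like, mirroring the data_* values given in the issue; the exact parameter names and values are assumptions:

```xml
<!-- hypothetical model parameters mirroring the data_* values above -->
<param name="model_width" type="integer">450</param>
<param name="model_height" type="integer">64</param>
<param name="model_fov_up" type="float">10.0373</param>
<param name="model_fov_down" type="float">-30.078</param>
```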
Hope that fixes your issues.
@jbehley |
Hi @xiesc, good that everything now seems to work as expected. I also got similar results when considering simulated data.

The behavior when turning is a limitation of the approach, which assumes deskewed or motion-compensated scans. A possible solution would be (similar to Moosmann et al. [1]) to first determine the odometry and then deskew the scan for a final alignment. A far more "elegant" solution would be to integrate this into the non-linear least squares by considering a linear interpolation of the pose between the start and the end of the turn, i.e., each column of the "data frame" gets transformed according to something like (1-eta(c))*pose[t-1] + eta(c)*pose[t], where eta(c) = column/width, to account for the turning rate of the sensor. A similar approach is used by LOAM of Zhang et al. [2]. But there are also continuous-time SLAM approaches (like the work of Droeschel et al. [3]) which handle this even more elegantly and robustly.

Regarding the results on KITTI and the drift in z-direction: my guess is that the apparent drift is somehow introduced by the deskewing of the scans, which are deskewed using the IMU of the car, which has some drift in the z-direction. This drift is visible when one plots the ground-truth trajectories of the sequences with loop closures (like sequence 00, and even more severely at the beginning of sequence 08). However, there might also be other reasons why this happens in the KITTI odometry dataset.

References
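The per-column interpolation described above can be sketched as follows. This is a minimal illustration assuming poses given as 4x4 homogeneous matrices; only the translational part is blended linearly here, since a full implementation would also interpolate the rotation on SO(3) (e.g. via SLERP):

```python
import numpy as np

def deskew_column_weights(width):
    """eta(c) = column / width for each column of the data frame."""
    return np.arange(width, dtype=np.float64) / width

def interpolate_translation(pose_prev, pose_cur, eta):
    """(1 - eta) * t[t-1] + eta * t[t] for the translational part of
    two 4x4 poses; rotations would additionally need SLERP on SO(3)."""
    t_prev, t_cur = pose_prev[:3, 3], pose_cur[:3, 3]
    return (1.0 - eta) * t_prev + eta * t_cur
```

Each column c of the scan would then be corrected with the pose obtained for eta(c), which accounts for the sensor sweeping during one revolution.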
Hi @jbehley, you can see that the drift in the z-axis can be efficiently removed, especially in straight sequences like 01, 04, or 09. By the way, I compared the result without loop closure by export

[1] Glennie, Craig, and Derek D. Lichti. "Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning." Remote Sensing 2.6 (2010): 1610-1624.
Thanks for the hint on this correction of the z-drift. I will give it a try. 👍

To get the poses after loop closure (at the end of the sequence), you have to call […]. For the "frame-to-model without loop closure" result, you have to set […].

The different configuration files are also available on the project page: http://jbehley.github.io/projects/surfel_mapping/index.html

Btw, is the issue now resolved?
Thanks a lot for your help! |
I want to test SuMa with another type of LiDAR. I formatted the point cloud as .bin files similar to the KITTI format. After that, I changed the first several lines of default.xml:
```xml
<param name="data_width" type="integer">450</param>
<param name="data_height" type="integer">64</param>
<param name="data_fov_up" type="float">10.0373</param>
<param name="data_fov_down" type="float">-30.078</param>
```
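These parameters define the spherical projection of each scan into a range-image-like "data frame". A minimal sketch of how a 3D point would be mapped to image coordinates under these values; this follows the usual vertex-map construction for range images and is an illustration, not SuMa's exact code:

```python
import math

DATA_WIDTH, DATA_HEIGHT = 450, 64
FOV_UP, FOV_DOWN = 10.0373, -30.078  # degrees, from default.xml above

def project(x, y, z):
    """Map a 3D point to (u, v) pixel coordinates of the data frame."""
    depth = math.sqrt(x * x + y * y + z * z)
    yaw = math.atan2(y, x)            # horizontal angle in [-pi, pi]
    pitch = math.asin(z / depth)      # vertical angle
    fov_up = math.radians(FOV_UP)
    fov = math.radians(FOV_UP - FOV_DOWN)  # total vertical field of view
    u = 0.5 * (1.0 - yaw / math.pi) * DATA_WIDTH   # column from yaw
    v = (fov_up - pitch) / fov * DATA_HEIGHT       # row from pitch
    return u, v
```

If points land outside [0, data_height) in v, the fov_up/fov_down values do not match the sensor, which is one way such configuration mistakes show up.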
The algorithm seems to work fine, but with some small problems:
a. There are no points shown in the visualizer; I can only see the odometry trajectory.
b. The resulting error seems severe compared with the KITTI dataset results.
I wonder if there is something I missed?