
For other Lidar sensor #11

Closed
xiesc opened this issue Mar 3, 2019 · 10 comments

xiesc commented Mar 3, 2019

I want to test SuMa with another type of LiDAR. I formatted the point cloud as .bin files similar to the KITTI format. After that, I changed the first few lines of default.xml:

  <param name="data_width" type="integer">450</param>
  <param name="data_height" type="integer">64</param>
  <param name="data_fov_up" type="float">10.0373</param>
  <param name="data_fov_down" type="float">-30.078</param>

The algorithm seems to work fine, but there are some small problems:
a. No points are shown in the visualizer; I can only see the odometry trajectory.
b. The error seems large compared with the results on the KITTI dataset.

I wonder if there is something I missed?


jbehley commented Mar 3, 2019

Hi @xiesc,

Thanks for your interest. Regarding your questions:

A. For the KITTI dataset I assume that the remission of the laser sensor is between 0 and 1.0. It might be that all points are simply rendered white due to values > 1.0.

B. You should also adjust the corresponding model parameters, like model_width, model_height, and the fov (field of view) parameters.

Note also that the other parameters are somewhat biased towards the KITTI dataset. Therefore I would suggest tweaking the ICP max distance and angle.

Also consider that the current version of SuMa assumes deskewed or motion compensated scans. Thus you might notice larger errors when turning.

Hope that helps.


xiesc commented Mar 3, 2019

Thank you for your reply.
Actually, the .bin files are generated from CARLA. The intensity of all points is set to 0.01, but still no points are shown in the visualizer.
The purpose of using simulation data is to figure out what causes the severe bias in the z direction of the trajectory.

I just uploaded some files at https://github.com/xiesc/CarlaData.git.
I would really appreciate it if you could have a look at these data.


xiesc commented Mar 3, 2019

I forgot to say that the fov is 30 to -10 degrees, and the number of scan lines is 64. It may be inconvenient that the LiDAR is mounted upside down.


xiesc commented Mar 4, 2019

I think I found the reason for problem (a): there is a normalization in KITTIReader.cpp:

  for (uint32_t i = 0; i < num_points; ++i) {
    remissions[i] /= max_remission;
  }

So if I give all points the same intensity, they will all become 1.0f.
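One way to make the reader robust to such inputs would be to divide only when values actually exceed the assumed [0, 1] range. This is a minimal sketch, not the actual SuMa code, and `normalizeRemissions` is a hypothetical helper name:

```cpp
#include <algorithm>
#include <vector>

// Sketch: normalize remissions only when they fall outside the assumed
// [0, 1] range, so constant low intensities (e.g. all 0.01) are not
// blown up to 1.0 by a blind division through the maximum.
inline void normalizeRemissions(std::vector<float>& remissions) {
  if (remissions.empty()) return;
  float max_remission =
      *std::max_element(remissions.begin(), remissions.end());
  if (max_remission > 1.0f) {  // data outside the assumed range
    for (auto& r : remissions) r /= max_remission;
  }  // otherwise leave the values untouched
}
```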


jbehley commented Mar 5, 2019

@xiesc Did you also try to use the corresponding parameters for the rendered model image? Something like:

 <param name="model_width" type="integer">450</param>
 <param name="model_height" type="integer">64</param>
 <param name="model_fov_up" type="float">10.0</param>
 <param name="model_fov_down" type="float">-30</param>

I hope that fixes your issues.


xiesc commented Mar 6, 2019

@jbehley
Yes, I use the same parameters for the rendered model image.
After I increased the number of ICP iterations, the results seem right. For the simulation data, SuMa has no obvious drift at all, except that in some turns the estimated yaw angle is smaller than the actual value. For the pitch angle there is no obvious bias, so the trajectory no longer suffers from drift along the z-axis.


jbehley commented Mar 6, 2019

Hi @xiesc,

Good to hear that everything now seems to work as expected. I got similar results when considering simulated data.

The behavior when turning is a limitation of the approach, which assumes deskewed or motion-compensated scans. A possible solution would be (similar to Moosmann et al. [1]) to first determine the odometry and then deskew the scan for a final alignment. A far more "elegant" solution would be to integrate this into the non-linear least squares by considering a linear interpolation between the poses at the start and at the end of the sweep, i.e., each column of the "data frame" gets transformed according to something like (1 - eta(c)) * pose[t-1] + eta(c) * pose[t], where eta(c) = column/width, to account for the turning rate of the sensor. A similar approach is taken by LOAM of Zhang et al. [2]. There are also continuous-time SLAM approaches (like the work of Droeschel et al. [3]) which handle this even more elegantly and robustly.

Regarding the results on KITTI and the drift in z-direction: my guess is that the apparent drift is already present in the deskewing of the scans, which are deskewed using the IMU of the car, which itself has some drift in the z-direction. This drift is visible when one plots the ground truth trajectories of the sequences with loop closures (like sequence 00, and even more severely at the beginning of sequence 08). However, there might also be other reasons why this happens in the KITTI odometry dataset.

References
[1] Moosmann & Stiller. Velodyne SLAM, IV, 2011. See http://www.mrt.kit.edu/z/publ/download/Moosmann_IV11.pdf
[2] Zhang & Singh. LOAM: Lidar Odometry and Mapping in Real-time. RSS, 2014. see http://www.roboticsproceedings.org/rss10/p07.pdf
[3] Droeschel, Behnke. Efficient Continuous-time SLAM for 3D Lidar-based Online Mapping. ICRA, 2018. see http://ais.uni-bonn.de/papers/ICRA_2018_Droeschel.pdf


xiesc commented Mar 8, 2019

Hi @jbehley,
Thanks a lot. Regarding the drift along the z-axis, I found that there is a constant systematic error in the original point clouds. I think it is mainly due to the imperfectly calibrated LiDAR and the variance of the laser incidence angle. Details can be found in Glennie's works [1-3] and in a recent study [4].
I found a simple way to mitigate this. In KITTIReader.cpp, after line 155, i.e., after

  points[i].x() = values[4 * i];
  points[i].y() = values[4 * i + 1];
  points[i].z() = values[4 * i + 2];

you can insert:

  double d_2 = points[i].x() * points[i].x() + points[i].y() * points[i].y();
  double scale = 1 - d_2 * 3.25 * 1e-5 / 2;
  points[i].z() = scale * points[i].z();

With this, the drift along the z-axis is effectively removed, especially on straight sequences like 01, 04, or 09.

By the way, I computed the result without loop closure by exporting currentPose_ in SurfelMapping::updateMap. Does this correspond to the frame-to-model result?

[1] Glennie, Craig, and Derek D. Lichti. "Static calibration and analysis of the Velodyne HDL-64E S2 for high accuracy mobile scanning." Remote Sensing 2.6 (2010): 1610-1624.
[2] Glennie, Craig, and Derek D. Lichti. "Temporal stability of the Velodyne HDL-64E S2 scanner for high accuracy scanning applications." Remote Sensing 3.3 (2011): 539-553.
[3] Glennie, Craig. "Calibration and kinematic analysis of the Velodyne HDL-64E S2 lidar sensor." Photogrammetric Engineering & Remote Sensing 78.4 (2012): 339-347.
[4] Tang, Tim, et al. "Learning a bias correction for Lidar-only motion estimation." 2018 15th Conference on Computer and Robot Vision (CRV). IEEE, 2018.


jbehley commented Mar 10, 2019

Thanks for the hint on this correction of the z-drift. I will give it a try. 👍

To get the poses after loop closure (at the end of the sequence), you have to call getOptimizedPoses() of the class SurfelMapping. Getting the intermediate poses in between can lead to inconsistencies, since these poses might come before a loop closure.

For the "frame-to-model without loop closure" result, you have to set close-loops to false in the configuration, i.e., <param name="close-loops" type="boolean">false</param>

The different configuration files are also available on the project page: http://jbehley.github.io/projects/surfel_mapping/index.html

Btw, is the issue now resolved?


xiesc commented Mar 10, 2019

Thanks a lot for your help!
The problem has been solved.
