
there are some questions using VLP-16 Lidar sensor #12

Closed
Lyusijie opened this issue Mar 27, 2019 · 13 comments


@Lyusijie

Hello.
I am trying to run a new dataset collected with a VLP-16, which has a vertical field of view of +15° to -15° and a max depth of 100 m, so I modified "default.xml":

<param name="data_width" type="integer">900</param>
<param name="data_height" type="integer">64</param>
<param name="data_fov_up" type="float">15.0</param>
<param name="data_fov_down" type="float">-15.0</param>
<param name="max_depth" type="float">100.0</param>
<param name="model_width" type="integer">900</param>
<param name="model_height" type="integer">64</param>
<param name="model_fov_up" type="float">15.0</param>
<param name="model_fov_down" type="float">-15.0</param>

but there are some problems:

  1. The trajectory seems wrong: the car spins around the Z-axis.
  2. There are only a few points on the (2D) normal map.

I also have some questions:

  3. What is the meaning of data_height & data_width? From the code, I think they are the height and width of the normal map. How should I set them for a different LiDAR?
  4. KITTI uses an HDL-64E and my LiDAR is a VLP-16; does the difference in the number of beams matter a lot?

Hope for your reply, thank you.
@jbehley
Owner

jbehley commented Mar 27, 2019

data_height and data_width are both essential for the approach, since they determine how the point cloud gets projected (they determine how many "bins" exist in the x and y directions of the generated vertex map). In case of the Velodyne VLP-16, one should definitely use data_height = 16, and in my experience a data_width of around 360 works better than having too large an "aspect ratio".

Usually, using the same values for model_width and model_height works fine.

That the normal map is nearly empty is a side effect of the too-large data_height, since invalid values in the vertex map are not used to compute normals. Once you reduce data_height, you will notice that the normal map becomes much denser.
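Putting this advice together, a VLP-16 configuration could look like the following sketch (values taken from the suggestions above; the actual example config later added to the repository may differ):

```xml
<!-- hypothetical VLP-16 settings based on the advice above -->
<param name="data_width" type="integer">360</param>
<param name="data_height" type="integer">16</param>
<param name="data_fov_up" type="float">15.0</param>
<param name="data_fov_down" type="float">-15.0</param>
<param name="max_depth" type="float">100.0</param>
<param name="model_width" type="integer">360</param>
<param name="model_height" type="integer">16</param>
<param name="model_fov_up" type="float">15.0</param>
<param name="model_fov_down" type="float">-15.0</param>
```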

@Lyusijie
Author

Thanks for your reply. I have modified data_height & data_width, and the normal map looks denser.
However, the trajectory is still not right. Following your replies in other issues, I tried increasing the number of ICP iterations and tweaking the ICP max distance and angle. Unfortunately, the trajectory changed but is still wrong (it is a spiral) :(
I'm a beginner in SLAM, so maybe the parameters I modified are not proper. Here are my questions:
1. What is the meaning of icp_max_distance & icp_max_angle? From the code, I don't see where they are used in the ICP algorithm...
2. Why is the trajectory totally wrong after changing the dataset? Is it because of the number of LiDAR beams? If so, how do I tweak the parameters?
Hope for your reply, thank you.

@jbehley
Owner

jbehley commented Apr 1, 2019

Hi again,

sorry for the delay. I just uploaded an example Velodyne VLP-16 config (config/vlp16_example.xml), which I used to generate the following trajectory from a dataset of Velodyne VLP-16 point clouds recorded with a Husky at our lab:

[image: resulting trajectory]

Regarding your questions:

1. The icp_max_distance and icp_max_angle parameters are passed to the shaders, where the actual computation happens. They are thresholds for correspondences, i.e., they determine whether point-to-surfel correspondences are treated as outliers.

  2. The "spiral" is usually happening if there is no map build. You should see something like the lower part of "debug images" in the screenshot from above. If this is just black, then there is definitely some issue with the parameters.

@Lyusijie
Author

Lyusijie commented Apr 3, 2019

Thanks for your reply. I found that my .bin files actually have the same format as .pcd files. It's my fault, sorry.
So I modified bool KITTIReader::read(uint32_t scan_idx, Laserscan& scan) in KITTIReader.cpp to use loadPCDFile. That problem is solved and the trajectory looks fine, but compared with the KITTI dataset the error at loop closure seems serious.
Because of accumulated pose errors, there is no overlap between the starting and ending points, and there is a difference in altitude.
I have read the paper and the code, and have some questions about the loop closure:

  1. According to the paper: first, by minimizing ||t_wc_t - t_wc_j|| we get the possible loop closure candidate j*. Then, by comparing E_map (the residual between frame t and the map) and E_odom (the residual between t and j*), we decide whether it is a valid candidate.
  2. According to the code:
bool loop_closure = (rel_error_all < loopResidualThres_) || (residual - result_new_.residual) < 0.1;

float error = objective_->jacobianProducts(JtJ, Jtr);
float residual = error / float(objective_->inlier() + objective_->outlier());

double residual = objective_->jacobianProducts(JtJ_new, Jtr);
result_new_.error = residual;  // gn_->residual();
result_new_.inlier = objective_->inlier();
result_new_.outlier = objective_->outlier();
result_new_.residual = result_new_.error / (result_new_.inlier + result_new_.outlier);

The residual & result_new_.residual seem to be the same apart from JtJ_new. Why not get j* via getClosestIndex() before evaluating bool loop_closure? I mean, I think residual & result_new_.residual have nothing to do with j*. Is there any mistake in my understanding? Hope for your reply. Thank you.

@jbehley
Owner

jbehley commented Apr 4, 2019

Hi @Lyusijie,

the employed loop closure procedure is currently quite heuristic and the parameters are definitely tailored to the KITTI dataset. You have to adjust the thresholds; for this you can use the provided visualizer, which shows you potential loop closures and colors the scans green if they get considered. For stability, we also use an approach where we want to have multiple loop closure candidates before we add them to the pose graph.

Here you can see the considered candidates (red and blue):
[image: loop closure candidates in red and blue]

And if a candidate is accepted the candidate is shown in green:
[image: accepted candidate in green]

You may also notice that there is a green bar in the "graph plot", which indicates that a loop closure candidate was added. After a threshold of 5 valid loop closures, these are added to the pose graph.

You have to adjust the "loop_closure" parameters to get something sensible in your application. Furthermore, it might be necessary to reduce loop-min-trajectory-distance in small-scale environments. Overall, you have to adjust these parameters:

  <param name="close-loops" type="boolean">true</param>
  <param name="loop-residual-threshold" type="float">1.15</param>
  <param name="loop-valid-threshold" type="float">0.95</param>
  <param name="loop-outlier-threshold" type="float">1.1</param>
  <param name="loop-search-distance" type="float">50</param>
  <param name="loop-min-verifications" type="integer">5</param>
  <param name="loop-min-trajectory-distance" type="float">50</param>

Regarding question 2: note also that objective_->jacobianProducts(JtJ, Jtr) returns a residual and that the poses change in between.

I have to admit that this part of the code is rather hacky and quite heuristic. That is why I also listed better loop closure detection as future work in the paper.

Hope these comments help.

@Lyusijie
Author

Lyusijie commented Apr 5, 2019

Hi, thanks for your reply.
I forgot to mention that my path is much longer than KITTI's, about 5 km around the campus, and the trajectory drifts a lot.
[image: my trajectory]
1. This is my trajectory; in fact, the start and end should coincide, but the trajectory drifts a lot. When I set the ICP max iterations to 99, the trajectory stays the same.
2. When the run reaches the positions at the red points, there are some errors:

t_err: 3.0615, r_err: 0.0757671
Lost track: Pose increment change too large. Falling back to frame-to-frame tracking.
t_err: 1.46536, r_err: 0.039097
Lost track: Pose increment change too large. Falling back to frame-to-frame tracking.
t_err: 0.478697, r_err: 0.0113782
Lost track: Pose increment change too large. Falling back to frame-to-frame tracking.

I don't quite understand the meaning of "Pose increment change too large"; in fact, the car doesn't move fast at these positions.
3. We consider potential loop closures based on the pose distance; because the trajectory drifts a lot, failing to find loop closures is common. Maybe I should focus on ICP before the loop closure, right?
4. After getting a loop closure, there is pose optimization. But when I run KITTI sequence 06, I don't notice an obvious trajectory correction. I mean, when a loop closure candidate is added to the graph, the current pose doesn't get closer to the loop closure candidate's pose. Why?
5. What is the calibration file? Is it necessary?
I am still a novice, so the questions may be a little stupid, please forgive me.
Hope for your reply, thank you.

@jbehley
Owner

jbehley commented Apr 6, 2019

  1. Try increasing the loop-search-distance. You can also try decreasing the icp-max-distance; this gave me better results with slowly moving sensors.
  2. This message occurs when tracking is lost; I then switch to frame-to-frame odometry. It might be caused by too few distinctive "features" in the environment.
  3. The large displacement might be an artifact of the assumption that the scans are "deskewed". Thus my approach might under- or overestimate the relative pose when the robot turns.
  4. In sequence 06 this effect is not that severe. Sequence 00 has more noticeable loop closures.
  5. The calibration file is needed to save the LiDAR poses in KITTI's camera format, and to read the ground truth poses from file. (KITTI expects everything in the pose of a camera, which uses a different coordinate system.)

@Lyusijie
Author

Thanks for your patient reply.
Regarding your answer to my question 4, I noticed that sequence 00 has an obvious trajectory correction at the end. Is the trajectory correction caused by the following code?

    std::cout << "[info] Trigger optimization at " << timestamp_ << std::endl;
    optimizedPosegraph_ = posegraph_->clone();
    optimizeFuture_ = std::async(std::launch::async, std::bind(&SurfelMapping::optimizeAsync, this));

The sixth loop closure triggers the optimization, but in the graph I don't notice an obvious trajectory correction. I am not sure which code causes the trajectory correction.

I have this question because I want to impose loop closure constraints and optimization manually. For example, in sequence 06 I want to impose a loop closure between frame 150 and frame 50, so I wrote the following code after // if (verifiedLoopClosures_.size() > 6 || timeWithoutLoopClosure_ > 3) (line 651 of SurfelMapping.cpp):

if (timestamp_ == 150)
{
  std::cout << "timestamp 150" << std::endl;
  LoopClosureCandidate candidate;
  int32_t from = timestamp_;
  int32_t to = getClosestIndex(currentPose_old_);
  Eigen::Matrix4d nearest_pose = posegraph_->pose(to);  // pose of j
  candidate.rel_pose = currentPose_old_.inverse() * nearest_pose;
  const Eigen::Matrix4d& diff = candidate.rel_pose;
  posegraph_->addEdge(from, to, diff, info_);  // add constraint from -> to (purple line)
  std::cout << "Adding constraints " << from << " -> " << to << std::endl;
  optimizedPosegraph_ = posegraph_->clone();  // optimize
  optimizeFuture_ = std::async(std::launch::async, std::bind(&SurfelMapping::optimizeAsync, this));
  std::cout << "[info] Trigger optimization at " << timestamp_ << std::endl;
}

I notice there is a purple line between frames 150 and 50 as a constraint, but the pose of frame 150 doesn't get closer to frame 50. Did I make a mistake somewhere?
Hope for your reply, thank you.

@jbehley
Owner

jbehley commented Apr 15, 2019

  1. Optimization is triggered as soon as enough verified loop closures have been added, or after at least 3 time steps since the last pose graph optimization (timeWithoutLoopClosure_ > 3).
  2. Note that loop closure edges should carry the desired transformation between the nodes. Therefore, ICP is needed to determine the "correct" transformation between the potential loop closure nodes.

@Lyusijie
Author

Thanks for your patient reply.
Sorry, I still don't fully understand.
1. As you said, the code

optimizedPosegraph_ = posegraph_->clone();
optimizeFuture_ = std::async(std::launch::async, std::bind(&SurfelMapping::optimizeAsync, this));

is doing the pose graph optimization, right?
2. I know the frame ids (from & to) and the transformation (diff) between the nodes; if I want to optimize the pose of frame 150 using such code, what other parameters do I need?
3. When I run the KITTI dataset, I notice some [info] Trigger optimization messages, but even after optimization is triggered, the poses don't change. Why? For example, there are several loop closures in sequence 00, but only the last one optimizes the trajectory; the previous loop closures don't. Why?
Hope for your reply, thank you.

@jbehley
Owner

jbehley commented Apr 15, 2019

The following code starts the optimization in a separate CPU thread:
optimizeFuture_ = std::async(std::launch::async, std::bind(&SurfelMapping::optimizeAsync, this));

The optimized poses are only updated in the next processScan() and its call to integrateLoopClosures(). To see an effect directly, you could simply call optimizeAsync() and then integrateLoopClosures() yourself. But this might affect some other parts... so be aware that it might break something.

Regarding your second question: from, to, diff, and the info_ for adding an edge to the posegraph should be enough.

@Lyusijie
Author

Lyusijie commented Apr 17, 2019

Thanks for your reply.
I added this code in void SurfelMapping::checkLoopClosure() to see the effect directly:

if (timestamp_ > 150 && timestamp_ < 160)
{
  // std::cout << "timestamp 150" << std::endl;
  LoopClosureCandidate candidate1;
  int32_t from1 = timestamp_;                       // get from
  int32_t to1 = getClosestIndex(currentPose_old_);  // get to
  Eigen::Matrix4d nearest_pose1 = posegraph_->pose(to1);  // pose of j
  candidate1.rel_pose = currentPose_old_.inverse() * nearest_pose1;
  const Eigen::Matrix4d& diff1 = candidate1.rel_pose;  // get diff
  posegraph_->addEdge(from1, to1, diff1, info_);  // add the constraint line
  std::cout << "Adding constraints " << from1 << " -> " << to1 << std::endl;
  optimizedPosegraph_ = posegraph_->clone();
  optimizeFuture_ = std::async(std::launch::async, std::bind(&SurfelMapping::optimizeAsync, this));  // trigger optimization
}

I ran KITTI sequence 00. When 150 < timestamp_ < 160, it adds constraint lines and creates optimizeFuture_, and in the next processScan() it calls integrateLoopClosures() to update the optimized poses. Logically, I think it should change the poses during frames 150-160, but in fact it just draws the constraint lines without changing the poses:
[screenshot 2019-04-17 14-41-31]
I also noticed that in bool Posegraph::optimize(uint32_t num_iters) the initial error and the final error are the same, at a very small value around 1e-22. Maybe the value of result_ is wrong; where does result_ come from?
I don't know where I made a mistake and have been puzzled for some days.
If possible, could you download my SurfelMapping.cpp at https://github.com/Lyusijie/my-zone/blob/master/SurfelMapping.cpp and have a look? I modified the code between lines 656-673.
Hope for your reply, thank you.

@Lyusijie
Author

Hello, that problem has been solved, but there is a new one.
I ran my own dataset and set frames 3700-3727 to add 28 constraints with 4 triggered optimizations:
before optimization:
[screenshot 2019-04-18 17-38-12]
after optimization:
[screenshot 2019-04-18 17-38-33]
The pose was optimized at the first trigger but still has error, so I added one more trigger at frames 3751-3756:
before optimization:
[screenshot 2019-04-18 17-38-40]
after optimization:
[screenshot 2019-04-18 17-38-49]
The trajectory is not what I expected, so I have 2 questions:

  1. I increased the optimization iterations to 500 (return optimizedPosegraph_->optimize(500);), but the error is still big at the first optimization. What else can I do to make the optimization more accurate?
  2. Comparing the two bundles of purple lines, it seems that poses which already had constraints added before won't be optimized again. Why?

Hope for your reply, thank you.
