
The distortion corrector package doesn't compensate for all motion distortion #6657

Closed
3 of 5 tasks
kaancolak opened this issue Mar 20, 2024 · 12 comments
Labels: component:sensing (Data acquisition from sensors, drivers, preprocessing; auto-assigned), type:improvement (Proposed enhancement)

@kaancolak
Contributor

Checklist

  • I've read the contribution guidelines.
  • I've searched other issues and no duplicate issues were found.
  • I've agreed with the maintainers that I can plan this task.

Description

The current implementation of the distortion corrector package only takes into account the linear x speed from the twist messages and the yaw rate from the IMU. As a result, it does not produce a motion-compensated point cloud in several cases, such as driving over a speed bump or the ego vehicle's roll angle changing while turning.

Purpose

Compensate for all motion distortion in the lidar point cloud.

Possible approaches

Solution 1:

In the current sensor setup we have high-frequency IMU data. This lets us compute the orientation change between the point cloud timestamp (the timestamp of the first point in the scan) and the timestamp of each individual point. Additionally, using the localization stack (EKF), we can determine the vehicle's displacement over the same interval. We can then apply a reverse transformation from the pseudo sensor base frame (IMU orientation plus the displacement from the EKF) back to the sensor base frame.
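As a rough illustration of this idea (a sketch, not the package's implementation), the snippet below assumes Eigen and two hypothetical helpers, `imu_orientation_at()` and `ekf_position_at()`, which would interpolate the IMU orientation and the EKF-estimated position at a given stamp; neither helper exists in the current package.

```cpp
#include <Eigen/Geometry>

// Hypothetical helpers (not part of the current package):
Eigen::Quaterniond imu_orientation_at(double stamp);  // interpolated IMU orientation
Eigen::Vector3d ekf_position_at(double stamp);        // EKF-estimated position

// Map a point measured at t_point back into the sensor pose at the scan start.
Eigen::Vector3d undistort_point(
  const Eigen::Vector3d & p_sensor,  // distorted point, sensor frame at t_point
  double t_scan_start,               // timestamp of the cloud (first point)
  double t_point)                    // timestamp of this point
{
  const Eigen::Quaterniond q0 = imu_orientation_at(t_scan_start);
  const Eigen::Quaterniond q1 = imu_orientation_at(t_point);

  // Relative motion of the sensor between the two stamps, expressed in the
  // frame at scan start (pseudo sensor base frame -> sensor base frame).
  const Eigen::Quaterniond dq = q0.inverse() * q1;
  const Eigen::Vector3d dt =
    q0.inverse() * (ekf_position_at(t_point) - ekf_position_at(t_scan_start));

  return dq * p_sensor + dt;
}
```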

Solution 2:

Improve the current algorithm by also adding the pitch rate.
Limitation: we only have the linear x velocity from the twist message.
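A rough sketch of what adding the pitch rate could look like, assuming Eigen; the function and variable names are illustrative and not the package's actual API.

```cpp
#include <Eigen/Geometry>

// Correct one point using linear x from the twist plus yaw and pitch rates
// from the IMU (small per-point time offsets, first-order translation).
Eigen::Vector3d correct_point_with_pitch(
  const Eigen::Vector3d & p,  // distorted point, base_link frame
  double v_x,                 // linear velocity x from the twist [m/s]
  double yaw_rate,            // from the IMU [rad/s]
  double pitch_rate,          // from the IMU [rad/s] (the proposed addition)
  double dt)                  // time offset of the point within the scan [s]
{
  const Eigen::AngleAxisd yaw(yaw_rate * dt, Eigen::Vector3d::UnitZ());
  const Eigen::AngleAxisd pitch(pitch_rate * dt, Eigen::Vector3d::UnitY());
  const Eigen::Vector3d translation(v_x * dt, 0.0, 0.0);
  return yaw * pitch * p + translation;
}
```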

Definition of done

  • A possible solution was decided
  • Distortion corrector package refactored
@kaancolak kaancolak self-assigned this Mar 20, 2024
@kaancolak
Contributor Author

Please feel free to share your ideas

FYI: @drwnz @miursh @xmfcx

@knzo25
Contributor

knzo25 commented Mar 21, 2024

@vividf was working on things related to this

@kaancolak
Contributor Author

Hi @vividf , what's your current plan?

@vividf
Contributor

vividf commented Mar 25, 2024

Hi, @kaancolak,

What I did before was to use the information from the twist (linear xyz, though as you said we only have linear x from the twist) and the IMU (angular xyz) to compensate the point cloud in the sensor_frame (lidar_frame), not in base_link. The steps were as follows (a condensed sketch follows the list):

  1. Apply the adjoint map to transform the twist from base_link to the sensor frame.
  2. Apply a rotation matrix to transform the angular velocity from the IMU frame to the sensor frame.
  3. Replace the twist's angular velocity with the IMU angular velocity.
  4. Apply the exponential map with the time offset to estimate the motion over that period.
  5. Obtain the undistorted point by multiplying the resulting transformation matrix with the distorted point.
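A condensed sketch of these steps, using Eigen only; the extrinsic and variable names here are assumptions, and the actual code in the branch may differ in detail.

```cpp
#include <Eigen/Geometry>

// Undistort one point in the sensor frame from a base_link twist and an IMU
// angular velocity (steps 1-5 above, with a first-order translation).
Eigen::Vector3d undistort_in_sensor_frame(
  const Eigen::Vector3d & p_distorted,      // distorted point, sensor frame
  const Eigen::Isometry3d & T_sensor_base,  // base_link -> sensor extrinsic
  const Eigen::Vector3d & v_base,           // twist linear velocity (only x populated)
  const Eigen::Vector3d & w_imu,            // IMU angular velocity, IMU frame
  const Eigen::Matrix3d & R_sensor_imu,     // IMU -> sensor rotation
  double time_offset)                       // point stamp minus scan stamp [s]
{
  const Eigen::Matrix3d R = T_sensor_base.rotation();
  const Eigen::Vector3d t = T_sensor_base.translation();

  // Steps 1-3: adjoint map for the linear part, rotate the IMU rate into the
  // sensor frame, and use it as the twist's angular part.
  const Eigen::Vector3d w_sensor = R_sensor_imu * w_imu;
  const Eigen::Vector3d v_sensor = R * v_base + t.cross(w_sensor);

  // Step 4: exponential map over the time offset (rotation exact via
  // axis-angle, translation to first order).
  Eigen::Isometry3d motion = Eigen::Isometry3d::Identity();
  if (w_sensor.norm() > 1e-9) {
    motion.linear() =
      Eigen::AngleAxisd(w_sensor.norm() * time_offset, w_sensor.normalized())
        .toRotationMatrix();
  }
  motion.translation() = v_sensor * time_offset;

  // Step 5: map the distorted point back to the sensor pose at the scan stamp.
  return motion * p_distorted;
}
```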

The reason we did not adopt this algorithm is that it almost doubles the processing time.

@idorobotics idorobotics added the type:improvement Proposed enhancement label Apr 4, 2024
@kaancolak
Contributor Author

Thank you for the clarification @vividf. I apologize for the delayed response; I haven't had time to work on this issue yet due to my other tasks.

I understand your approach. The current implementation of the lidar distortion corrector has difficulty handling the complex situations mentioned in the issue (it uses only the yaw rate). Could we add your solution as an option in the distortion corrector package?

@vividf
Contributor

vividf commented Apr 30, 2024

@kaancolak
Sure, I will create a new branch and add the implementation for you guys to test the performance.

@kaancolak kaancolak assigned vividf and unassigned kaancolak Apr 30, 2024
@kaancolak
Contributor Author

Thank you @vividf, I have re-assigned this issue to you (cc @xmfcx). Before creating this issue, we talked with Fatih and planned to implement a similar approach to yours (or use a different displacement source). However, you've already put in some work on it, so let's test the outcome. If it performs satisfactorily, we can try optimizing certain aspects to address the processing-time concern.

@vividf
Contributor

vividf commented May 1, 2024

@kaancolak
Please use this branch (https://github.com/autowarefoundation/autoware.universe/tree/feat/3d_distortion_corrector) to test whether it works for your case.

This branch is a bit different from what I implemented before (it undistorts in base_link instead of in the sensor frame), so it should be faster. I hope this helps!

@meliketanrikulu meliketanrikulu added the component:sensing Data acquisition from sensors, drivers, preprocessing. (auto-assigned) label May 9, 2024
@meliketanrikulu
Contributor

meliketanrikulu commented May 14, 2024

Hello @vividf. Thanks for your work; I tested it.
First, I compared the distortion corrector's input and output point clouds to understand the difference your branch introduces. I did this by looking at the moments when the vehicle passes over speed bumps.
Without the 3D distortion corrector branch:
[Image: before_fully_distortion_corrected]
Blue point cloud: input of the distortion corrector
White point cloud: output of the distortion corrector
Here we can see that the output of the distortion corrector does not change in the z direction.

With the 3D distortion corrector branch:

[Image: after]
Blue point cloud: input of the distortion corrector
White point cloud: output of the distortion corrector

Here we can see that the output of the distortion corrector does change in the z direction.
After seeing this change, I tested it with ground segmentation to see if it provided an improvement.

Before the changes, you can see below that ground segmentation classifies ground points as non-ground points when passing over a speed bump.
Before the 3D distortion corrector (ground segmentation test): video link is here

I observed that this error no longer occurs after checking out the 3D distortion corrector branch.
After the 3D distortion corrector (ground segmentation test): video link is here

Which method did you use to test it? If there is another method I can use, I am happy to test it as well.
Based on these tests, I believe your branch offers an improvement. Are you planning to create a PR from this branch and add it to Autoware?

Note: By default, the distortion corrector node uses the /sensing/vehicle_velocity_converter/twist_with_covariance topic as input, but the angular velocity fields of this topic appear to be empty, while your code also uses these fields. That's why I ran the tests with the /localization/twist_estimator/twist_with_covariance topic instead.

@meliketanrikulu meliketanrikulu self-assigned this May 14, 2024
@xmfcx
Contributor

xmfcx commented May 14, 2024

@vividf please create the PR from the branch, since it improves Autoware's performance.

@vividf
Contributor

vividf commented May 15, 2024

@meliketanrikulu @xmfcx
Thanks for testing!
I created a draft PR, #7031. Could you test it again to make sure there are no issues, and post the results to the PR as well?

Thanks!

@meliketanrikulu
Contributor

meliketanrikulu commented Jul 3, 2024

The related PR has been merged: #7137
We can close this issue. Thanks for your work @vividf!
