
Fix odometry and acceleration processing pipeline #753

Merged: 6 commits, May 25, 2022
Conversation

HaoguangYang
Contributor

Fix odometry pose transform when odometry origin is different from base_link.
Fix covariance transformation when Odometry msg child_frame is not base_link.
Fix acceleration transformation, considering angular acceleration and centripetal acceleration.
Fix gravity removal for IMU with relative orientation (initial pose as [0,0,0,1]).
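The acceleration fix described above follows from standard rigid-body kinematics. Below is a minimal sketch (hypothetical helper names, plain `std::array` instead of the Eigen types r_l actually uses) of subtracting the tangential and centripetal terms that an IMU mounted at a lever arm picks up. Gravity removal additionally requires the sensor's absolute orientation and is omitted here.

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Minimal 3-vector helpers (illustrative sketch, not r_l's actual code).
using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3 &a, const Vec3 &b) {
  return {a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]};
}

// For a sensor mounted at lever arm r (body frame), rigid-body kinematics
// gives a_sensor = a_base + alpha x r + omega x (omega x r). The sensor
// therefore measures extra tangential (alpha x r) and centripetal
// (omega x (omega x r)) terms that must be subtracted to recover the
// acceleration of base_link:
Vec3 accelerationAtBase(const Vec3 &a_sensor, const Vec3 &omega,
                        const Vec3 &alpha, const Vec3 &r) {
  const Vec3 tangential = cross(alpha, r);
  const Vec3 centripetal = cross(omega, cross(omega, r));
  return {a_sensor[0] - tangential[0] - centripetal[0],
          a_sensor[1] - tangential[1] - centripetal[1],
          a_sensor[2] - tangential[2] - centripetal[2]};
}
```

For example, a sensor 1 m ahead of the origin on a robot spinning at 1 rad/s about z measures a pure centripetal acceleration toward the origin, and the correction recovers zero acceleration at base_link.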

@ayrton04
Collaborator

Thank you for the PR.

Can you clarify some of these? Everything is marked as a fix, but these appear to be feature enhancements. Are you fixing any specific bugs that you can detail? Maybe just generally provide use cases that are solved by these changes.

Fix odometry pose transform when odometry origin is different from base_link

I need to go over the PR, but what will happen if the odometry data is not provided in a frame relative to base_link, e.g., if it comes from a motion capture system that has a fixed transform to the robot's world frame (direct transform from mocap_frame to odom)?

@HaoguangYang
Contributor Author

HaoguangYang commented May 11, 2022 via email

@ayrton04
Collaborator

I haven't forgotten about this. Just need to make time for review. Thanks again for the PR and details.

@ayrton04
Collaborator

ayrton04 commented May 25, 2022

Q: "what will happen if the odometry data is not provided in a frame relative to base_link, e.g., if it comes from a motion capture system that has a fixed transform to the robot's world frame (direct transform from mocap_frame to odom)?"
A: This PR, as well as #728, fixes exactly that issue.

Hang on, I want to back up on this point. I'm sure you are aware of all of this, but for completeness, I want to go over it here.

r_l has always made the assumption that pose data will arrive in a frame that can be transformed to its world frame (e.g., odom) directly, without passing through the world_frame -> base_link_frame transform. So, for example, if I have an external motion capture system that generates my pose in the mocap frame, and I know the transform from mocap to odom, then I can publish the mocap->odom transform, and the EKF can use messages from the motion capture system directly. It will transform the messages from the mocap frame to the odom frame and fuse them correctly.

This has been supported from day 1; the reason I use this specific example is because I have done this exact thing. It also works if you want to fuse, e.g., Gazebo's "perfect" state messages into the robot's pose. But again, for externally-referenced world frames, this already worked.
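As a sketch of that idea (planar and with hypothetical types; r_l does this in full 3D via tf2), expressing a mocap-frame pose in odom with a known, fixed mocap->odom transform is just a pose composition:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical planar (x, y, yaw) pose; a sketch, not r_l's actual API.
struct Pose2D { double x, y, yaw; };

// Express a pose given in frame b in frame a, using the fixed transform
// from b to a (e.g., mocap -> odom): T_a_c = T_a_b * T_b_c.
Pose2D transformPose(const Pose2D &t_odom_mocap, const Pose2D &p_mocap) {
  const double c = std::cos(t_odom_mocap.yaw);
  const double s = std::sin(t_odom_mocap.yaw);
  return {t_odom_mocap.x + c * p_mocap.x - s * p_mocap.y,
          t_odom_mocap.y + s * p_mocap.x + c * p_mocap.y,
          t_odom_mocap.yaw + p_mocap.yaw};
}
```

With the mocap frame rotated 90 degrees from odom and offset by (1, 0), a mocap pose of (1, 0) lands at (1, 1) in odom, which the filter can then fuse directly.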

Likewise, the package makes the assumption that any velocity data will be reported in a frame that can be transformed to base_link directly (i.e., without having to "pass through" the odom->base_link transform in the tree).

If a nav_msgs/Odometry message gets fed to the filter, the pose data gets transformed from the message's frame_id into the filter's world_frame (e.g., odom), and the twist data gets transformed from the message's child_frame_id into the filter's base_link_frame (e.g., base_link). They are then fused directly.
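One reasonable treatment of that twist transform can be sketched as follows (yaw-only mounting rotation and hypothetical names; r_l handles full 3D rotations): the sensor-frame velocity is rotated into the body frame, and if the sensor sits at a lever arm r, the extra omega x r it sweeps out is removed.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;

Vec3 cross(const Vec3 &a, const Vec3 &b) {
  return {a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]};
}

// Rotate a sensor-frame vector into the body frame with a yaw-only
// mounting rotation (a simplification for illustration).
Vec3 rotateYaw(double yaw, const Vec3 &v) {
  const double c = std::cos(yaw), s = std::sin(yaw);
  return {c * v[0] - s * v[1], s * v[0] + c * v[1], v[2]};
}

// v_base = R * v_sensor - omega x r, where r is the sensor's position in
// the body frame and omega is the body angular velocity: the sensor sweeps
// out an extra omega x r that must be removed to recover base_link's
// linear velocity.
Vec3 twistAtBase(double mountYaw, const Vec3 &v_sensor,
                 const Vec3 &omega, const Vec3 &r) {
  const Vec3 v = rotateYaw(mountYaw, v_sensor);
  const Vec3 sweep = cross(omega, r);
  return {v[0] - sweep[0], v[1] - sweep[1], v[2] - sweep[2]};
}
```

For a robot spinning in place at 1 rad/s with an aligned sensor 1 m ahead of the origin, the sensor reports a sideways velocity even though base_link is not translating, and the correction recovers zero linear velocity.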

One of the main shortcomings of r_l has always been the case where someone mounts a sensor with its own world frame to the robot itself. This gets complicated, because the sensor's offset from the robot's origin will be defined relative to base_link. So now we have pose data that is coming from a frame relative to base_link, and that pose data also has to get transformed using the transform that the EKF itself is producing (i.e., odom->base_link).

#728 solved this, I believe, by making it so that differential mode, when applied, would use the child_frame_id of the odometry message. This would allow us to define a base_link-><child_frame_id> transform. Then the differential mode would differentiate two consecutive pose measurements from that sensor (which would yield velocity in the sensor frame), and then that sensor-frame velocity would get transformed by the base_link-><child_frame_id> transform, and we'd get robot body-frame velocity.
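The differentiation step can be sketched like this (planar and hypothetical, for illustration only): two consecutive sensor-frame poses are differenced to get a world-frame velocity, which is then rotated into the sensor's own frame to yield a body-frame twist.

```cpp
#include <cassert>
#include <cmath>

// Planar sketch of differential mode (simplified; not r_l's actual code).
struct Pose2D { double x, y, yaw; };
struct Twist2D { double vx, vy, wz; };

Twist2D differentiate(const Pose2D &prev, const Pose2D &curr, double dt) {
  // World-frame displacement rate between the two measurements.
  const double vx_w = (curr.x - prev.x) / dt;
  const double vy_w = (curr.y - prev.y) / dt;
  // Rotate into the sensor frame using the current heading (inverse
  // rotation), giving a body-frame velocity.
  const double c = std::cos(curr.yaw), s = std::sin(curr.yaw);
  return {c * vx_w + s * vy_w,
          -s * vx_w + c * vy_w,
          (curr.yaw - prev.yaw) / dt};
}
```

That sensor-frame velocity would then be mapped through the base_link-><child_frame_id> transform, as described above, to obtain the robot body-frame velocity that gets fused.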

Collaborator

@ayrton04 ayrton04 left a comment

OK, took a deeper read through this, and it looks sound. Thanks for the PR!

state(StateMemberVyaw));
angular_acceleration_ = (newStateTwistRot - lastStateTwistRot_)/dt;
const Eigen::MatrixXd &cov = filter_.getEstimateErrorCovariance();
for (size_t i = 0; i < 3; i ++)
Collaborator

Just for consistency, it might make sense to change 3 to ORIENTATION_SIZE.

Contributor Author

Suggested change
for (size_t i = 0; i < 3; i ++)
for (size_t i = 0; i < ORIENTATION_SIZE; i ++)

@ayrton04
Collaborator

I see #728 was actually broader in scope than I realised, and isn't just targeting differential mode.

@ayrton04 ayrton04 merged commit d428330 into cra-ros-pkg:noetic-devel May 25, 2022
HaoguangYang added a commit to HaoguangYang/robot_localization that referenced this pull request Jun 19, 2022
state(StateMemberVyaw));
angular_acceleration_ = (newStateTwistRot - lastStateTwistRot_)/dt;
const Eigen::MatrixXd &cov = filter_.getEstimateErrorCovariance();
for (size_t i = 0; i < 3; i ++)
Contributor Author

Suggested change
for (size_t i = 0; i < 3; i ++)
for (size_t i = 0; i < ORIENTATION_SIZE; i ++)

const Eigen::MatrixXd &cov = filter_.getEstimateErrorCovariance();
for (size_t i = 0; i < 3; i ++)
{
for (size_t j = 0; j < 3; j ++)
Contributor Author

Suggested change
for (size_t j = 0; j < 3; j ++)
for (size_t j = 0; j < ORIENTATION_SIZE; j ++)

ayrton04 pushed a commit that referenced this pull request Aug 3, 2022
* compiling version of commit #753 and #728 ported to ros2 rolling

* format fixes

* fix time source disagreement by converting to seconds beforehand, append parameter usage, fix linting

* fix linting and uncrustify
@rokusottervanger

rokusottervanger commented Nov 24, 2023

I came here because I suddenly see drift in my odometry state because my IMU is at a small but non-negligible angle (maybe the robot is at that angle, or the IMU is not mounted perfectly straight).

Looking at this PR, I find it odd that:

  • There's suddenly math being performed on the state in the ros_filter layer (calculating the angular acceleration by means of differentiation)?
  • There's suddenly state being held outside of the filter_'s state (specifically the angular acceleration)?
  • The filter assumes a constant linear acceleration and constant angular velocity model, but now the angular acceleration is also used?
