navsat_transform documentation diagram #550

Closed · mabelzhang opened this issue Mar 30, 2020 · 6 comments

@mabelzhang (Contributor)

Feature request

Feature description

We were trying to use navsat_transform with EKF nodes for localization, and we had a lot of trouble figuring out how the nodes should be set up from the existing documentation. We have been following this answer from Tom on ROS Answers, and the circular dependency between the second EKF node and the navsat_transform node didn't quite make sense to us.

So we thought a clear diagram in the documentation would help future users set it up more easily.

Implementation considerations

@clalancette came up with the following diagram of the setup. We're thinking it would be good to add something like it to the navsat_transform_node documentation page:

https://ibb.co/jZhYj8c

Could we get some feedback as to whether this is the correct understanding of how these nodes are supposed to work together?

If the maintainers think this is useful to have in the documentation, let us know what requirements there are for diagrams / images, and I can submit a PR after we get this right.

@SteveMacenski (Collaborator) commented Mar 31, 2020

Admittedly, I haven't used navsat transform before, nor have I used R_L for global fusion. I just spent the last 45 minutes reading the source code to understand and demo what is happening. Frankly, this is something I should have done a long time ago, so your question nudged me to figure it out. Way to help me procrastinate figuring out my optimization problem issues 😉. I've come up with the following, of which the most important parts are:

  • The robot is happy, so it is smiling
  • The GPS receiver is really good, so it works under an umbrella in the rain

[image: Steve's diagram of the dual-EKF + navsat_transform setup]

I'm going to assume you understand the left-hand side, with the usual odom->base transform and inputs.

On the right-hand side, we have the raw sensor data for odometry (IMU, VIO, wheel encoders, etc.) and the raw GPS data. We feed the raw GPS, plus the IMU for heading information, into the navsat transform. We also require the global map->odom transform (together with the rest of the map->base_link tree that the left-hand side supplies) so that there is a complete TF tree from the global map frame down to the robot's frame. The global odometry can also be used for heading instead of the IMU if you would like. The navsat transform then puts the GPS information into the global frame and hands it to the global EKF, which produces the map->odom transform. (Note: I messed up the image; the map->odom TF comes from the position-filtering EKF, not the navsat transform.)
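For concreteness, here is a minimal sketch of that wiring as a ROS 1 stub node, assuming navsat_transform_node's documented default topic names (gps/fix, imu/data, and odometry/filtered in; odometry/gps out); it only declares the connections and does none of the actual fusing:

```python
# Sketch of the data flow described above (ROS 1 / rospy).
# The real work happens inside navsat_transform_node; this stub
# just makes the inputs, output, and feedback loop explicit.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Imu, NavSatFix

def main():
    rospy.init_node('navsat_wiring_sketch')
    # Inputs: raw GPS fixes, IMU (for heading), and the global
    # EKF's filtered output (the feedback half of the loop).
    rospy.Subscriber('gps/fix', NavSatFix, lambda msg: None)
    rospy.Subscriber('imu/data', Imu, lambda msg: None)
    rospy.Subscriber('odometry/filtered', Odometry, lambda msg: None)
    # Output: GPS measurements re-expressed in the global frame,
    # which the global EKF fuses to produce map->odom.
    gps_odom_pub = rospy.Publisher('odometry/gps', Odometry, queue_size=10)
    rospy.spin()

if __name__ == '__main__':
    main()
```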

The "first" global filtered odometry from the EKF on the righthand side will just give you back out the identity pose. The EKF will dish out a filtered output at the given frame rate even if nothing else is valid. The question that's unclear to me is that before the GPS comes in, all the IMU/odometry will probably be just dropped because you can't convert from the imu_sensor_frame to map yet. Seems like a pretty inefficient use of compute, but there's no chicken-and-egg problem there, technically.

When the navsat transform gets new datum data, it takes the very next global odometry update and populates transform_world_pose_ (using the odometry heading if the IMU isn't being used for heading). It then uses the datum update and transform_world_pose_ to compute the GPS-to-base-frame transform. That transform is handed to the global EKF to chew on as the new global-frame update.

This basically means that at any given time there's a strong chance of an impedance mismatch: if the filter is updating at, let's say, 30 Hz and your GPS updates at 1 Hz, then all but one of those filter updates get thrown out, and the filter is propagating second-old data until the next update arrives. As long as both are running at similar rates there isn't much loss of data, but there's certainly the opportunity for it.

> we had a lot of trouble figuring out how the nodes should be set up from the existing documentation

This is the ROS community; we're good at making stuff, not documenting stuff 😉. Sometimes you just have to read the source code. It's only ~1000 lines and 2-3 threads...

@Timple (Contributor) commented Mar 31, 2020

@mabelzhang We fuse the imu/data and the wheel/odometry into the global EKF, not the local odometry.
Not saying this is better; it's just what we do. I can imagine this reduces the delay by one sample, because the data is not preprocessed. Perhaps at the cost of CPU (processing the same data twice).

@ayrton04 (Collaborator)

So I'll also take a crack at explaining this. There are a lot of moving pieces, I'll admit, and perhaps there's an opportunity to simplify it all. Steve hit a lot of the main points, but I'll just clarify where I can.

First, like Steve, I'll assume that the odom/continuous EKF is straightforward. We fuse continuous data sources in it.

Before we move to the map/global EKF, we'll talk through what's required to turn GPS data into data that can be fused into your EKF. For this scenario, we'll assume that we have a robot with wheel encoders, an IMU (with a magnetometer), and a GPS device. Also, in this scenario, our robot will start out inside a GPS-denied area, and then move outside into a GPS-friendly location.

We're interested in a transform from lat/long coordinates to our robot's map/global frame. I didn't want to deal directly with lat/longs, so the first thing navsat_transform_node does is convert the lat/long coordinates into UTM coordinates. This gives us our GPS location in a Cartesian coordinate frame. What we want to know is the position and orientation of our robot's world frame in the UTM grid (or its inverse). That is what navsat_transform_node gives us. To do that, it will need the following data:

  1. We need to know the orientation and position of the map/global frame. However, (a) the robot might start out with a non-zero heading in the map frame, even if it has GPS right away, and (b) if, as in our scenario, the robot is in a GPS-denied area for some time, it will get its first GPS reading when the robot is at some arbitrary pose in the world. We need that pose so that it can be inverted to recover the origin. Therefore, navsat_transform_node needs to know the robot's map/global-frame pose, which means one of its inputs is the output of the map/global EKF. You could also look up the transform using tf (and if that makes the code easier to use/understand, then making that change is fine with me).

  2. The UTM grid is aligned with X facing east and Y facing north. The GPS reading gives us our position in that coordinate frame, but it does not give us our orientation in that coordinate frame. So we need to adjust the IMU's heading so that 0 radians is aligned with the UTM X axis. That's where the offset parameters come in. (Both requirements are illustrated in the sketch after this list.)
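Here is a hedged 2-D sketch of how those two requirements combine into the transform; it assumes the third-party `utm` Python package for the coordinate conversion and numpy for the algebra, and all names and numbers are hypothetical rather than robot_localization's actual code:

```python
# Assembling the UTM->map transform from requirements (1) and (2).
import numpy as np
import utm

def se2(x, y, yaw):
    """Homogeneous 2-D transform (rotation plus translation)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# First usable fix, converted to Cartesian UTM coordinates.
easting, northing, _, _ = utm.from_latlon(44.969, -93.235)

# (1) The robot's map-frame pose at that moment, taken from the
# map/global EKF's output.
T_map_base = se2(12.0, -3.0, 0.4)

# (2) The robot's pose in the UTM grid: position from the fix,
# heading from the IMU after the yaw-offset correction.
T_utm_base = se2(easting, northing, 1.1)

# Invert and compose: the transform navsat_transform_node is after.
T_map_utm = T_map_base @ np.linalg.inv(T_utm_base)
```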

Once navsat_transform_node has the required data, it generates the required transform and starts spitting out, for each GPS message, a pose that is consistent with your map/global-frame EKF. That data is then fed back to the map/global EKF. At this point, navsat_transform_node no longer needs IMU data.
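Continuing the hedged sketch above (the function name is hypothetical), each subsequent fix is simply pushed through that now-frozen transform:

```python
# With T_map_utm frozen, every new fix becomes a map-frame
# position the map/global EKF can fuse directly.
def fix_to_map(lat, lon, T_map_utm):
    easting, northing, _, _ = utm.from_latlon(lat, lon)
    x, y, _ = T_map_utm @ np.array([easting, northing, 1.0])
    return x, y
```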

I think you could also drop the odometry sub at this point (but we don't...d'oh), unless you have publish_filtered_gps turned on. That option makes navsat_transform_node also generate high-frequency filtered GPS messages from your state estimate (i.e., it goes the other way). However, I'm re-reading the code and wondering why we care whether the odom is updated here. The things I would do with more free time...

In any case, you can see the circular dependency: navsat_transform_node initially needs the output of the map/global EKF, so that it can generate poses to send back to it.

There are a number of parameters that change this behavior. For example, if you are fusing a UTM-friendly heading into your map/global EKF, then you can get the required heading for (2) from the same data that you get in (1). That's why we have the use_odometry_yaw parameter.
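A minimal sketch of that heading-source switch (the parameter name is real; the helper and the simplification are assumptions, not the node's actual code):

```python
import math

def quat_to_yaw(x, y, z, w):
    """Yaw (ZYX convention) from a quaternion."""
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def utm_heading(use_odometry_yaw, odom_quat, imu_quat, yaw_offset):
    # With use_odometry_yaw, the global EKF's orientation is already
    # UTM-aligned, so it is used directly; otherwise the IMU heading
    # is corrected so that 0 rad lines up with the UTM X axis.
    if use_odometry_yaw:
        return quat_to_yaw(*odom_quat)
    return quat_to_yaw(*imu_quat) + yaw_offset
```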

If anyone is interested in doing so, I think the logic from navsat_transform_node could probably get rolled into the EKF, and we could just have a gps input. It would save a lot of headache.

In any case, @mabelzhang, your diagram looks correct to me.

@SteveMacenski (Collaborator)

From looking over it, every time a GPS fix is received, the node sets a flag to grab a new global EKF pose. That would be required for the setting that takes heading from the EKF rather than the IMU.

I could also imagine the feedback loop being required if you have multiple GPS units or multiple global-reference sensors.

@SteveMacenski (Collaborator)

Swift Nav also has a decent diagram: https://docs.swiftnav.com/wiki/ROS_Integration_Guide

@mabelzhang (Contributor, Author)

Thank you everyone for the explanations! I finally read everything here and understood what's going on.

Swift Nav indeed has a very nice diagram. Since I can't steal it and it'd be nice for the tutorial in this repo to be standalone, I made another in the PR.

SteveMacenski pushed a commit that referenced this issue Jun 5, 2020
* add diagram for navsat_transform

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>

* add diagram to tutorial

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>
ayrton04 pushed a commit that referenced this issue Dec 4, 2020
* add diagram for navsat_transform

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>

* add diagram to tutorial

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>
ayrton04 added a commit that referenced this issue Mar 22, 2021
* navsat_transform diagram to address #550 (#570)

* add diagram for navsat_transform

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>

* add diagram to tutorial

Signed-off-by: Mabel Zhang <mabel@openrobotics.org>

* Fix frame id of imu in differential mode, closes #482

* Added local Cartesian option to navsat_transform

* Fix navsat_transform issue with starting on uneven terrain

* Fix typo in navsat_transform_node.rst (#588)

"prodives" -> "provides"

* Fix sign error in dFY_dP in transfer function jacobian

* Making repeated state publication optional (#595)

* PR feedback

* Fixing build issues

* Fixing stamp issues

* Linting and uncrustifying

Co-authored-by: Mabel Zhang <mabel.m.zhang@gmail.com>
Co-authored-by: Jeffrey Kane Johnson <jeff@mapless.ai>
Timple pushed a commit to nobleo/robot_localization that referenced this issue Apr 13, 2021

* navsat_transform diagram to address cra-ros-pkg#550 (cra-ros-pkg#570)