Consecutive odom estimates with exact same timestamp published? #336
If I had to guess, I'd say it's these three lines that are causing trouble in this case: https://github.com/cra-ros-pkg/robot_localization/blob/kinetic-devel/src/filter_base.cpp#L213 Again, I'm just guessing at this point, but is it possible that you occasionally get a measurement whose timestamp is out-of-sequence? If so, the filter will either (a) skip prediction and simply correct based on that value, or (b) if you have smooth_lagged_data enabled, rewind and re-apply the measurement history. I could add a parameter that causes the filter to always do a prediction from the last measurement time to the current time.
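The branching described above can be sketched as a small decision function. This is a hypothetical illustration, not the actual filter_base.cpp code; all names are invented for clarity:

```python
# Hypothetical sketch of the out-of-sequence handling described above;
# NOT the actual robot_localization filter_base.cpp logic.
def process_measurement(last_stamp, meas_stamp, always_predict=False):
    """Decide how to handle one measurement given the last processed stamp."""
    if meas_stamp > last_stamp:
        return "predict_and_correct", meas_stamp  # normal, in-sequence case
    # Out-of-sequence (or identical) stamp:
    if always_predict:
        # The proposed parameter: run a prediction step anyway.
        return "predict_and_correct", last_stamp
    return "correct_only", last_stamp  # skip prediction, just correct
```

The key point is the middle branch: without the extra parameter, an out-of-sequence stamp means no prediction happens, so two consecutive outputs can carry the same time.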
Ah ok, I see. I can see how this could happen given the nature of ROS communication, with no hard guarantees on the order of arrival of messages from different sources. The example is fusing data from IMU and base odometry, so it certainly can't be ruled out that timestamps arrive out of order. I guess there is no trivial "right way" of doing things, but adding that option might be good (if it doesn't clutter things too much).
I don't think it'll clutter anything. I'll add it. 👍
Addressed in #381. I realize this is quite old, but if you still have need of it and have a chance to try it out, can you let me know how it goes? Thanks!
Closing. Please reopen if you find this isn't fixed. |
This problem still exists. I have an ekf_localization_node that has wheel odometry coming in at 15 Hz and IMU data coming in at 100 Hz. The node's "frequency" parameter is set to 100. Almost immediately, I get messages published to odometry/filtered that have duplicate timestamps. Decreasing "frequency" to 50 increases the amount of time before a duplicate timestamp is detected. cartographer_ros notices the duplicate timestamps, and exits with the following error message:

[FATAL] [1589675649.666559249, 22585.620000000]: F0516 17:34:09.000000 18205 map_by_time.h:43] Check failed: data.time > std::prev(trajectory.end())->first (621356193855500000 vs. 621356193855500000)
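A quick way to confirm the symptom offline is to scan a recording of the stamps from odometry/filtered for consecutive duplicates. This is an illustrative helper, not part of robot_localization or cartographer:

```python
# Scan a list of header stamps (e.g. dumped from odometry/filtered) for
# consecutive duplicates -- the exact condition that trips cartographer's
# strictly-increasing-time check. Illustrative helper only.
def find_duplicate_stamps(stamps):
    """Return the indices i where stamps[i] equals stamps[i - 1]."""
    return [i for i in range(1, len(stamps)) if stamps[i] == stamps[i - 1]]
```

Note that cartographer's check requires strictly increasing times, so even a single pair of equal stamps is enough to abort.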
And I assume you have predict_to_current_time enabled?
I didn't. After setting predict_to_current_time to "true", duplicate timestamps occur far less frequently, but one showed up after driving the simulated robot for about a minute.
Hrm, that makes me suspect something is up with the simulated time server, then. With predict_to_current_time enabled, we obtain the current time here: robot_localization/src/ros_filter.cpp, line 584 in 799ad44.
We then take the difference between that time and the last measurement time, and use that to project the state forward (robot_localization/src/ros_filter.cpp, line 689 in 799ad44).
That stamp then gets put into the message header (robot_localization/src/ros_filter.cpp, line 424 in 799ad44).
So the only way I can see this happening (at least right now) is if two successive requests for the current ROS time return the same value. You said your node runs at 100 Hz, which is quite fast. What's the update rate on your sim set to? Gazebo publishes the simulated time to /clock: http://gazebosim.org/tutorials/?tut=ros_comm
So if you ran a node at 1000 Hz that just checked the current ROS time, you'd expect to see the same stamp repeated, since /clock only advances at the sim's update rate.
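The effect of sampling a coarse simulated clock faster than it advances can be illustrated without ROS at all. In this sketch, clock_hz stands in for the rate at which the sim publishes /clock and sample_hz for how often a node reads the time; both names are invented for the illustration:

```python
# Illustration (no ROS needed) of why reading a simulated clock faster than
# it advances yields repeated timestamps.
def sample_sim_clock(clock_hz, sample_hz, duration_s):
    """Return the timestamps a node would observe over duration_s seconds."""
    samples = []
    for i in range(int(duration_s * sample_hz)):
        wall = i / sample_hz
        # The simulated clock only advances in 1/clock_hz increments:
        sim_time = int(wall * clock_hz) / clock_hz
        samples.append(sim_time)
    return samples
```

Reading at 1000 Hz from a clock that ticks at 100 Hz produces runs of ten identical stamps in a row.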
That sounds plausible. The Gazebo simulation is running at 100 Hz, and ekf_localization_node's frequency is also set to 100 Hz. I commented out both of ekf_localization_node's input sources (wheel odometry and the IMU), and it still produces duplicate timestamps. When predict_to_current_time is false, the duplicate timestamps happen very frequently. When predict_to_current_time is true, they happen rarely, but even one of them will cause cartographer_node to exit. My current thinking is that I should set predict_to_current_time to true, and there is no bug in ekf_localization_node; it just sometimes gets identical times from Gazebo. I've written a node that gets messages from odometry/filtered, modifies the timestamps (if necessary) to prevent duplications, and passes the messages on to cartographer_node.
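The relay described above boils down to nudging any non-increasing stamp forward by a small epsilon before republishing. A minimal sketch of that core logic (the function name and epsilon value are illustrative choices, not part of any package):

```python
# Core logic of a timestamp-deduplicating relay (hypothetical names): ensure
# every outgoing stamp is strictly greater than the one before it.
def dedupe_stamp(last_stamp, stamp, epsilon=1e-6):
    """Return a stamp guaranteed to be strictly greater than last_stamp."""
    if last_stamp is not None and stamp <= last_stamp:
        return last_stamp + epsilon
    return stamp
```

In a real relay node, last_stamp would be the stamp of the previous message forwarded to cartographer_node.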
The node won't (or shouldn't, anyway) run with zero input sources. We only mark the filter as initialized once a measurement has been processed (robot_localization/src/filter_base.cpp, line 258 in 799ad44; see also robot_localization/src/ros_filter.cpp, line 648 in 799ad44).
Finally, when it comes time to publish, we try to build the output message (robot_localization/src/ros_filter.cpp, line 1894 in 799ad44), but that method returns false if the core filter is not yet initialized (robot_localization/src/ros_filter.cpp, line 429 in 799ad44).
So I'm not seeing how that was possible (though that doesn't mean there isn't a bug!). After commenting out the input sources, did you restart your roscore? The parameter server will have retained those input settings. Commenting them out won't delete them from the parameter server. |
No, I didn't; I did not realize that the parameters are not deleted from the server. I restarted roscore with the input sources still commented out, and messages are no longer written to odometry/filtered.
I noticed an issue in a controller where we use the difference of odom timestamps to compute some variables. It appears that very rarely, robot_localization publishes odometry estimates that have the exact same timestamp as the preceding odometry message published. If users are not careful, this can lead to some interesting results (division by zero etc.).
Is this behavior intentional (or at least "tolerated") or a bug? Given that there are two estimates for the exact same time I'd certainly consider this a bug.
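Until the duplicate stamps are resolved at the source, a consumer that divides by the stamp difference can guard against a zero (or implausibly small) dt. A defensive sketch of such a guard, with illustrative names:

```python
# Defensive sketch for consumers of odometry: skip the update when the stamp
# difference is zero or too small, instead of dividing by it.
def safe_rate(value_delta, t_prev, t_now, min_dt=1e-9):
    """Return value_delta / dt, or None when dt is too small to trust."""
    dt = t_now - t_prev
    if dt < min_dt:
        return None  # caller should skip this cycle or reuse the last rate
    return value_delta / dt
```

Returning None forces the caller to decide explicitly what to do on a degenerate cycle, rather than silently propagating an infinite or NaN rate.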
/edit: This is on ROS Indigo/64bit/.debs 2.3.1-0trusty-20161027-103827-0700
Here's an instance of the issue: