[WIP] Use tf2_transform data to get current robot pose in NavFn #618
Basic Info
Description of contribution in a few bullet points
Rationale
I tried to make a standalone version of the getCurrentPose function, but it had so many dependencies that it seemed like the wrong approach to solving the problem. All those dependencies were already available in the costmap, and it seems reasonable to ask the world model/costmap where the robot is; it's essentially the equivalent of a "You are here" indicator on a real map.
I was going to add a separate service call to the World Model to get the robot pose, but we currently have latency issues with service calls. Instead, I added the pose to the GetCostmap service call. It seems unlikely that you would request the costmap without also needing to know where you are on the map at the same time, so it seemed reasonable to combine them until our service call latency issues are fixed.
NOTE
Updated to use a separate service to get the pose. This still needs to be retargeted to the lifecycle branch.
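Whichever service ends up exposing it, the underlying pose lookup is just a tf2 buffer query. A minimal sketch of what a world-model-hosted getCurrentPose could do internally (assuming ROS 2, a `map` global frame, and a `base_link` robot frame; the function name and frame IDs here are illustrative assumptions, not this PR's actual code):

```cpp
// Hedged sketch: recover the robot's pose in the map frame from tf2.
// Frame names ("map", "base_link") are assumptions, not taken from this PR.
#include <geometry_msgs/msg/pose_stamped.hpp>
#include <tf2/exceptions.h>
#include <tf2_ros/buffer.h>

bool getCurrentPose(const tf2_ros::Buffer & tf_buffer,
                    geometry_msgs::msg::PoseStamped & pose)
{
  try {
    // The latest map -> base_link transform is the robot's pose on the map.
    const auto t =
      tf_buffer.lookupTransform("map", "base_link", tf2::TimePointZero);
    pose.header = t.header;
    pose.pose.position.x = t.transform.translation.x;
    pose.pose.position.y = t.transform.translation.y;
    pose.pose.position.z = t.transform.translation.z;
    pose.pose.orientation = t.transform.rotation;
    return true;
  } catch (const tf2::TransformException & ex) {
    // Transform not available yet (e.g. localization hasn't published).
    return false;
  }
}
```

Keeping this inside the costmap/world model means the tf2 buffer, frame names, and transform tolerance stay in one place instead of being duplicated by every planner that needs a "You are here" answer.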