navigation stack performance on groovy #11

Closed
ahendrix opened this issue Nov 29, 2012 · 7 comments

@ahendrix

On the PR2 running Fuerte, move_base uses 60-70% CPU while actively navigating and performs very smoothly. On Groovy, move_base uses upwards of 110% CPU while actively navigating and fails to meet the 10 Hz control loop rate, resulting in choppy motion and poor navigation.

@chadrockey

This could be a result of the 'Tock' for REP 117 in Hokuyo_node:
ros-drivers/hokuyo_node@2218ce5

If the NaNs and Infs are not being filtered out correctly, the maps could be very large or have points in 'la la land'.

http://ros.org/wiki/rep_117/migration

Check to ensure that points are being filtered properly (note the LaserScan fields are range_min and range_max):
// Laser scans and ranges: valid measurements lie within [range_min, range_max].
for (size_t i = 0; i < msg.ranges.size(); i++) {
  // NaN compares false against everything, and +/-Inf falls outside
  // [range_min, range_max], so both are rejected by this check.
  if (msg.range_min <= msg.ranges[i] && msg.ranges[i] <= msg.range_max) {
    // Accept this measurement
  }
}

Even better... update the software to interpret -Inf and +Inf.
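
A minimal sketch of what interpreting the special values could look like, per REP 117 (-Inf means an object closer than range_min, +Inf means no return out to range_max, NaN means an erroneous reading); the function name and the handling in each branch are illustrative, not the nav stack's actual code:

#include <cmath>
#include <cstddef>
#include <vector>

// Classify each range reading according to REP 117 semantics.
void interpretRanges(const std::vector<float>& ranges, float range_min, float range_max)
{
  for (std::size_t i = 0; i < ranges.size(); ++i) {
    const float r = ranges[i];
    if (std::isnan(r)) {
      // Erroneous measurement: discard it.
    } else if (std::isinf(r) && r < 0.0f) {
      // -Inf: object detected closer than range_min; treat as an obstacle at minimum range.
    } else if (std::isinf(r)) {
      // +Inf: no obstacle out to range_max; the ray can be used to clear free space.
    } else if (range_min <= r && r <= range_max) {
      // Ordinary valid measurement.
    }
  }
}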

If this is the issue and becomes a time-critical blocker, you can set the 'use_rep_117' parameter to false until Hydromedusa. The parameter is planned to stop working in Hydro.

@ahendrix
Author

ahendrix commented Dec 2, 2012

Running the nav stack on the PR2 with use_rep_117 set to false doesn't seem to be helping.

@hershwg
Contributor

hershwg commented Dec 11, 2012

I narrowed it down to change db18d56. Before that, motion was smooth. After that, very jerky.

It was not clear to me that the fuerte nav stack actually took only 60-70% CPU: when I ran the fuerte nav stack in the groovy environment, it used about the same CPU as the groovy version (around the 110% Austin mentioned), but the robot moved smoothly and worked fine.

I did a "git bisect" on the nav stack between fuerte and groovy versions and found that the above-mentioned commit was where the change occurred. It was not gradual... the revisions before that commit all ran smoothly with similar CPU usage and the revisions after were jerky-all-the-time.

I have not yet determined the mechanism causing the bug. The change itself looks like a straightforward move of code from one place to another.

@hershwg
Contributor

hershwg commented Dec 11, 2012

Well, I have discovered one major performance-related factor in the change.

That change introduces several calls to malloc() and free() during each call to lineCost(). They come from the std::vector that is used to accumulate grid cell locations in FootprintHelper::getLineCells(). Looking into stl_vector.h you can (eventually) see that the capacity starts at 0, then grows to 1, 2, 4, 8, and so on. Every time the vector reallocates, it has to copy all of the previous contents into the new memory and then free the old memory.
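
A small standalone program (not part of the nav stack) makes that growth pattern visible by printing every capacity change as elements are pushed:

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
  std::vector<int> cells;
  std::size_t last_capacity = cells.capacity();
  for (int i = 0; i < 64; ++i) {
    cells.push_back(i);
    if (cells.capacity() != last_capacity) {
      // Each capacity change is a new allocation plus a copy of all existing
      // elements; with libstdc++ the capacity roughly doubles each time.
      std::printf("size=%zu: capacity grew %zu -> %zu\n",
                  cells.size(), last_capacity, cells.capacity());
      last_capacity = cells.capacity();
    }
  }
  return 0;
}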

@chadrockey

I looked at the two uses and where the vector is filled. It looks like the ideal data structure just needs to be iterable (the points are never accessed out of order). That seems to be the only constraint, seeing as elements are never removed and you know the number of elements before assigning them (numpixels in FootprintHelper::getLineCells).

Does performance improve if you call reserve on the vector before the for loop?
http://www.cplusplus.com/reference/vector/vector/reserve/

pts.reserve(numpixels);  // allocate once up front instead of growing the vector incrementally
for (int curpixel = 0; curpixel <= numpixels; curpixel++){
...
}

If that does help, it's probably worth reserving space in the other two vectors in FootprintHelper as well (lines 123 and 188).

@ghost ghost assigned hershwg Dec 11, 2012
@hershwg
Contributor

hershwg commented Dec 11, 2012

I am actively working on this. I've changed it to a LineIterator class, which should have performance similar to the "before" code. First, though, I will test it with the change simply reverted. PRK is down right now, so there's a bit of a delay in testing.
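
For reference, a minimal sketch of the idea (the interface and details below are illustrative, not necessarily the committed code): a Bresenham-style iterator yields grid cells one at a time, with no heap allocation at all.

#include <cstdlib>

// Illustrative Bresenham-style line iterator: walks the grid cells between
// (x0, y0) and (x1, y1) without storing them in a container.
class LineIterator
{
public:
  LineIterator(int x0, int y0, int x1, int y1)
    : x_(x0), y_(y0), x1_(x1), y1_(y1),
      dx_(std::abs(x1 - x0)), dy_(std::abs(y1 - y0)),
      sx_(x0 < x1 ? 1 : -1), sy_(y0 < y1 ? 1 : -1),
      error_(dx_ - dy_), done_(false) {}

  bool isValid() const { return !done_; }
  int getX() const { return x_; }
  int getY() const { return y_; }

  void advance()
  {
    if (x_ == x1_ && y_ == y1_) { done_ = true; return; }
    int e2 = 2 * error_;
    if (e2 > -dy_) { error_ -= dy_; x_ += sx_; }
    if (e2 <  dx_) { error_ += dx_; y_ += sy_; }
  }

private:
  int x_, y_, x1_, y1_, dx_, dy_, sx_, sy_, error_;
  bool done_;
};

// Usage: accumulate the cost of every cell along the line.
// for (LineIterator it(x0, y0, x1, y1); it.isValid(); it.advance())
//   cost += cellCost(it.getX(), it.getY());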

@hershwg
Contributor

hershwg commented Dec 12, 2012

Fixed by 450ba60. Subsequent changes implement the LineIterator refactoring, which has also been tested on a PR2 and drives smoothly.

@hershwg hershwg closed this as completed Dec 12, 2012
KaijenHsiao pushed a commit to KaijenHsiao/navigation that referenced this issue Sep 19, 2014
…test_merge

Improvements to the global planner and move base
bfjelds pushed a commit to bfjelds/navigation that referenced this issue Nov 17, 2017
…anning#11)

* Make laser subscription best effort

Otherwise it can't subscribe to best effort publishers (and is generally more appropriate)

* Use rcutils logging macros

* Start a parameter server for external nodes to interact with

* Reduce diff with upstream