
Fix for #523: global planner called much more times than expected #524

Merged: 2 commits merged into ros-planning:indigo-devel on Aug 2, 2017

Conversation

@corot (Contributor) commented on Sep 29, 2016

So now the planner is called 1 + max_planning_retries times on the first try, and then 1 + max_planning_retries times after every recovery behavior; or, alternatively, for up to planner_patience seconds after every recovery. I think this is the expected operation.
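For example (just to make the arithmetic concrete, assuming max_planning_retries = 2 and two recovery behaviors configured): the planner would be called at most 1 + 2 = 3 times before the first recovery, and then at most 3 more times after each recovery, unless planner_patience expires first.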

Seems to work fine now, but the move_base.cpp code is quite messy, so it probably needs at least one independent tester.

@ivan-gavran commented on Oct 5, 2016

@corot, would you mind explaining the fix? From the diff, it seems to be some combination of locks, but I don't understand it completely.

I tested the behavior, and it is as described: the planner is called only 1 + max_planning_retries times for each recovery behavior.

@corot (Contributor, Author) commented on Oct 5, 2016

Hi @Gergia, sorry for not being clear. The whole issue is explained in more detail in issue #523 and this ROS Answers post.

The change is quite minimal once whitespace changes are ignored. Without the change, note that there is nothing preventing the planner thread from calling the planner over and over even when we are not in the PLANNING state. With the change, the planning thread only calls the planner while the current state is PLANNING, up to max_planning_retries + 1 times or for the time allowed by planner_patience. Otherwise, it blocks in the wait_for_wake / planner_cond_.wait(lock) call, saving CPU.
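A minimal sketch of what the planner thread loop does after this change (this is only an illustration of the idea, not the actual diff; member names such as planner_cond_, planner_goal_, planning_retries_, max_planning_retries_ and planner_patience_ follow the upstream move_base members as far as I recall, and the plan-publication branch is omitted):

// Fragment of MoveBase::planThread(), simplified for illustration
bool wait_for_wake = false;
boost::unique_lock<boost::recursive_mutex> lock(planner_mutex_);
while (n.ok())
{
  // Suspend here (releasing the mutex) until a new goal or a wake-up timer
  // arrives; while blocked in wait() the thread uses no CPU.
  while (wait_for_wake || !runPlanner_)
  {
    planner_cond_.wait(lock);
    wait_for_wake = false;
  }

  // Copy the goal under the lock, then release it for the (possibly slow) planner call.
  geometry_msgs::PoseStamped temp_goal = planner_goal_;
  lock.unlock();

  bool gotPlan = n.ok() && makePlan(temp_goal, *planner_plan_);

  if (!gotPlan && state_ == PLANNING)
  {
    // Keep retrying only while we are still PLANNING, bounded by both limits.
    lock.lock();
    planning_retries_++;
    if (planning_retries_ > uint32_t(max_planning_retries_) ||
        ros::Time::now() > last_valid_plan_ + ros::Duration(planner_patience_))
    {
      state_ = CLEARING;       // give up and let the recovery behaviors run
      runPlanner_ = false;     // next iteration blocks in planner_cond_.wait(lock)
    }
    lock.unlock();
  }

  lock.lock();                 // re-take the mutex before the next iteration
}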

Hope it is clearer now.

@corot (Contributor, Author) commented on Oct 17, 2016

Hi all, is anything preventing this PR from getting merged?

@corot (Contributor, Author) commented on Nov 9, 2016

PR also on kinetic

@DLu DLu added the move_base label May 30, 2017
@mikeferguson mikeferguson self-assigned this Aug 1, 2017
@mikeferguson (Contributor) left a review:

@corot -- apologies for the slow responses on these. We should really keep the same locking structure as before, which requires a pair of simple changes. It would be good to also apply these same changes to #539.

@@ -619,54 +619,59 @@ namespace move_base {
wait_for_wake = false;
}
ros::Time start_time = ros::Time::now();

//time to plan! get a copy of the goal and unlock the mutex
geometry_msgs::PoseStamped temp_goal = planner_goal_;
lock.unlock();
@mikeferguson (Contributor) commented on this diff:

This lock.unlock() call should really move with the temp_goal declaration: we should get the goal WHILE under lock. Of course, that means we need to add an "else { lock.unlock() }" to the "if (state_==PLANNING)" below, which isn't ideal (it's convoluted), but it is at least thread-safe.
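In other words, the structure being suggested here, together with the second inline comment below, is roughly the following (a sketch of the pattern only, not the final code):

// Read the goal while the planner mutex is still held, and make sure the
// mutex is released on every path out of the branch.
if (state_ == PLANNING)
{
  geometry_msgs::PoseStamped temp_goal = planner_goal_;  // copied under lock
  lock.unlock();

  // ... run the global planner on temp_goal ...
}
else
{
  lock.unlock();  // the "else { lock.unlock() }" added per the comment below
}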

}
lock.unlock();
}
@mikeferguson (Contributor) commented on this diff:

This is where we would add "else { lock.unlock() }".

I did test this lock change, and it looks all good (also tested the whole feature in general).

Like that? Sorry, I edited the PR directly; I hope I didn't break anything!
@mikeferguson (Contributor) commented:

Thanks! LGTM -- I'll wait for CI to complete again, and then squash-merge for ease of cherry-picking into K+L.

corot added a commit to corot/navigation that referenced this pull request Aug 2, 2017
mikeferguson pushed a commit that referenced this pull request Aug 2, 2017
* Implements issue #496: add a max_planning_retries parameter as an alternative to planner_patience to limit the failed calls to global planner
* Includes #498 and #524 for kinetic and above
@mikeferguson mikeferguson merged commit b221580 into ros-planning:indigo-devel Aug 2, 2017
mikeferguson pushed a commit that referenced this pull request Aug 2, 2017
* Implements issue #496: add a max_planning_retries parameter as an alternative to planner_patience to limit the failed calls to global planner
* Includes #498 and #524 for kinetic and above
@DLu (Member) commented on Aug 29, 2017

I think this PR breaks the ability to plan with a positive planner_frequency, since it forces the planner to run only when state == PLANNING, but the intended behavior of move_base is to be able to replan continuously, even when in state == CONTROLLING. Was this tested with a positive planner_frequency?
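For context, the continuous replanning mentioned here is driven by a wake-up timer at the end of each planner-thread iteration; roughly (a simplified sketch based on the upstream move_base code, with wakePlanner being the callback that notifies planner_cond_):

// With a positive planner_frequency_, a timer wakes the planner thread again
// after 1/planner_frequency_ seconds, even while the robot is being
// controlled, so that the global plan keeps getting refreshed.
if (planner_frequency_ > 0)
{
  ros::Duration sleep_time =
      (start_time + ros::Duration(1.0 / planner_frequency_)) - ros::Time::now();
  if (sleep_time > ros::Duration(0.0))
  {
    wait_for_wake = true;
    timer = n.createTimer(sleep_time, &MoveBase::wakePlanner, this);
  }
}

If the planning call itself is gated on state_ == PLANNING, these timer wake-ups no longer produce new plans while state_ is CONTROLLING, which is the breakage described above.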

@corot (Contributor, Author) commented on Aug 29, 2017

I think you are right 🙈

Will try to fix it asap; sorry for the gaffe

@corot (Contributor, Author) commented on Aug 31, 2017

PR #622
OK, I hope this time everything works as intended! @DLu, can you give the PR a try? Thanks, and thanks for catching my mistake!

gerkey pushed a commit to codebot/navigation that referenced this pull request Jan 19, 2018
…ning#539)

* Implements issue ros-planning#496: add a max_planning_retries parameter as an alternative to planner_patience to limit the failed calls to global planner
* Includes ros-planning#498 and ros-planning#524 for kinetic and above
johaq pushed a commit to CentralLabFacilities/navigation that referenced this pull request Mar 30, 2018
…ning#539)

* Implements issue ros-planning#496: add a max_planning_retries parameter as an alternative to planner_patience to limit the failed calls to global planner
* Includes ros-planning#498 and ros-planning#524 for kinetic and above
Combinacijus pushed a commit to Combinacijus/navigation that referenced this pull request Mar 27, 2024
* Update the default ports

Related to ros-navigation/navigation2#4060

* fix tables