Fix Servo interruption #2314
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master    #2314      +/-   ##
==========================================
- Coverage   57.96%   57.91%   -0.04%
==========================================
  Files         327      327
  Lines       25638    25633       -5
==========================================
- Hits        14859    14844      -15
- Misses      10779    10789      +10
These changes look OK; I'm trying to clarify what they fix. Also, you need to run clang-format to get the build to pass on Travis.
We could also discuss whether we need both of these:
The difference seems to be, Edit: I think I'll go ahead and delete it. If reviewers disagree strongly, I'll just revert that commit.
I'm working on making the tests more repeatable now. The issue seems to have been caused by initializing ROS in one executable shared by multiple tests. Running a separate executable for each test has been 100% reliable in what I've seen so far. I'll work on reducing duplicated code next.
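For context, the one-executable-per-test pattern described above might look roughly like this in a catkin CMakeLists.txt, so that ROS initialization never carries state between test suites. This is a hedged sketch; the target and file names are illustrative, not the actual ones in this PR.

```cmake
# Each test file becomes its own gtest executable, so ros::init and any
# node-level state are set up fresh per test suite (names are hypothetical).
catkin_add_gtest(test_servo_interruption test/test_servo_interruption.cpp)
target_link_libraries(test_servo_interruption ${PROJECT_NAME} ${catkin_LIBRARIES})

catkin_add_gtest(test_servo_startup test/test_servo_startup.cpp)
target_link_libraries(test_servo_startup ${PROJECT_NAME} ${catkin_LIBRARIES})
```

The trade-off is duplicated boilerplate in each test's main(), which is why a shared header for the common setup (as mentioned below) helps.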
OK, this is working well locally. Sorry to add so many lines, but getting these tests to pass reliably has been really difficult. I stuck the shared setup in a header file (
I really like what you are doing with the tests here. However, I feel that they don't belong in this PR. Would you mind splitting the test changes into a separate PR from the one that changes the stopping logic?
It seems that some issue caused by the logic change prompted the test refactoring. Could you help me understand what the reason was?
Thanks for asking this question! It turns out the deletion of timer_.stop() was what was causing the flaky tests. I put that back in the destructor and the tests are back to normal. (Originally I thought the tests on
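To illustrate why calling stop() in the destructor matters: if a periodic timer is still live while its owner is being destroyed, the callback can race against member teardown. Below is a minimal, ROS-free sketch of the pattern; PeriodicTimer and its thread-based implementation are illustrative stand-ins, not the actual ros::Timer API used by Servo.

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-in for a periodic timer whose callback runs on another
// thread. Stopping it in the destructor guarantees the callback thread is
// joined before any members it touches are destroyed.
class PeriodicTimer
{
public:
  template <typename Fn>
  PeriodicTimer(std::chrono::milliseconds period, Fn callback)
    : running_(true), worker_([this, period, callback] {
      while (running_)
      {
        callback();  // fires periodically until stop() is called
        std::this_thread::sleep_for(period);
      }
    })
  {
  }

  // Without this stop(), a callback could fire mid-destruction -> flaky tests.
  ~PeriodicTimer()
  {
    stop();
  }

  void stop()
  {
    running_ = false;
    if (worker_.joinable())
      worker_.join();  // safe to call stop() more than once
  }

private:
  std::atomic<bool> running_;  // declared before worker_ so it is initialized first
  std::thread worker_;
};
```

Usage: construct the timer with a callback, and rely on scope exit (or an explicit stop()) to guarantee no further callbacks run, which is the behavior the restored timer_.stop() in the destructor provides.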
LGTM, tested with a customer setup.
Tweak the interruption logic