
Proposal: make ament test return non-zero on test failure #140

Closed
gerkey opened this issue Mar 8, 2017 · 5 comments

@gerkey
Contributor

gerkey commented Mar 8, 2017

Following discussions on other issues (e.g., #139 and ros-infrastructure/ros_buildfarm#367), I'd like to propose a change in the behavior of ament: Make ament test return non-zero if:

  • there was a failure in the build step (the test step first runs the build step); or
  • there was a failure to invoke a test; or
  • there was a failure reported by a test that was successfully invoked.

At present, ament test returns non-zero only for the first two conditions, but not the third.

Rationale: ament test is a step that aggregates underlying steps and as such it should follow the common pattern of ORing together the return codes of those underlying steps to produce its own return code.
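To make the intended aggregation concrete, here is a minimal, self-contained sketch in Python (the helper and its inputs are illustrative stand-ins, not the actual ament_tools code):

```python
# Illustrative sketch only; the real ament_tools implementation differs.
# Each *_rc argument stands in for the process-style return code of an
# underlying step (0 == success, non-zero == failure).

def aggregate_return_code(build_rc, invoke_rc, results_rc):
    """OR the underlying steps' return codes: any failure fails `ament test`."""
    if build_rc != 0:
        return build_rc       # the build step failed, tests were not run
    # failure to invoke a test, or a failure reported by an invoked test
    return invoke_rc or results_rc


# Today `ament test` would return 0 in this case; under the proposal it
# returns non-zero because an invoked test reported a failure.
print(aggregate_return_code(build_rc=0, invoke_rc=0, results_rc=1))  # -> 1
```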

Pro:

  • It's easy to check whether anything went wrong with the test step without having to subsequently run ament test_results (this command does return non-zero if any of the results include test failures).

Con:

  • The caller can no longer distinguish based on the return code between failure to invoke a test and a test that reports a failure.

(Note that you can still distinguish a build failure from a test failure by running ament build and ament test separately.)

My opinion is that the benefits of the pro outweigh the costs of the con. Thoughts?

p.s. I'm proposing this change only for ament, which is a new tool that we can modify fairly freely. The same change could in principle be made to catkin, but there seems to be a risk of disruption in doing that.

@gerkey gerkey added the enhancement label Mar 8, 2017
@dirk-thomas
Contributor

For any kind of automation it is crucial to distinguish a failure to run something from tests returning non-zero return codes. In my opinion it is a strict requirement that our tools satisfy this. With the proposal it is no longer possible to identify whether the invocation of the tests failed for any reason, nor is there any other way to achieve that (e.g. by invoking more commands). That prevents such use cases from operating correctly (e.g. sending a notification email when tests fail, but not when something in the infrastructure / invocation failed), which I consider an unacceptable "cost" that can't be outweighed by any benefit.

On the other hand, with the current state your use case is easily possible by invoking two commands.

There is an alternative which continues to meet the requirement stated above while also satisfying your desire to get a non-zero return code for any kind of failure: the ament test command could offer an argument to choose the desired return code behavior.

@gerkey
Contributor Author

gerkey commented Mar 8, 2017

There is an alternative which continues to meet the requirement stated above while also satisfying your desire to get a non-zero return code for any kind of failure: the ament test command could offer an argument to choose the desired return code behavior.

Ah, that sounds like the perfect compromise! How about this change instead: change the default behavior of ament test according to my earlier proposal, but simultaneously offer an option to produce the current behavior? Automated systems would supply the option to get the current behavior, but regular users who don't supply that option would get the new behavior.
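A rough sketch of what such an option could look like (the flag name and the step results below are hypothetical, made up for illustration and not the actual ament_tools code):

```python
# Hypothetical sketch of the compromise; the option name '--ignore-test-failures'
# and the stand-in step results are made up for illustration.
import argparse
import sys


def main(argv=None):
    parser = argparse.ArgumentParser(prog='ament test')
    parser.add_argument(
        '--ignore-test-failures', action='store_true',
        help='return zero even when invoked tests report failures '
             '(the current behavior, for automated callers)')
    args = parser.parse_args(argv)

    # Stand-ins for the real step results (0 == success).
    build_rc, invoke_rc, results_rc = 0, 0, 1

    if build_rc != 0 or invoke_rc != 0:
        return build_rc or invoke_rc   # build / invocation failures always propagate
    if args.ignore_test_failures:
        return 0                       # opt-in: keep today's behavior
    return results_rc                  # new default: test failures make the verb fail


if __name__ == '__main__':
    sys.exit(main())
```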

@mikaelarguedas
Contributor

mikaelarguedas commented Mar 8, 2017

I am not sure if this is relevant here, but I do remember that we decided to ignore the return code in the first place because all our successful builds with test failures were marked as failed on the buildfarm (e.g. Build Status). So https://github.com/ament/ament_tools/pull/110/files was made so that we could tell the difference between a failing job and a job with failing tests.

EDIT: duplicate of previous comment

@dirk-thomas
Contributor

👍 I don't mind what the default behavior is as long as the mandatory use cases are still possible.

@gerkey
Contributor Author

gerkey commented Mar 15, 2017

Fixed by #141.

@gerkey gerkey closed this as completed Mar 15, 2017