Fix retcode on fail #701
Conversation
Codecov Report
```
@@            Coverage Diff             @@
##             main     #701      +/-   ##
==========================================
- Coverage   32.98%   32.97%   -0.01%
==========================================
  Files         302      302
  Lines       35165    35170       +5
  Branches     6142     6144       +2
==========================================
  Hits        11598    11598
- Misses      23016    23020       +4
- Partials      551      552       +1
==========================================
```
Flags with carried forward coverage won't be shown.
Totally makes sense!
I tried running both locally and remotely. For local, I tried going through the selection screens (OK) and running `checkbox-cli run testplan-with-a-job-that-fails`. Works as expected in both cases.
However, this patch does not seem to take other kinds of failing jobs (e.g. crashed) into account. See inline comments.
```diff
@@ -158,7 +159,7 @@ def invoked(self, ctx):
             )
             while time.time() < deadline:
                 try:
-                    self.connect_and_run(ctx.args.host, port)
+                    return self.connect_and_run(ctx.args.host, port)
                     break
```
Do we still need this `break`?
Of course not! Removed in 591fcb7.
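For reference, a minimal standalone sketch of the resulting retry loop; the function shape, exception type, and timeout are assumptions, not the exact code in `invoked`:

```python
import time

def wait_and_run(connect_and_run, host, port, timeout=60):
    """Retry until the remote service is reachable, then run."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            # Returning here propagates the remote run's return code,
            # which is why the bare `break` is no longer needed.
            return connect_and_run(host, port)
        except ConnectionError:
            time.sleep(1)  # service not reachable yet; retry
    raise SystemExit("could not connect to {}:{}".format(host, port))
```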
```python
job_state_map = (
    self._sa.manager.default_device_context._state._job_state_map
)
self._has_anything_failed = any(
    job.result.outcome == "fail" for job in job_state_map.values()
)
```
What about crashed jobs? (I know that somewhere in Checkbox there is a mapping that maps every possible state to one of pass/skip/fail; I'm just not sure whether it applies here.)
OK, I checked by running a test plan that has only one crashing job, and Checkbox returns 0. Oops :)
Yeah, a very valid concern.
I think it can be done as a follow-up, as the code provided here is critical to land fast.
+1, provided that we actually follow up on this to implement returning 1 for any bad job outcome.
For a list of the possible outcomes, see checkbox-ng/plainbox/abc.py#L194-L219.
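A minimal sketch of what that follow-up could look like, treating anything outside a known-good set of outcomes as a failure. The outcome strings and the choice to treat skips as good are assumptions; plainbox/abc.py has the authoritative list:

```python
# Outcomes considered non-failing; assumed values, see plainbox/abc.py
# for the authoritative list.
GOOD_OUTCOMES = {"pass", "skip"}

def has_anything_failed(job_state_map):
    """Return True if any job ended in an outcome outside the good set,
    covering crashed jobs as well as plain failures."""
    return any(
        job.result.outcome not in GOOD_OUTCOMES
        for job in job_state_map.values()
    )
```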
Description
This patch makes Checkbox return a meaningful return code based on the outcome of the test plan being run.
Previously, Checkbox remote always returned 0, even when some jobs failed. With this patch, Checkbox returns 0 only when everything passed.
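As a quick illustration of the intended behavior, a hypothetical check from Python (the test plan name is the one used in the review above):

```python
import subprocess

# Run a test plan that contains a failing job; with this patch the
# return code should be non-zero instead of the previous hardcoded 0.
result = subprocess.run(
    ["checkbox-cli", "run", "testplan-with-a-job-that-fails"]
)
assert result.returncode != 0
```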
Tests
This PR includes a Metabox test that checks this mechanism.